Friday, May 29, 2009

Apache, PHP, and PostgreSQL on MacOS X Leopard

Z O M G

By default, the vanilla PHP that ships with MacOS X doesn't support PostgreSQL.

:(

How hard could this be?  Compile PHP and point Apache's module configuration at it, right?  So I tried it--which, BTW, wasn't a hassle-free thing to do.  Many, many thanks to "fatalerror" at WordPress for the fantastic guide that got the ball rolling for me.

After the install, I noticed that Apache simply wouldn't load the PHP module, because the dynamic library my PHP build produced was i386-only, while Apple compiled Apache for x86_64.
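You can see the mismatch for yourself with the file command (or lipo -info).  The libphp5.so path below is just illustrative--point it at wherever your build actually put the module:

$ file /usr/sbin/httpd
$ file /usr/local/php5/libexec/libphp5.so

If the first lists x86_64 among its architectures and the second shows only i386, the 64-bit httpd will refuse to load it.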

:((

After many hours of trying to get the PHP build to produce x86_64 targets, I gave up and decided to go whole hog; i.e., not just replacing PHP, but compiling the world for i386--at least, enough of the world for Apache 2.2.11, PostgreSQL 8.3.7, and PHP 5.2.9 (including GD and GNU Readline).
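For the curious, forcing an i386 build mostly comes down to pushing -arch i386 through the compiler and linker flags before running configure.  A rough sketch--the /usr/local/apache2 prefix is just my choice, and some packages need extra coaxing:

$ CFLAGS="-arch i386" CXXFLAGS="-arch i386" LDFLAGS="-arch i386" \
    ./configure --prefix=/usr/local/apache2
$ make
$ sudo make install

Repeat the same trick for PostgreSQL, PHP, and their dependencies, and that's roughly the idea.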

:(((

Since I'm working with other Mac developers, all of whom will need this solution, I decided to make a tarball...which meant making the installation as hassle-free as possible.  Since this stuff has to be done as root (to install to /usr/local, etc.), I wanted to make sure I wasn't doing anything that couldn't be reviewed by someone downloading my bundle.  So, I decided to write a pretty easy-to-read Makefile.
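To give a flavor of what I mean by easy-to-read, here's a heavily simplified sketch--not the actual Makefile in the tarball; the prefixes and configure flags are illustrative:

PREFIX = /usr/local

all: postgres apache php

postgres:
	cd postgresql-8.3.7 && ./configure --prefix=$(PREFIX)/pgsql && make && sudo make install

apache:
	cd httpd-2.2.11 && ./configure --prefix=$(PREFIX)/apache2 --enable-so && make && sudo make install

php: apache postgres
	cd php-5.2.9 && ./configure --prefix=$(PREFIX)/php5 \
		--with-apxs2=$(PREFIX)/apache2/bin/apxs \
		--with-pgsql=$(PREFIX)/pgsql \
		--with-gd --with-readline \
		&& make && sudo make install

The real thing also has to thread the -arch i386 flags through everything and deal with GD's dependencies (jpeglib and friends), which is where it gets less pretty.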

Sounds good...

Then I got started...

Turns out that although most packages compile with a simple ./configure, some don't.  And in those cases, we have to patch some Makefiles here and there so that they'll, for example, actually create their install directories (shame on jpeglib for shoddy install work!!).  Of course, that meant patching things in a way that configure-generated Makefiles will still understand.
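For the record, jpeg-6b's make install is the classic offender here: it assumes its target directories already exist.  Whether you patch its generated Makefile or handle it from the driving Makefile, the workaround amounts to creating the directories first--something along these lines (prefix illustrative):

jpeg:
	cd jpeg-6b && ./configure --prefix=$(PREFIX) --enable-shared && make && \
		sudo mkdir -p $(PREFIX)/bin $(PREFIX)/lib $(PREFIX)/include $(PREFIX)/man/man1 && \
		sudo make install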

:/

But, after many more hours of work, I got the whole Apache/PHP/Postgres world to compile in an almost fully automated fashion on MacOS X.  A simple phpinfo test was successful, and revealed that all the pieces were joined the way I wanted.
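If you want to reproduce the test yourself: drop a one-liner like the one below somewhere the new Apache serves (your DocumentRoot, or ~/Sites), browse to it, and look for a pgsql section--plus GD and readline--in the output.

<?php phpinfo(); ?>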

:)))  --  w00t

The rest of the patches allow the new Apache to be started and stopped through System Preferences (!), and configure the new Apache once it's installed.  This includes turning on support for user directories (via ~/Sites, per Apple's way), running httpd as Apple's existing web user and group (why redo *ALL* those perms?), and using custom log files, so that if you ever want to revert to the vanilla Apache, no log files are lost.
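The httpd.conf side of that looks roughly like the following--Apache 2.2 syntax, with mod_userdir available; the /usr/local/apache2 paths are my install prefix, and the user/group are Apple's (IIRC, _www on Leopard):

User _www
Group _www

UserDir Sites
<Directory "/Users/*/Sites">
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
</Directory>

ErrorLog "/usr/local/apache2/logs/error_log"
CustomLog "/usr/local/apache2/logs/access_log" common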

:)

There is a tiny bit of interaction at the end of the install, for the PEAR install.  Just press [Enter] to accept the defaults (a couple of times, IIRC).  That's it.  Read the README file in the tarball; it will tell you how to test it.  Test it--especially before you comment, if you're going to do that.

:/

The feature I like best is the one that allows this new Apache to be started and stopped through System Preferences.  It just involved editing one line in /System/Library/LaunchDaemons/org.apache.httpd.plist to point to the new binary.  If you ever need to go back to the vanilla Apache, make the obvious change.  If this fact daunts you, don't install this package--you don't know what you're doing.
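For reference, the relevant fragment ends up looking something like this--/usr/local/apache2 is my install prefix, and the surrounding arguments may differ slightly on your system; the point is just swapping the binary path:

<key>ProgramArguments</key>
<array>
        <string>/usr/local/apache2/bin/httpd</string>
        <string>-D</string>
        <string>FOREGROUND</string>
</array>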

;)

* * *

If I haven't scared you away, here's a tiny tarball with my solution: epic-bundle.tar.gz.

Enjoy!!



(P.S. I don't take any responsibility for mucking up your machine--all this stuff runs as root (via sudo)--read the Makefile, understand what it does, and figure out how to undo it before you use it!!)

Tuesday, April 28, 2009

Eclipse & SVN

What a bear Eclipse/Subversive/SVN is.  When using svn+ssh with Eclipse, there is a file which must have group-writable permissions:

/path/to/svn/db/rep-cache.db

This took a long time to debug.  I hope that none of you hit this insane problem, but if you do, the symptom is that Eclipse, via Subversive, will not let you commit files--it complains about writing to a read-only database.
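The fix itself is a one-liner or two--the group name here is hypothetical; use whatever group your committers share, and you may need the same treatment on the repository's db directory itself:

$ sudo chgrp svnusers /path/to/svn/db/rep-cache.db
$ sudo chmod g+w /path/to/svn/db/rep-cache.db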

Saturday, April 25, 2009

Website Development Trials

So much to say, since I've hated website development from the moment I laid eyes on Perl/Kerberos.

Problem 1: Choosing an IDE

This has been an epic decision...In the end, I've settled on the PDT-Eclipse-Zend Debugger all-in-one from Zend.  This doesn't use Zend Studio, which is a $400 product that doesn't appear to do anything significantly better than PDT--at least for the stuff I plan to use it for.

I've had to do a lot to get things working, though, on both the client and server sides.

* Get the all-in-one package.

* Install Eclipse plugins JSEclipse (from Adobe Labs), Subversive (from eclipse.org), SVN connectors (from Polarion).

* Install SVN on the server (1.6.x) from source.  No big deal.

* Install SVN on the client (1.6.x for MacOS X).  Not a big deal once you find the binary .dmg file.

* But, to get the server-side SVN installation working, I had to get SQLite...so I did, and installed it from source.  Again, no biggie.

* Then, SVN would compile and install; a rough sketch of those build steps follows this list.
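Roughly, the server-side builds boiled down to the following--version numbers and the /usr/local prefix are illustrative, and IIRC Subversion's configure also accepts the SQLite amalgamation dropped into its source tree:

$ cd sqlite-3.6.x && ./configure --prefix=/usr/local && make && sudo make install
$ cd ../subversion-1.6.x && ./configure --with-sqlite=/usr/local && make && sudo make install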

[We now take a break so I can rant about why I chose the Zend-provided PDT all-in-one...]

So, the problem with Zend Studio is that it didn't offer much that was a big deal.  Sure, it might be nice to have custom PHP formatting.  But that doesn't seem like a $400 advantage.  Then, using bare Eclipse/Ganymede and trying to install PDT was terrible, because some of the dependencies for the Zend Debugger were broken, and WTF good is PDT without the Zend Debugger?

Exactly.

So, I tried getting the PDT all-in-one from Zend.

That worked.  So, I'm sticking with it.

[...and now we return to our regularly scheduled program.]

JSEclipse is nice, or so it seems.  But you have to hook up .js files to be opened with JSEclipse.  Again, not too bad, once you find the install directions.

Then, I tried installing the Subversive plugin.  I'm familiar with Subclipse (from tigris.org), so I figured, how hard could Subversive be?  Ha!  Well, after installing Subversive, I wasn't able to connect to any SVN repositories because the "SVN Connector" was missing.  I had to hop over to polarion.com to get the connector plugins.  And even then, it didn't work.  No one mentioned that the pure-Java SVNKit was the add-on I needed.

Finally, after all of that, I can connect.  One feature I like so far is the ability to "automagically" check out only the trunk of a project, because it will "detect project structure" and do "smart things" if it sees the usual TTB directories (trunk, tags, branches).

Thursday, January 8, 2009

Editing A2Bn Transforms

Basically, after quite some time in the color management universe--several years, in fact--I finally got around to editing my profiles.  I tried this a few years ago, and didn't get anywhere.  I just remember getting a preview that was completely broken (and appeared super-dark with only dark-red tints).  I gave up on that for a while, settled for the fact that the soft-proof was really only an interpretation, and learned to adjust my "seeing" to accommodate the differences between the soft-proof and the print.

No more.

There are a few things that can be done:

  • Adjust the print profile's reverse (B2A) transform to make the print lighter, matching the on-screen soft-proof.
  • Adjust the print profile's forward (A2B) transform to make the soft-proof darker.
  • Adjust the monitor profile for a much higher gamma value (e.g., greater than 2.2).

I ended up doing the second, but not before trying the first.  Here's what's important to remember: when the printer can print high-density blacks, and when the CMM uses Black Point Compensation (apparently something both Adobe and Qimage can do), there's a damn good chance that the soft-proof will appear too bright.

First, I adjusted the reverse transform to make the print lighter.  This made some amount of sense.  But, I wasn't able to tweak the image that much in the transform.  And, I realized that I wasn't matching the print to the soft-proof; I was just matching the print to its appearance in LR2.  Which is stupid, since LR2 doesn't have soft-proof capabilities, and isn't really appropriate for final image correction anyway (which is kinda revolting, when you think about it).

While editing the B2An transform, I chose the perceptual intent.  That made sense, since I was printing with the perceptual intent anyway (though that's another issue I have yet to really explore).

Then, I learned that by editing the forward transform, the A2Bn transform, I could change just the soft-proof appearance without altering the printed output.  So, I cranked down the lightness curve in the forward transform.

FAIL.

It took a few minutes to understand what was happening.  It quickly dawned on me that while editing the A2Bn transform, I had chosen perceptual because I had done so for the B2An transform.  A moment's thought reveals the flaw.  The CMM takes the image and uses the embedded profile to go from the embedded space (e.g., ProPhoto RGB) to the PCS (e.g., LAB).  Then, it uses the reverse transform to go from the PCS to printer space (i.e., the space described by the printer profiles I've been editing all this time).  This is how a print gets made.  It's why editing the perceptual intent works when I'm printing using the perceptual intent.

But, why doesn't soft-proofing work when I'm editing the A2B0 (perceptual) profile/intent?  Simple.  A soft-proof is a colorimetric rendering of a device's color space.  So, on a soft-proof, we use forward from embedded to PCS, then reverse from PCS to device, then forward again from device to PCS, and finally, reverse again from PCS to monitor space.

It's the 3rd step that's the doozy: the trip from device space back to the PCS and on to the monitor almost certainly ALWAYS uses the colorimetric intent, because it is trying to simulate the device's color EXACTLY within the monitor's available gamut (even with white-point compensation, that's still the relative colorimetric intent).
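Laying the two chains side by side makes the difference obvious (a rough sketch; real CMMs optimize and combine these steps):

print:      image --(embedded A2B)--> PCS --(printer B2A, perceptual)--> printer space
soft-proof: image --(embedded A2B)--> PCS --(printer B2A, perceptual)--> printer space
                  --(printer A2B, colorimetric)--> PCS --(monitor B2A, colorimetric)--> monitor space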

Still an open question: does PS CS3 use the "Monitor Desaturation" when soft-proofing...?

So, when I was editing A2B0, I was doing something that made no sense from the soft-proof perspective; i.e., trying to render an accurate picture of the device space in monitor space perceptually, not colorimetrically.  Once I switched to editing the A2Bn transform with the colorimetric intent, my soft-proofs changed accordingly.

Result?  Prints that not only match in hue, but also now in lightness.

:)))

Corollary?  LR2 (and raw workflows generally) are still cumbersome.  Not building soft-proofing into LR from its conception was a flawed decision.  One can only hope that the image is being color-managed; i.e., that the custom ProPhoto-like space (with a 2.2 gamma) is going into LAB and then back out to monitor space.  I suspect that's the case, since PS CS3 shows the image virtually the same way (on 2 ACDs, with slightly different panels and profiles).

Still, this business of going out to PS CS3 to make the final color/lightness edits, then going back to LR, and *THEN* exporting to Qimage in Parallels is awful.  Why can't Adobe buy Qimage and put its interpolation into LR2?  Why can't Apple let MacOS X hand printer color management to the application if it wants it?  Why can't Epson make drivers with a truly "expert" setting--essentially letting you telnet to a port and stream data to it, where that data stream simply describes the image at the maximum allowed resolution, and the printer just prints at the (x,y) you want, at that resolution, with the pixel data you give it?

Finally, why can't LR2 allow destructive edits by just creating yet another virtual copy and applying the edit?  Such copies could be flagged in a special way that still manages the asset while indicating that the file is now out of raw space and into raster space.  And why not allow plug-ins to operate using this mechanism, the way Aperture does?  One can only hope customer pressure will eventually create a single solution for managing images that don't require extensive editing.

Well, what is 'extensive', you ask?  Where is that line?  Well, if I were asked to make that decision, it's simple: design the API to enable edits that affect the entire image.  Meaning, whatever decision is being made, it must be made to the entire image.  Of course, once you allow per-pixel editing, people could use the mechanism to implement things like layered edits, etc.  And that would subvert the "purity" of the LR2 raw-only workflow.

My rebuttal?  "So the frak what?!?!?!?"

I want to never leave my asset management tool, because I want the editing functions to be built-in.  If that makes my tool lose some of its design purity, so be it.  I want a seamless workflow, from import to print.  Taking the CF card out of the camera and putting it into the reader on my desk is the last inconvenience I should have to deal with before making a print.

There--I've said it.  I hope it keeps Narayen up at night.

Profiling Apple Cinema Display with MonacoPROFILER, Part 2

Ugh.

Creating an ICC v4 profile made MacOS X or the monitor really unhappy.  It left big horizontal red streaks whenever I scrolled an application, whether it be TextEdit or LR2.

:(((

It took a while to figure out, but I suspected that it might be either the version of the profile, the use of the 3D LUT (vs. just the matrix transform profile), or the "gamut compression" flag.  I tried them all, and in the end, just saving the profile as an ICC Version 2 profile worked fine, while preserving the color temp and gamma.

:|

Thursday, January 1, 2009

Profiling Apple Cinema Display with MonacoPROFILER

Again, not too bad.

Used basic profiling mode (no characterization before profiling) on the 30".

Then, using the white luminance, I matched the 23".  This took 2 tries.  The first try was way too bright.  Turns out, shooting for about 2 cd/m^2 above the target, and having the profile tone it down (due to gamut compression on the luminance axis), hit just about perfectly--maybe low by about a candela.  Pretty good for eyeballing the starting point.

The profile has done weird things to the 23".  I think it broke the font AA or something, because everything looks blocky.

Still really want that Eizo, though I can't entirely justify it right now on technical grounds...still...

Print-matching is still an issue, but I have yet to try tuning the output profile by hand--basically, either making the monitor's gamma compress the dark regions, or lightening up the print.  This might be a reasonable argument for the Eizo, if I can get that far.  Turns out the Ilford GSP profiles are pretty good; they lighten up the print without destroying the dense regions of the image.  Requires more investigation...

Wednesday, December 31, 2008

3ware 8506-4LP degraded RAID array rebuild (R10)

Another recent technology victory.

I didn't want to try a hot-swap.  Too important.

# tw_cli
> info c0

This shows that the disk in port 2 (/c0/p2) is DEGRADED.  I wrote down the serial number of the degraded disk, in case the hardware-port-to-driver-port mapping was criss-crossed.

Nope.  Port 2 (3rd one down: 0, 1, then 2) actually housed the "bad" disk.  Put in the new disk.  Used the power-screwdriver at the colo (Bravo, HE) to mount the new one in the housing, and put it back in the array.

:)

Rebooted.  Hit Alt-3, and selected the new disk (not a part of any array) and the degraded array.  This put an asterisk next to both; then I tabbed down to [Rebuild Array].  It marked the array for a rebuild--this part was confusing--and said it would start on [F8].  I assumed (luckily, correctly) that it wanted me to press F8 to start the rebuild.  When I did, it continued the boot.

This is the part that seems like it should come with more explicit direction or description.  I mean, clearly, if you reboot here, that would seem bad--unless the controller wrote something to its firmware, which I guess makes sense.  Still, it would be nice if it told you that no matter what you did, it would be safe.  Still...

:/

Ok...Linux boots.  fsck has its fun.  And, when I get the shell back, I run tw_cli again.  It says the array has status "REBUILDING".

:)

I remembered something about being able to reset priorities during a rebuild...

# tw_cli
> set rebuild c0 1

The range of the last argument is from 1 to 5, inclusive.  5 is the lowest, prioritizing I/O over the rebuild.  1 is the highest, prioritizing the rebuild over I/O.  3ware FTW.

:))