Granted, this is old news by now (I'd been meaning to comment on it last week, but somehow
things got--or stayed--busy), but I was reading a
press release from
Lineox that they've added
Global File System (GFS) support. It's been a while since I've looked into this sort of
thing. For starters, it looks like Red Hat went and gobbled up
Sistina
while I wasn't paying attention.
I've implemented cluster file systems for test purposes before, but whenever I've looked
into the implementations previously available, there were always enough "gotchas" that
I didn't think they were practical for putting into service out here.
Even so, there's an inherent coolness to a fault-tolerant "floating" storage pool that's
not tied to a single server and supports equal and simultaneous access from whatever
machines are authorized to connect to the data.
Since the aforementioned Sistina gobblement, Red Hat will now sell you their package for
a mere $2,200 (Red Hat Enterprise Subscription required). I've generally had the impression
that Red Hat wants to be the Microsoft of Linux, and at their current enterprise pricing
structure, I'd have to spend some more time with the back of an envelope to figure out
whether Red Hat is still cheaper than going with some flavor of Microsoft's Advanced
Server or Datacenter product line.
Which brings us back to Lineox, who offers what is essentially a clone of Red Hat for the more
cheapskate-friendly price of ten euros. That leaves only the "gotcha" that I bet it's still
like using Red Hat, but these days I imagine everybody but me would consider that to be a feature.
Back when I'd done my playing with cluster file systems, I'd implemented them on a plain-old
wide (single-ended at that point) SCSI bus, which is fundamentally limiting both because of the
number of available IDs once you start parcelling them out between storage devices and host
adapters, and because of the limited physical size that's practical with single-ended SCSI.
Nowadays, however, fibre channel--at least the 1Gbps flavor--is cheaper than dirt. I've got a
few hundred fibre drives, up to 73GB, and plenty of switches and host adapters. Only thing I
don't have is the spare time to play with fun stuff like this. It's sad but true: I don't really
need to implement a shiny new cluster file system for my house, and even a few racks of 73GB
fibre drives on a SAN, while cool, aren't nearly as practical for the video work that I do as a
single, simple fileserver running an array of 250GB IDE drives ($129 apiece this week at CompUSA,
no rebates and no limits).
It's still tempting, but I know it's a project that would take quite a bit of time and not really
get me any significant practical benefits over the fileserver I'm setting up now.