Many cluster applications require, or significantly benefit from, shared storage, so what is the easiest way to provide it? Whether you want High Availability in VMware or distributed computing on a Beowulf cluster, sharing storage across nodes is useful, if not essential.
The first question to ask is whether you can direct-connect or not. Directly connected storage is simpler and more reliable, but it limits the number of connected nodes and the distance from the nodes to the storage. SCSI or SAS storage is fast and reliable. Dell sells an array, the PowerVault MD3000/3200 series, which lets you connect up to 4 hosts on each of 2 controllers, so either 4 hosts with redundant connections or 8 hosts with non-redundant connections. SAS cable length tops out at 10m (steadfast.net), so if you need longer runs you start looking at Fibre Channel or Ethernet.
Fibre Channel is well established, and with link speeds of 4Gbps and 8Gbps these days it's fast. But it's expensive, and it suffers from the complexity problem: unless the stars align and you sacrifice a pig, nothing will work.
iSCSI is surprisingly popular. Encapsulating SCSI commands inside IP packets seems like a lot of overhead, and it really is. ATA over Ethernet (AoE) makes more sense since it only goes as far as layer 2 - what kind of mad person would run shared storage across separate subnets that need to be routed? It just seems like a big slap in the face for storage when you could use a regular file server protocol like NFS.
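If you do go down that road, the client side is at least straightforward. As a rough sketch on Linux (the target IP and IQN below are made up, and it assumes the open-iscsi and aoetools packages are installed):

    # iSCSI: discover targets on the array, then log in to one
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20
    iscsiadm -m node -T iqn.2012-11.example.com:storage.lun0 -p 192.168.10.20 --login

    # AoE: load the driver and list whatever is exported on the local Ethernet segment
    modprobe aoe
    aoe-discover
    aoe-stat

Either way you end up with a new block device on the host, which you then still have to partition, format and keep consistent across nodes yourself.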
Ah, NFS, you've been around for a while. Frankly, it doesn't get any easier or cheaper than NFS. Any system can share NFS - every UNIX flavour and GNU/Linux distribution comes with an NFS server out of the box.
It's a one-line config and you're off to the races. Put LVM underneath and you can expand storage as needed, do snapshots - heck, add DRBD if you want replication.
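To give a rough idea of how little is involved (the paths, network and volume names here are made up, and the exact way to reload the NFS service varies by distribution):

    # /etc/exports - the whole NFS server config for a simple setup
    /srv/vmstore  192.168.10.0/24(rw,sync,no_subtree_check)

    # Tell the NFS server to pick up the new export
    exportfs -ra

    # Later on, grow the backing LVM volume and the filesystem underneath the export
    lvextend -L +100G /dev/vg0/vmstore
    resize2fs /dev/vg0/vmstore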
You can run NFS (or iSCSI) over any Ethernet connection, from just sharing the main interface on the system to dedicated, redundant 10Gbps ports.
NFS is well established, with plenty of tuning options to work around many of the limitations of TCP.
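Most of that tuning lives in the client mount options; something along these lines is typical (the server address, export path and buffer sizes are illustrative, not a recommendation):

    mount -t nfs -o rw,hard,intr,tcp,rsize=65536,wsize=65536 192.168.10.5:/srv/vmstore /mnt/vmstore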
For backups, take your snapshot (LVM or VMware, etc.) and a plain old cp or rsync will be enough to get the files you need off there.
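With LVM that whole dance is only a handful of commands; roughly (the volume group, snapshot size and destination are hypothetical):

    # Snapshot the live volume, mount it read-only, copy it off, clean up
    lvcreate -s -L 10G -n vmstore_snap /dev/vg0/vmstore
    mount -o ro /dev/vg0/vmstore_snap /mnt/snap
    rsync -a /mnt/snap/ backuphost:/backups/vmstore/
    umount /mnt/snap
    lvremove -f /dev/vg0/vmstore_snap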
And there's one other benefit of NFS over the others - if you have a lot of concurrency, like many virtual machines or compute nodes accessing storage simultaneously, the block storage protocols (usually) give you only one lock per storage device, whereas NFS locks per file.
But NFS will certainly require at least one switch, potentially several, plus some complexity with redundant network links and so on. It can also suffer from the same complexity problem: unless the stars align, you're going to have a bad time.
So in short, NFS is hella awesome and you should use it wherever you can. That said, depending on the scale of what you're doing, directly connected storage may be simpler, more reliable, and possibly cheaper.