I recently tested out running Heartbeat (finally; it took me too long to get to this, but that's another story). It's a cluster resource manager (CRM) which monitors the nodes in a cluster and brings resources up on a surviving node when a node failure is detected.
It's interesting. I wouldn't call it elegant, really; maybe the newer Pacemaker would seem cleaner. But it is simple, and at least in testing it is effective, especially when combined with DRBD, which I posted on earlier. The thing is, where DRBD really seems built for top-notch resiliency and flexibility, Heartbeat seems like it will work, but it's not obvious that you'll get what you expected. Maybe it's just that the documentation on DRBD was really well done.
At any rate, there is great documentation on getting Heartbeat up with DRBD, both from the CentOS wiki and from DRBD. I used heartbeat with drbd83 on CentOS.
What Heartbeat will do is listen for heartbeats from a peer node in a cluster, and if the peer goes down, it will bring up services on the surviving node. There are a handful of important things to keep in mind about this.
First is the heartbeat itself - this is just a stand-alone network connection between two nodes, so if that connection goes down, or the heartbeats get choked out by competing traffic, Heartbeat may well decide you have a node failure. This is not a trivial problem: now that Heartbeat can kill services on an active node, the heartbeat channel is potentially a new point of failure. This is common to many HA configurations, including DRBD itself, though as we know, DRBD will identify split-brain and give you some recourse for repairs. So the suggestion here is to use a dedicated connection, preferably a point-to-point connection with a cross-over cable or a serial port - and a point-to-point connection is not uncommon for clusters, as in this white paper for Microsoft Storage Server.
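To make that concrete, a dedicated heartbeat link is declared in Heartbeat's ha.cf. This is only an illustrative sketch - the node names, interface, and timing values are assumptions, not taken from my setup:

```
# /etc/ha.d/ha.cf - illustrative sketch only
logfacility local0
keepalive 2          # seconds between heartbeats
deadtime 30          # declare a peer dead after 30s of silence
bcast eth1           # dedicated cross-over link for heartbeats
serial /dev/ttyS0    # optional second heartbeat path over serial
auto_failback on
node node1
node node2
```

Listing two paths (bcast plus serial) means a single cable failure won't look like a node failure.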
Then there is the issue of resource management - when the CRM is managing the resources, the usual OS procedures should not. If Heartbeat is in charge of bringing up MySQL, you shouldn't be starting MySQL from the init scripts when the OS boots. The nice thing with DRBD is that its behaviour is consistent with this paradigm - when DRBD resources start up, they are in "secondary" mode and cannot be accessed by the OS. So if you have a file share protected by DRBD, Samba wouldn't be started by the OS, and likewise, that file system would be unavailable when the OS starts (by default, at least). Here, Heartbeat makes a lot of sense. Take a two-node cluster, for example: when the nodes start up, Heartbeat looks for the peer, picks one node to become active, makes the DRBD resource "primary" on that node, mounts the file system, and starts smb. On the stand-by node, 'smb' stays off and the file system is not writeable, which helps ensure consistency.
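In an R1-style cluster, that whole chain (promote DRBD, mount the file system, start Samba) is expressed as one haresources line. A hedged sketch, with made-up node, resource, and mount point names:

```
# /etc/ha.d/haresources - resources started left-to-right, stopped right-to-left
node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 smb
```

And since Heartbeat owns the service, smb would be disabled from normal boot on both nodes (e.g. "chkconfig smb off" on CentOS).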
I guess I could go on about Heartbeat quite a bit, but there's one last thing to mention specifically here, and that's the style of cluster. There are "R1" style clusters, which are simple but limited to two nodes (among other limitations), and then there are CRM-enabled clusters, which are more robust but more complicated to configure. I have only used R1 because it was sufficient for my needs - two nodes, one known "preferred" node, and keeping the cluster configuration in sync "manually" wasn't onerous. But CRM-enabled clusters are more interesting because you can add more nodes, the cluster configuration is distributed automatically, and so on.
The one thing I haven't really touched on is the quorum, which those more familiar with cluster management will know better than I do. Basically, with Heartbeat in an R1-style cluster, there isn't going to be a quorum: your configuration is maintained pretty much manually, services only run on one node, and so on. In CRM-style Heartbeat and other application clusters, the quorum is what all the nodes agree on, and it is typically stored in a file. On Windows Storage Server and other clusters, the quorum is stored on the shared disk, which means any problem there means the cluster fails. With Heartbeat, the quorum file is copied among the nodes, but this is susceptible to becoming out of sync - for example, if there is a communication failure on the heartbeat channel leading to a split brain. Or this is my limited understanding of it, at least. At any rate, it is a problem, and it isn't trivial when working with active/active or multi-node configurations.
Thursday, 16 June 2011
Friday, 13 May 2011
Cache in Openfire
In the course of troubleshooting the office Jabber server the other day, I came across some interesting info about the various caches that Openfire has. If you log on to the admin console of your Openfire server and go to the cache summary page, you can see the usage and effectiveness of your various caches. Specifically, I found that a couple of caches were full - Roster and VCard. The Roster cache was limited to 0.50MB by default, it seemed, and its effectiveness was less than 20% at the time.
It is a fairly common issue, and it has been discussed in the Ignite Realtime forums. The solution posted there is to set a couple of system properties to override the defaults:
cache.username2roster.size
cache.vcardCache.size
Both of these are given in bytes. The post in that thread says to go to 5MB; I found that my VCard cache didn't need to be much bigger than the default, and the Roster cache only needed 2 or 3 MB.
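So, entered as system properties in the admin console, it would look something like the following. The sizes here are just examples in line with what worked for me, not recommendations:

```
# Openfire admin console -> Server -> System Properties (values in bytes)
cache.username2roster.size = 3145728    # 3 MB roster cache
cache.vcardCache.size      = 1048576    # 1 MB vcard cache
```

A restart isn't needed, but the cache summary page won't show the improved hit rate until the caches have had time to warm up.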
After changing this, both cache hit rates are closer to 90%.
Our system is very small (less than a couple hundred users total), so the effect is not big on regular usage. But well worth checking on your server as it is a quick and easy optimization.
Thursday, 5 May 2011
Again with the tapes
In a previous post I said that to get around devices changing their numbering, it was useful to use /dev/tape/by-id instead of the generic /dev/nst0. Unfortunately, I've just learned that this is also imperfect: the device which was previously "scsi-35000e11138aa0001-nst" this time came up as "scsi-35000e11138aa0001". And you can guess how gracefully the software handled that (not at all). I don't know if it was a driver update (possible), or if the device was switched to a different SAS interface (also possible), or maybe just gremlins. Whatever it was, once again I had to reconfigure the software to find the new device ID.
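One workaround sketch: instead of hard-coding one by-id name, glob for either form (with or without the trailing "-nst") and resolve whatever is actually there. Simulated below with a temp directory and a stand-in symlink, since real /dev/tape paths are hardware-specific:

```shell
# Find the first matching by-id entry, whatever suffix it came up with,
# and resolve it to the kernel device name.
dir=$(mktemp -d)
ln -s /dev/null "$dir/scsi-35000e11138aa0001"   # stand-in for the tape symlink
dev=$(ls "$dir"/scsi-35000e11138aa0001* 2>/dev/null | head -n1)
readlink -f "$dev"   # prints the device the symlink points at (/dev/null here)
```

On the real system the glob would be against /dev/tape/by-id/, and the resolved name could feed a check that the configured device still exists before backups run.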
Thursday, 10 March 2011
Flexible Storage Replication
I have recently been looking quite a lot at different storage setups, including storage replication, and have so far mostly relied on running rsync to copy a file system to an appropriate secondary host. For large file systems - either with a lot of files or simply a lot of changing data - this is slow and resource intensive. Not really a problem in some cases, but very problematic if you want your secondary system to have very current data. If you want to cobble something together yourself from commodity hardware, DRBD is an excellent tool and very feature-rich.
First of all, I can't recommend the DRBD User Guide enough. It really lays out the features and usage not just of DRBD but also some common applications you would use alongside like LVM for storage management and Pacemaker and Heartbeat (and others) for clustering.
What DRBD does is basically copy writes to a block device over the network to a replica device - this storage set is called a "resource". Generally, you will have two nodes for each resource. During normal operation, each resource has one "Primary" node and one "Secondary" node, which logically indicates that one node is writing changes to the resource while the other is making a copy. DRBD is generally very slick in handling replication and the status of the nodes. When you configure the resource, you specify an IP address for the replication target, and generally you are going to want this to be a separate network interface from your general data plane - for example, a cross-over cable for a point-to-point connection between the two nodes. If the replication path goes down, DRBD marks the point in time at which it happened and then keeps track of which blocks have changed since then, so when the path comes back up, it has a list of which blocks need to be transferred instead of having to resync the whole device. That's another thing - it does the whole-device sync for you when you first create the device. And you get basically the same behaviour if your secondary node tanks, or the primary, or both nodes for that matter.
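A resource definition ties all of that together in drbd.conf. This is an illustrative drbd83-style sketch only - the resource name, hostnames, backing disks, and addresses are made up:

```
# /etc/drbd.conf - illustrative resource definition (drbd83 syntax)
resource r0 {
  protocol C;                      # fully synchronous replication
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;           # backing block device
    address   10.0.0.1:7788;       # dedicated replication link
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The addresses here would be the point-to-point replication interfaces, not the general data-plane ones.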
Unless both nodes end up in a "primary" state during some overlapping time. If you automatically bring up the secondary node on a primary failure with Pacemaker, for example, but the issue was a path failure and not a node failure, then both nodes may end up in "primary" state. Since DRBD tracks when communication is disrupted, it will detect this problem - a "split brain". You get several options for manual resolution (automatic policies exist as well), including taking the changes of one node or the other, the node with the "most" changes, the node with the "least" changes, the oldest primary, the youngest primary... You may still be stuck losing some data - but you can also keep both nodes in split brain and consolidate externally (e.g. if you have critical data like financial data where you can never drop a transaction).
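For the manual route, once you've decided whose data survives, the recovery looks roughly like this (resource name "r0" is a made-up example):

```
# On the node whose changes you are throwing away (the split-brain "victim"):
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# On the node whose data survives (only if it also dropped the connection):
drbdadm connect r0
```

The discarded node then resyncs the affected blocks from the survivor; its diverged writes are gone, which is why you'd copy anything critical off it first.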
DRBD supports three replication "protocols" called, intuitively, A, B, and C. "A" is asynchronous: a write unblocks as soon as the local device finishes writing. "B" is "semi-synchronous": a write unblocks once the data has reached the peer. And "C" is "synchronous": the write operation is only complete once the data is written to both devices. I found that "A" and "B" got me similar speeds and "C" was slower - but this was not very rigorous testing, and my replication link was 100Mbps through a shared data plane.
One of the things about any of these replication options compared to rsync is that they are generally much nicer on your memory. I find that when rsync scrapes the file system, it effectively nukes the OS's disk cache, such that after rsync runs, users may notice it takes a while to "warm up" again. But replication is not a backup - if a virus eats your files on your primary node, it will eat them on the secondary node too, synchronously or asynchronously - your choice.
If you are using LVM (and you should be; I've posted about LVM before, and so have others), you'll wonder whether to layer DRBD on top of LVM or vice versa. As Chef would say: Use DRBD on top of your LVs. Dramatic over-simplification aside, it does depend on what you are doing. If you are using LVM to carve up a pool of storage, for example for virtualization, and then want the storage layer to replicate your VMs, it may make more sense to create your DRBD volume from physical storage; then it will replicate the whole LVM structure to your replica node. But there are complications, like ensuring LVM will even look at DRBD devices for PVs, and managing size changes, etc. There's a time and a place for everything, and that's college.
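The "DRBD on top of your LVs" layering boils down to something like the following sketch (volume group, LV, and resource names are all made up for illustration):

```
# Carve an LV out of an existing volume group on each node:
lvcreate --name r0-disk --size 100G vg0

# In drbd.conf, the resource's backing disk becomes that LV:
#   disk /dev/vg0/r0-disk;

# Then initialize and bring up the resource on both nodes:
drbdadm create-md r0     # write DRBD metadata onto the new LV
drbdadm up r0
```

One nice property of this ordering is that growing the resource later is an lvextend on both nodes followed by a DRBD resize, rather than repartitioning.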
What else is awesome about DRBD? Offline initialization, a.k.a. "truck based replication" (sneakernet): replicate the node locally, ship it to the remote site, and turn up from there. DRBD Proxy (a paid feature) for when you need to buffer replication over slow or unreliable network links. Dual-primary operation (for use with something like GFS). Three-node operation by layering DRBD on top of DRBD.
Yeah, it's cool. It's Free and free. You can get it stock with Fedora and CentOS (probably Ubuntu and others, but I haven't tried it yet).
And one last thing - you cannot mount a resource that is "Secondary". So if you are getting crazy error messages that you can neither mount nor even fsck your file system, it's probably in Secondary. Don't bang your head against the wall; just do "drbdadm primary <resourcename>". Is clear?
Wednesday, 26 January 2011
Tape Devices for Amanda
I've found a few times now that my tape server can be a bit of a pain about tape devices. Generally, I have Amanda configured to use /dev/nst0, but the tape drive isn't always this device if I attach other devices (at least other drives). So rather than configuring "nst0" and then changing it to "nst1" after a few days of realizing the backups aren't working for some reason, I've started using the "tape/by-id" device instead. So my amanda.conf now shows:
changerdev "/dev/tape/by-id/scsi-1IBM_3573-TL_00X2U78M1255_LL0" # tape device controlled by mtx
tapedev "/dev/tape/by-id/scsi-35000e11138aa0001-nst" # the non-rewind
Is clear?
Saturday, 22 January 2011
No more mail
One more service down - no more mail. All "real" email has been offloaded or canceled, except for uro.mine.nu, which has basically just been sacked. I've closed the ports for SMTP, POP, and IMAP. So now this is it: I'm down to just the web applications that I'm hosting from home.
What I'd like to do is find a dirt-cheap web host for this stuff. None of it is high volume - the old URO forums, which are still used by some of my gamer buddies (I think - I haven't checked in a while), some personal blogs including this one, and a couple of personal-site type things. iweb.ca is still offering hosting for $1.67/mo, so I'd like to give them a shot. We shall see; I'll try a couple of different services over the next couple of months.
Sunday, 26 December 2010
One Less Service on Alia
Alia, the latest in the line of servers hosted at home, has one less service to host today. I've sacked the DNS service, which had in the past provided primary DNS for some of the public domains I used. Those are all now hosted by the DNS providers. I cleaned up the BIND configuration and closed that port so it no longer forwards in from the Internet.
The last thing it was doing was DNS for the local LAN - the internal DNS to look up the printer (mostly). This is easily handled by DNSMasq in DD-WRT, which is basically a tick-box to replace everything that Alia was doing for DNS. And it automatically adds the lookups for statically configured DHCP hosts, so I don't have to set up a host once on the router for DHCP and then again on Alia for DNS.
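Under the tick-box, what dnsmasq is doing is roughly one line per host - a static lease and a DNS name in one place. The MAC, hostname, and address below are made-up examples:

```
# dnsmasq.conf: one line gives the printer a fixed lease AND a resolvable name
dhcp-host=00:11:22:33:44:55,printer,192.168.1.10
```

That single-source-of-truth aspect is exactly what made running BIND alongside the DHCP server feel redundant.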
At this point, it looks like Alia will be the last server I host at home. I've offloaded Jabber and now DNS, leaving SMTP and HTTP. SMTP is almost ready to go: there's only one personal domain for one user using it, and that user may retire the domain; otherwise we can move to Google Apps along with the other email. And that will leave HTTP which, since I can get shared hosting for less than $2 a month, is an easy one to offload. Not free, but shutting off Alia, even as an energy-efficient system (low-power CPU and everything), will save just over $2/mo in electricity consumption.
So we're coming to the end of an era. It really goes to show just how greatly hosted services have improved, and also the breadth of features you can get from consumer products for the home. To have all the trappings of a full network, so easy to use and so cheap, is really amazing.