Subject: Re: [vserver] Poll: High (ish) availability - how are you doing it?
From: Gordan Bobic <gordan@bobich.net>
Date: Tue, 27 Jul 2010 15:32:08 +0100

On Tue, 2010-07-27 at 14:52 +0100, Ed W wrote:

> At present I'm really just looking to achieve a goal of a backup machine 
> which has a warm backup of our data and will be periodically (or better) 
> synced with the main machine.  Considering rsync or DRBD for this role 
> (bit scared of DRBD though).

There is nothing scary about DRBD, I've been using it in production on
many a cluster for years. :)

But if rsync is good enough for you, maybe lsyncd
(http://code.google.com/p/lsyncd/)
would serve your needs?
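
For example, something along these lines (host name and paths are
hypothetical, and do check the invocation against the lsyncd version
you end up with) watches a tree with inotify and drives rsync at a
remote rsync daemon:

# mirror /srv/data into the "data" module of the rsync daemon
# on backuphost, re-syncing whenever files change
lsyncd /srv/data backuphost::data

Since it is only driving rsync underneath, the target can be anything
rsync itself would accept.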

The other things you should consider are, in no particular order:

- Are you planning to run it in active-active or active-passive mode?
You can do both with DRBD (active-active requires a cluster file
system on top, e.g. GFS/GFS2 or OCFS/OCFS2). But if you do that, you
could have a shared-root SSI sort of setup. (See the DRBD sketch after
this list.)

- If you are running it across geographically diverse sites, you will
likely run into a difficult problem: if you lose connectivity between
the sites, you also lose the ability to fence in case of an outage,
which will prevent failover from taking place in a tightly clustered
environment. Plus, WAN ping times will render most proper cluster file
systems unworkable.

- You have another option in terms of file replication - GlusterFS. You
may find it less scary than DRBD+GFS, but be warned, it is slower, and
still suffers from latency issues over a WAN.
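
To illustrate the first point, a minimal DRBD resource definition for
an active-active pair might look something like this (host names,
disks and addresses are made up; check the details against the DRBD
8.x documentation):

resource r0 {
    protocol C;
    net {
        # this is what makes active-active possible
        allow-two-primaries;
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

Drop allow-two-primaries and you have the active-passive variant. With
two primaries you must put a cluster file system (GFS2/OCFS2) on
/dev/drbd0 - a conventional file system mounted on both nodes at once
will be destroyed in short order.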

It all depends on how fast and loose you are willing to play. lsyncd
comes with minimal overheads, but be aware it only triggers an rsync
on close(), so the replica may never be 100% up to date. It will
probably not be much worse than running rsync once an hour or so,
though.
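
For comparison, the once-an-hour approach is just a cron entry (paths
and host hypothetical):

# push /srv/data to the backup box at the top of every hour
0 * * * * rsync -a --delete /srv/data/ backuphost:/srv/data/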

If you need 100% up to date correctness, you can go with DRBD in
active-passive mode, but you'll still need some cluster service (e.g.
RHCS or heartbeat) to handle the failover, and that will still require
fencing to work.
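
With heartbeat v1, for example, the whole failover definition is a
one-liner in /etc/ha.d/haresources (node name, DRBD resource, mount
point, service IP and init script below are all made up for
illustration):

node1 drbddisk::r0 Filesystem::/dev/drbd0::/srv/data::ext3 \
      IPaddr::10.0.0.100 postfix

On failover the surviving node promotes the DRBD resource, mounts it,
brings up the floating IP and starts the service, in that order.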

> I would be interested in any systems 
> people have experienced that do an async replication between two 
> machines, e.g. some process which uses inotify/dazuko/redirfs to watch for 
> changes and then an async queue to merge changes to the second machine - 
> the goal being to have a near realtime replica, but without the 
> performance penalty associated with cluster filesystems?

It sounds like you are looking for lsyncd.

> We would like to have a plan to move up from there to a hot failover 
> option involving multiple machines in the same data center and should a 
> machine fail then we can move the services over to the new machine 
> (target under 10 mins to failover).  However, there seem to be 
> significant issues in automating this and quite a bit of thought seems 
> to be needed for fencing dead machines.  Anyone got any experience to share?

Fencing isn't an issue. You just need to make sure each machine can be
reliably fenced via UPS/PDU, IPMI or whatever other means you have
available. Writing fencing agents isn't particularly hard; I have
written some in the past. If you have a managed switch, you could, for
example, use port disabling for fencing.
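
As a sketch of the managed switch approach (community string, switch
name and ifIndex are invented; your switch's MIB may differ), the
business end of such an agent can be a single SNMP set:

# administratively disable port 12, cutting the dead node off
snmpset -v2c -c private switch1 IF-MIB::ifAdminStatus.12 i 2

The rest is wrapping that in whatever calling convention your cluster
manager expects - fence agents are typically small scripts - and
making sure it cannot fail silently.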

> A more satisfactory solution seems to be active-active clustering.  This 
> is now irrelevant to linux-vserver, but does anyone here have any 
> experience of running active-active clusters of mailservers, webservers, 
> etc?

Yes. Be advised, however, that for most workloads, the performance will
be _lower_ than in a pure fail-over scenario, unless you carefully
design your application stack's file system access patterns (e.g. so
that no two nodes access the same directory).

> I see a GFS question on the list so we have at least one GFS user, 
> what is currently the preferred choice for cluster filesystems for low 
> cost active-active setups?  NFS, Ceph, Gluster, XtreemFs, DRBD, GFS2, 
> OCFS all seem to have pros and cons?

You seem to be scraping the bottom of the barrel here: some of the ones
you list aren't even applicable, and they are not all equivalent.

DRBD is a block device; you'd need to run a cluster file system like
GFS or OCFS on top of it.
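
For example, putting GFS2 on top of a DRBD device comes down to
something like this (cluster name, file system name and journal count
are made up; -j must match the number of nodes):

mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/drbd0

The lock_dlm locking protocol is what ties the file system into the
cluster infrastructure (and therefore the fencing) discussed above.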

NFS only really applies here if you are planning to hang your cluster
from an NFS NAS, which would be a single point of failure unless this
NAS internally implements mirroring to another one like it elsewhere.

GlusterFS has a lot going for it conceptually, such as no need for a
proper cluster infrastructure and fencing devices, or a proprietary file
system (it lives on top of an xattr-capable conventional file system).
Bear in mind, however, that being FUSE based it is slower, and you may
find there are some edge cases where you need to thump things to make
them work (have a google around for my posts about using GlusterFS for
Open Shared Root). I have been using this setup for a while now on one
of my clusters, with a GlusterFS-based root file system, but that may
be a step too far for most people.
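
For a flavour of how GlusterFS is configured, a client-side volume
file mirroring two servers looks roughly like this (host and volume
names are invented, and the syntax is from the 2.x series - check it
against your release):

volume remote1
    type protocol/client
    option transport-type tcp
    option remote-host server1
    option remote-subvolume brick
end-volume

volume remote2
    type protocol/client
    option transport-type tcp
    option remote-host server2
    option remote-subvolume brick
end-volume

# mirror the two remote bricks (AFR)
volume mirror
    type cluster/replicate
    subvolumes remote1 remote2
end-volume

The client writes to both servers itself, which is how it gets away
without shared block storage or a dedicated metadata server.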

> My dream situation would be an active-active mailserver cluster with a 
> machine in the USA and a machine in Europe.  Users would reach the 
> geographically closest.

Considering your ping time across the Atlantic will be 100ms+, I doubt
that any proper clustered solution would yield acceptable performance
(imagine running without disk caches of any sort on disks with a 100ms
seek time, and you won't be far off). And even if that weren't
the problem, you'd still have fencing issues since fencing must be
reliable, or the whole cluster will lock up.

GlusterFS sort of tries to work around the fencing requirement by
turning a blind eye to split-braining, on the basis that it is file
rather than block device based: instead of risking losing the whole
file system to split-brain corruption, you will only lose individual
files. But that is unlikely to be good enough for any critical system.

In terms of geographical load balancing, you could do this with
redirects. Get lists of IP addresses for countries in question, and have
your web server issue a redirect from www.example.com to
www.us.example.com or www.eu.example.com accordingly.
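
With Apache and mod_geoip, for instance, the redirect can be as simple
as this (host names are placeholders, and mod_rewrite plus a GeoIP
database are assumed to be set up):

GeoIPEnable On
RewriteEngine On
RewriteCond %{ENV:GEOIP_CONTINENT_CODE} ^NA$
RewriteRule ^(.*)$ http://www.us.example.com$1 [R=302,L]
RewriteCond %{ENV:GEOIP_CONTINENT_CODE} ^EU$
RewriteRule ^(.*)$ http://www.eu.example.com$1 [R=302,L]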

For mail services this would be harder. For SMTP you could selectively
accept or reject connections, I suppose, on multiple MX-es, e.g.

MX 10 mail.us.example.com
MX 10 mail.eu.example.com

and set up iptables (or better, userspace filtering) rules on both of
the machines so that if the source address is "local", they accept the
connection, but if it should go to the other server, they reject it.
Not ideal, but it should work.
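
As a sketch, on the US box that could be (the EU netblock below is a
placeholder - in reality you would load the country lists into
something like ipset rather than have one rule per block):

# bounce EU clients off this MX so they retry the EU one
iptables -A INPUT -p tcp --dport 25 -s 192.0.2.0/24 \
         -j REJECT --reject-with tcp-reset

Rejecting (rather than dropping) matters: a refused connection makes
the sending MTA try the next MX immediately instead of timing out.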

> This seems like an unsolved problem at present 
> and there don't seem to be any cluster filesystems with satisfactory 
> performance over long RTT links to achieve this?

Sadly - no. There is no theoretical way you can achieve fast locking
over slow links. With something like GFS, you could potentially get
things working acceptably if you organize your workload in such a way
that different nodes don't access the same directories, thus avoiding
lock contention. But you'll still suffer the 100ms ping time penalty on
writes, since DRBD has to commit the data to the remote node before
returning success. And then you still have to figure out how you are
going to handle fencing if a link goes down, since without fencing the
cluster file system will just stop until it can reliably fence the
disconnected node.

Gordan