Subject: Re: [vserver] Design questions for large cluster deployment with many guests and shared iSCSI storage
From: Eugen Leitl <eugen@leitl.org>
Date: Tue, 20 Jan 2009 15:10:55 +0100

On Tue, Jan 20, 2009 at 08:40:18AM -0500, John A. Sullivan III wrote:
> Hello, all.  We are in the midst of designing infrastructure for a new
> business launch and VServer is an integral part of the design.  In our
> naive optimism, we are planning for upwards of ten thousand VServer
> guests running on a cluster of VServer hosts.

I don't know whether this is useful to you, but I was thinking
about using PVFS2 on 1U AMD quadcore boxes with four 10 krpm SATA
drives each, running ~100 guests per box (keep at least one hot
spare if you're running more than 10-20 of these boxes, since
otherwise the failure rate will be too high). That would come to
~2 full racks at your usage pattern, at an estimated 60-70 kEUR.
The actual requirement might be just one rack at half the price,
of course, depending on how many resources your average guest
will take.
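
Back of the envelope, so you can see where those figures come from;
the 42U rack size is my assumption, not a number from your mail:

  10,000 guests / ~100 guests per box  =  ~100 boxes
  100 boxes x 1U                       =  ~100U, i.e. a bit over
                                          two 42U racks
  60-70 kEUR / ~100 boxes              =  ~600-700 EUR per box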

You can use fatter servers, but that will mean buffered DIMMs,
and you would still run into an I/O bottleneck. A really fat box
with an InfiniBand SAN could also be an option (lots of eggs in one
quite expensive basket). Cheap iSCSI boxes only do 120-200 MByte/s
(though plenty of transactions per second, if you fill them up with
15 krpm SAS drives), and typically can't even do channel bonding.

Notice this is mostly cat /dev/ass, so caveat lector.
 
> We have been greatly aided in our hardware design by Pogo Linux
> (http://www.pogolinux.com).  I'll shamelessly plug them as they have
> been incredibly helpful and competent. They were one of the most
> expensive bidders and have been well worth every extra penny.  They have
> succeeded in affordably moving us to an iSCSI SAN for our initial
> deployment - something we did not plan to do until at least a year from
> now - thus some design issues have appeared earlier than expected.
> 
> The problem is the use of shared block-device storage in a SAN as
> opposed to a shared file system in a NAS.  The original NFS design was
> for a file system layout something like this:
> 
> /data
>     /client1
>         /CorporateData
>         /ExecutiveData
>         /Users
>             /Graham
>             /John
>             /Paul
>     /client2
>         /HR
>         /Users
>             /Peter
>             /James
> 
> John's guest would have mounted NFS1:/data/client1 to John:/data and
> found his home directory at /data/Users/John.  Graham's guest would have
> mounted NFS1:/data/client1 to Graham:/data and found his home directory
> at /data/Users/Graham.
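
For reference, in that NFS design each guest would have needed just
one NFS mount, e.g. an fstab entry along these lines (server and
path names taken from your example, the mount options just typical
defaults):

  NFS1:/data/client1  /data  nfs  rw,hard,intr  0 0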
> 
> I don't believe I can do that safely with a block device.  It would be a
> catastrophe of file system corruption as far as I know (and I don't know
> very far!).
> 
> My thought was not to do the mounts in the guests.  Rather, we would do
> the mount in the underlying VServer cluster, i.e., we mount the iSCSI
> device containing all the clients' data at /data.  We then do a bind
> mount from the underlying VServer host file system into the guests' file
> systems: mount --bind /data/client1 /vservers/john/data; mount
> --bind /data/client1 /vservers/graham/data.  Would this solve our
> problem? Are there any caveats? Is there a better approach? Any issues
> of security, performance, or data consistency? Thanks - John
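
A minimal sketch of what that host-side setup might look like,
assuming open-iscsi; the target name, portal address and /dev/sdb1
are placeholders, not anything from your actual setup:

  # on the host: log into the target and mount the filesystem once
  iscsiadm -m node -T iqn.2009-01.com.example:clientdata \
           -p 192.168.0.10 --login
  mount /dev/sdb1 /data

  # per guest: bind the client's subtree into the guest root
  mount --bind /data/client1 /vservers/john/data
  mount --bind /data/client1 /vservers/graham/data

If I recall correctly, util-vserver will also do the bind for you at
guest start if you put it into /etc/vservers/<guest>/fstab, along
the lines of

  /data/client1  /data  none  bind  0 0

(the mount point being relative to the guest root). The one thing to
keep in mind with a plain, non-cluster filesystem is that only one
host may have the iSCSI volume mounted at any given time.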
> -- 
> John A. Sullivan III
> Open Source Development Corporation
> +1 207-985-7880
> jsullivan@opensourcedevel.com
> 
> http://www.spiritualoutreach.com
> Making Christianity intelligible to secular society
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE