Subject: Re: [vserver] Guest on GlusterFS - /var/run/utmp: File too large
From: Gordan Bobic <gordan@bobich.net>
Date: Wed, 03 Oct 2012 10:53:08 +0100

On 10/03/2012 10:37 AM, Oliver Welter wrote:
> Hi Gordan,
>
> On 03.10.2012 11:13, Gordan Bobic wrote:
>> On 10/03/2012 09:54 AM, Oliver Welter wrote:
>>> Hi Herbert
>>>
>>> On 01.10.2012 17:39, Herbert Poetzl wrote:
>>>> On Mon, Oct 01, 2012 at 01:43:19PM +0200, Oliver Welter wrote:
>>>>> Hi,
>>>>> I am experimenting with vserver and glusterfs - I guess that this is a
>>>>> glusterfs issue but perhaps somebody here has an idea.
>>>>> I am creating my root filesystem from a read-only root and an overlay
>>>>> filesystem using aufs. This works fine with normal block devices. I
>>>>> now tried to replace the writable partition with a glusterfs mount
>>>>> point, which results in
>>>>> fakerunlevel: open("/var/run/utmp"): File too large
>>>> sounds like the file is too large :)
>>>>
>>>> check the file sizes on all involved filesystems and
>>>> compare them to the maximum allowed file size
>>> Well - the utmp file remaining on the disk after the abort is 384 bytes,
>>> and I have no problem creating larger files, so I guess there is some
>>> issue with locking or the like. As said, I assume that's not really
>>> related to vserver itself...
>>
>> It sounds like you have bumped into one of the many issues with
>> GlusterFS that arise when you try to use it as a rootfs. You may want
>> to have a read through the glusterfs-devel mailing list archives for
>> issues like this that I came across when I was using it for similar
>> things (e.g. Open Shared Root). The common claim among the
>> developers at the time was that "they couldn't reproduce it" - in
>> reality they just couldn't be bothered to put together a few VMs to
>> test the kind of setup required to run GLFS as rootfs. From what
>> I've seen of GLFS, not much has changed in the last few years
>> other than version number inflation and a RH badge.
>>
>> Do bear in mind that GLFS is extremely susceptible to split-braining
>> if you are hoping to do anything interesting like sharing rootfs-es.
>> This stems from its lack of any notion of quorum and node fencing.
>>
> Hm, not nice - but: I did some more tests, and having a single glusterfs
> as the root partition seems to work (well, the server starts and runs
> for 5 mins); I get the above errors only if I use glusterfs as part of
> aufs, which results in this kind of "sandwich-fs":
>
>      sda1                 sda2
>  glusterfs (rw)           (ro)
>          \                /
>               aufs
>                |
>     vserver root "partition"
>
> I don't plan any concurrent use. I am currently running the same setup
> using drbd to mirror the rw portion to a second box for failover, so the
> only thing that matters is that writes are committed "immediately" to the
> second node, which is the case according to the docs.
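
If I'm reading your layout right, that sandwich is assembled along these
lines (volume and path names below are purely illustrative, not your
actual ones):

    # glusterfs volume (backed by sda1) provides the writable branch
    mount -t glusterfs localhost:/vserver-rw /mnt/rw
    # sda2 holds the shared read-only root image
    mount -o ro /dev/sda2 /mnt/ro
    # aufs stacks the two branches into the guest root
    mount -t aufs -o br=/mnt/rw=rw:/mnt/ro=ro none /vservers/guest

Everything the guest writes then goes through the glusterfs (i.e. FUSE)
branch, which is where utmp ends up.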

FUSE is going to cause non-trivial performance issues if you use it for
the rootfs. It's workable, but the performance degradation is around an
order of magnitude.

You may find you are better off using lsyncd to carry out asynchronous
mirroring between two vserver hosts. Simple, performant, and with far
fewer odd edge cases that can cause weird problems.
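
As a rough sketch (assuming lsyncd 2.x; the host and path names here are
made up), keeping the writable branch mirrored to a standby box can be as
simple as:

    # watch /srv/vserver-rw with inotify and replicate changes to the
    # standby node over ssh+rsync as they happen
    lsyncd -rsyncssh /srv/vserver-rw standby-host /srv/vserver-rw

It is asynchronous, so unlike drbd the standby can lag by a few seconds
after a burst of writes, but there is no FUSE layer in the I/O path on
the live node.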

Gordan