Subject: Re: [vserver] Possible Hashify Corruption
From: Gordan Bobic <gordan@bobich.net>
Date: Mon, 18 Oct 2010 08:29:45 +0100

Herbert Poetzl wrote:
>>>> Can anybody hazard a guess as to what happened here? I'm prepared to
>>>> consider any theory at the moment, no matter how far-fetched.
> 
>>>> I'm running 2.6.30.10-vs2.3.0.36.14-pre8. The file system is ext4
>>>> without journal and in data=writeback mode.
> 
>>> Let's go with your first guess, file corruption, and speculate a bit...
> 
>>> We know that ext4 gets its speed from the high degree of meta-data and
>>> data caching that it uses.
>>> We know that if ext4 is not cleanly shut down, your file system is
>>> burnt toast.
>>> On any type of system.
> 
>> That is, in my experience, superstition. I have a number of laptops with
>> SSDs running the exact same setup (I don't want the write overheads of
>> journalling there either), and none of them has ever had any file
>> corruption issues. Sure, sometimes after yanking the battery the files
>> that were open for writing get broken and fsck puts their fragments in
>> lost+found, but that's no worse than ext2 was before it.
> 
> putting superstition aside, can you recreate the issue?
> i.e. is there a script or procedure which reliably
> produces the 'corruption'?

Not yet - I haven't had a chance to rebuild the VMs. I'll rebuild them
from scratch and re-hashify to see if it happens again.

And superstition is pretty much where I'm at with this at the moment...
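
In case it helps with reproducing it, the procedure I intend to repeat is
roughly the following (guest names are illustrative, and the hashify
invocation is from memory, so treat this as a sketch rather than a script):

    # rebuild both guests from scratch on the same ext4 (no journal,
    # data=writeback) root filesystem, run them for a few days with
    # regular guest reboots, then re-run hashify on each:
    vserver guest1 hashify      # util-vserver's wrapper around vhashify
    vserver guest2 hashify

    # clean shutdown of the guests and then of the host, as before:
    vserver guest1 stop
    vserver guest2 stop
    shutdown -h now

    # after powering back up, check whether the guest binaries are
    # mangled again (see further down for how I plan to check them)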

>>> Now, can we relate those behaviors to a single file system name space?
> 
>>> Or, first, was it limited to a single file system name space?
> 
>> Yes - there is only one partition, only one file system (the root one).
> 
> it is not a good idea to put Linux-VServer guests on
> the same filesystem as the host (system). having at
> least one separate partition (shared between all the
> guests) is strongly advised.

Can you point me at the documentation that explains why this is a good 
idea specifically for vservers?

>>> Was the guest you were running and changing file content on the __only__
>>> one that may have had changed files?
> 
>> Both guests are toast in exactly the same way. The host's binaries are 
>> fine and the host boots OK. The guests were running fine for days, with 
>> many guest reboots in the meantime. Things appear to have gone wrong 
>> when the host was shut down. That _might_ imply that things had been
>> running fine from caches pre-filled some time earlier, but it seems really
>> strange that ALL binaries would be hosed, even the ones that were never 
>> touched. The only thing that would have touched them all that I can 
>> think of is hashify.
> 
> what do those 'corrupted' binaries contain?

Good question. They identify as straight "binary data" using "file". I 
haven't had a chance to analyze them with a hex editor yet. But they are 
definitely not valid executables.
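
When I do get to it, this is roughly what I plan to look at first (paths
illustrative):

    # a valid ELF binary starts with the bytes 7f 45 4c 46 ("\x7fELF")
    file /vservers/guest1/bin/ls
    hexdump -C /vservers/guest1/bin/ls | head

    # both guests are broken the same way, so also check whether the two
    # copies are byte-for-byte identical garbage:
    cmp /vservers/guest1/bin/ls /vservers/guest2/bin/ls && echo identical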

>>> That one is a slim chance; the host context is writing to /var/log/* if
>>> nothing else - did any of those get corrupted?
> 
>> My /var/log is on tmpfs in both the host and the guests (I'm on a SSD 
>> and don't need the logs so I don't want them wasting my write cycles).
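
(For completeness, that is just the usual tmpfs entry in fstab on both the
host and the guests, something like:

    tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755  0  0

so nothing log-related should be hitting the disk at all.)
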
> 
>>> Were there other running guests on the system, with changed /
>>> changing files that did not get corrupted?
> 
>> There are only two guests on the system, and they were both running.
> 
>>> Did you shut down just this one guest or the entire machine?
> 
>> First the guests individually, then the host machine. Clean shutdowns.
> 
>>> Are you using tagging on this file system?
> 
>> Tagging? What do you mean?
> 
> tagging as in the 'tag' mount option (which is
> intentionally really hard to set on a single root
> partition :)

OK, I think you just answered my previous question there. :)
I am not planning to use disk quotas/limits, so I never bothered with this.
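
If I ever do need per-guest disk limits, my understanding is the guests
would then have to live on their own partition mounted with 'tag', i.e.
something along these lines in fstab (illustrative):

    /dev/sdb1  /vservers  ext4  defaults,tag  0  2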

Gordan