Subject: Re: [vserver] Hashify and exclude
From: Ed W <lists@wildgooses.com>
Date: Tue, 28 Oct 2008 09:14:20 +0000
Michael S. Zick wrote:
> On Mon October 27 2008, Ed W wrote:
>   
>>>> What is the argument against unifying everything? 
>>>>     
>>>>         
>>> on recent kernels (i.e. in the presence of
>>> CoW Link Breaking), the only case where a
>>> unified file might not work is when some
>>> application checks the file properties and
>>> requires them to have a link count of 1
>>>  
>>>       
>> Subtle extra one, but as I understand it, the COW stuff breaks ALL
>> hardlinks on write, so if you have a bunch of hard links within the
>> vserver for some internal process, then additionally hardlinking those
>> *across* vservers will cause all the hardlinks (even within the vserver)
>> to be broken if the file is ever altered.  This may or may not be a
>> problem in general (seems like a corner case for most people?)
>>
>>     
>
> It does not work that way here (2.6.27.4+vs2.3)
>
> I am using CoW to detect changes to files in replicated development trees -
> Given a file (inode) with 8 (up to 4000+ in my experience) different appearances
> in the directory tree - 
>
> Changing it creates a new inode, for a total of two: one with a link count of 7 and 
> the changed one with a link count of 1.
>   


Check again. 

For example, on my distro (Gentoo), large numbers of my timezone files 
are just hardlinks of each other.  So imagine a file TZ which is 
hardlinked 8 times within a single install and then cross-hardlinked 
across 20 vservers.  When I change that one file in a single vserver I 
expect it to become a hardlink with a count of 8 (again) and the other 
vservers to all still be hardlinked to each other (link count 19x8 = 152). 
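
(A quick way to pin down which behaviour a given kernel actually has is 
a probe along the following lines; the paths are hypothetical scratch 
files, and the CoW break will only trigger if the inode really carries 
the immutable+iunlink attributes, e.g. set with util-vserver's setattr, 
and the write is done from inside a guest:)

#!/usr/bin/env python3
# Probe: build an 8-way hardlink group, write through one name, then
# report st_nlink for the written name and for a sibling.  The scratch
# paths below are made up for the test.
import os

base = "/tmp/cow-probe"
os.makedirs(base, exist_ok=True)
names = [os.path.join(base, "tz%d" % i) for i in range(8)]

for n in names:                       # start clean so the probe is rerunnable
    if os.path.lexists(n):
        os.remove(n)

with open(names[0], "w") as f:
    f.write("original\n")
for n in names[1:]:
    os.link(names[0], n)
print("before write :", os.stat(names[0]).st_nlink)   # 8

# (mark the group unified here, e.g. setattr --iunlink, then re-run
#  the write below from inside the guest)
with open(names[0], "a") as f:
    f.write("changed\n")

print("written name :", os.stat(names[0]).st_nlink)   # 1 => only this link broke
print("sibling name :", os.stat(names[1]).st_nlink)   # 7 => the rest stayed together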

What you are describing is that ALL the hardlinks get broken for that 
one file: the changed copy ends up alone with a link count of 1.

Hypothetically, in my example I would update one timezone file and that 
would leave all the rest of them in my vserver pointing at the old 
value, which is still shared across the other vservers.  This might be 
a surprise if I were expecting all my local hardlinks to allow me to 
update one file and see several files change...

I have tentatively looked at the hashify code with a view to making it 
ignore files with a link count >1 and +iunlink/+immutable set.  Seems 
workable?  Thoughts?
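
For concreteness, the skip test I have in mind amounts to something 
like the following (a Python sketch of the logic only, not the actual 
vhashify C code; FS_IOC_GETFLAGS and FS_IMMUTABLE_FL are the stock 
Linux attribute-ioctl values, while the iunlink bit is specific to the 
vserver patch, so I have left it as a placeholder):

import fcntl, os, stat, struct

FS_IOC_GETFLAGS = 0x80086601   # _IOR('f', 1, long) on 64-bit Linux; arch-dependent
FS_IMMUTABLE_FL = 0x00000010   # standard ext2/ext3 immutable bit
# The iunlink bit comes from the vserver patch; its value depends on
# your kernel headers, so look it up there before relying on it.

def attr_flags(path):
    # Read the ext2-style attribute flags, as lsattr does.
    with open(path, "rb") as f:
        buf = fcntl.ioctl(f.fileno(), FS_IOC_GETFLAGS, struct.pack("l", 0))
    return struct.unpack("l", buf)[0]

def hashify_should_skip(path):
    # Leave a file alone if it already has multiple links but is NOT
    # marked as a unified (+immutable/+iunlink) file: unifying such a
    # group would break the guest's own internal hardlinks.
    st = os.lstat(path)
    if not stat.S_ISREG(st.st_mode) or st.st_nlink <= 1:
        return False                  # ordinary unification candidate
    unified = bool(attr_flags(path) & FS_IMMUTABLE_FL)   # and iunlink, if known
    return not unified

Checking st_nlink alone would not be enough, since files unified on a 
previous hashify run also show a link count >1; that is what the 
attribute test is for.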

Ed W

