Subject: Re: [vserver] hashify and memory saving
From: Tor Rune Skoglund <trs@swi.no>
Date: Mon, 4 Jan 2016 01:51:17 +0100

On 01 Jan 2016 at 22:25, Herbert Poetzl wrote:
> On Fri, Jan 01, 2016 at 09:37:52PM +0100, Tor Rune Skoglund wrote:
>> Having been a happy linux-vserver user for more than 10 years
>> now, it was about time to test the hashify feature. The disk
>> savings are obvious, and easily measured, but I have been
>> trying a lot harder to measure any possible run-time memory
>> savings.
>> For the testing, I created a simple template LAMP guest, and
>> a lot of hashified guests cloned from that one. I am unable
>> to measure noticeably less memory usage when running multiple
>> hashified guests compared to non-hashified ones, using free and
>> /proc/meminfo's MemAvailable entry.
>> However, this could very well be due to shortcomings in my own
>> understanding of how this should work or what to look for.
>> What should I look for regarding possible memory savings?
>> Anyone with any pointers?
> You won't see any memory savings with dynamic memory allocations,
> and you won't get any benefit from read-write mappings either,
> but you should be able to see a reduction for read-only mappings,
> such as those created for static binaries, read-only mapped
> shared libraries, and read-only memory-mapped data files.

Thank you, Herbert. Although I have been reading a lot lately and
trying my best to get a grip on how this works, I am still a newbie in
this area, so please excuse me if I keep asking possibly stupid
questions... ;)

As far as I can tell, all code and libraries are built as PIC by
default on my setup. (Is this a requirement?)

Does your comment above then mean that all read-only mappings can be
shared across guests, regardless of the execute flag and the
MAP_SHARED/MAP_PRIVATE setting?
(In my test setup, based on grepping /proc/*/maps for "r--p" and
"r--s", there are very few shared read-only mappings ("r--s") compared
to private read-only ones ("r--p"). It seems like almost every binary
or .so has a considerable private read-only section, which would then
be part of the assumed memory savings.)

If not, what should I look for, e.g. using /proc/<pid>/maps, pmap, or
some other tool?
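
For concreteness, this is roughly how I have been summing things up so
far (just a quick sketch, run as root on the host so every guest's
processes are visible; it adds up virtual mapping sizes, i.e. what
could potentially be shared, not what is actually resident):

#!/usr/bin/env python
# Rough sketch: add up the mapped sizes in /proc/*/maps, grouped by the
# permission flags ("r--p", "r-xp", "r--s", ...).  Run it as root on
# the host so that every guest's processes are visible.  These are
# virtual sizes, i.e. what could potentially be shared, not what is
# actually resident.
import glob
from collections import defaultdict

totals = defaultdict(int)

for path in glob.glob('/proc/[0-9]*/maps'):
    try:
        with open(path) as maps:
            for line in maps:
                fields = line.split()
                start, end = (int(x, 16) for x in fields[0].split('-'))
                totals[fields[1]] += end - start
    except IOError:
        continue  # the process exited while we were reading

for perms in sorted(totals):
    print('%s %12d KiB' % (perms, totals[perms] // 1024))

(For what is actually shared, I guess the Shared_Clean and Pss fields
in /proc/<pid>/smaps would be the thing to compare, but the script
above matches what my grep was counting.)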

How does KSM (https://en.wikipedia.org/wiki/Kernel_same-page_merging)
interact with linux-vserver, if at all?
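As far as I understand, mainline KSM only merges pages that an
application has explicitly registered with madvise(MADV_MERGEABLE), so
I would not expect it to do anything for ordinary guest processes by
itself, but please correct me if that is wrong. This is the quick
check I did of the KSM counters, assuming CONFIG_KSM is enabled in the
vserver kernel:

#!/usr/bin/env python
# Quick look at the KSM counters (assumes CONFIG_KSM=y; the files below
# have been in mainline since 2.6.32).  If "run" is 0, KSM is not
# scanning at all; pages_sharing vs. pages_shared gives a rough idea of
# how much merging is actually going on.
KSM = '/sys/kernel/mm/ksm/'

for name in ('run', 'pages_shared', 'pages_sharing', 'pages_unshared'):
    try:
        with open(KSM + name) as f:
            print('%-15s %s' % (name, f.read().strip()))
    except IOError:
        print('%-15s (not available; is KSM built in?)' % name)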

Lastly, I am sorry if I am jumping to the wrong conclusions somewhere
here... Please feel free to brutally educate me. :)

BR,
Tor Rune Skoglund, trs@swi.no
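
PS: Regarding the test you suggest below (one code-heavy,
allocation-light process per guest, a few thousand guests in parallel,
once with and once without unification), would something along these
lines be a reasonable stand-in for the per-guest process? It is only a
sketch; I realize the Python interpreter itself does plenty of dynamic
allocation, so a small (near-)static C binary would give a cleaner
signal, but the interpreter overhead should at least be identical in
both runs. The library names are just examples from my guests.

#!/usr/bin/env python
# Per-guest "holder" sketch: load a few large shared libraries and then
# sleep, so that their read-only/executable mappings stay resident
# while memory usage is compared across many guests, with and without
# unification.  The library names below are only examples.
import ctypes
import signal

for lib in ('libcrypto.so.1.0.0', 'libstdc++.so.6', 'libm.so.6'):
    try:
        ctypes.CDLL(lib)
    except OSError as e:
        print('could not load %s: %s' % (lib, e))

signal.pause()  # keep the process (and its mappings) alive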

> If I were to devise a test to show the advantages, I would take a
> binary which doesn't do many dynamic allocations but uses a lot
> of code and/or libraries, and run it as the only process in each
> guest with a few thousand guests in parallel, once with and once
> without unification in place.
> Best,
> Herbert
>
>> This is Gentoo, util-vserver 0.30.216_pre3120, kernel Linux amd64
>> 3.18.7-vs2.3.7.4.
>> BR, Tor Rune Skoglund
>> trs@swi.no
>>