Subject: Re: [vserver] hard scheduler, was Re: [vserver] Disk limits problem?
From: ADNET Ghislain <gadnet@aqueos.com>
Date: Fri, 11 Sep 2009 22:41:34 +0200
Hi Corey,
 
> correct, cgroups cpu scheduling guarantees a lower-bound (e.g. this guest
> will always be guaranteed a minimum of 10% of the cpu), much like hard
> limit + idle time, whereas a hard limit alone creates an upper-bound (you
> are only ever allowed a maximum of 10% of the cpu).
>
>   
 
But with the vserver scheduler you can also control how the idle time is 
distributed; with cgroups you do not control that, you only control the 
global load balancing. It just has to match your requirements.
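
(To illustrate the cgroup side, here is a minimal sketch that just writes a 
relative weight into cpu.shares; the mount point /sys/fs/cgroup/cpu and the 
group name guest1 are assumptions of mine, not anything from this thread. 
With 512 shares out of, say, 5120 total, the group is guaranteed roughly 10% 
of the cpu when everything is busy, but it can still soak up idle time, so 
it is only a lower bound, never a cap.)

  /* Minimal sketch: set a cgroup v1 cpu weight (a lower bound, not a cap).
   * Assumes the "cpu" controller is mounted at /sys/fs/cgroup/cpu and that
   * a group "guest1" already exists (both are assumptions, not facts from
   * this thread). */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      FILE *f = fopen("/sys/fs/cgroup/cpu/guest1/cpu.shares", "w");
      if (!f) {
          perror("fopen cpu.shares");
          return EXIT_FAILURE;
      }
      /* 512 against the default of 1024 per sibling group: the guaranteed
       * share is relative to the total shares of all runnable groups. */
      fprintf(f, "512\n");
      fclose(f);
      return EXIT_SUCCESS;
  }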
 
>   
>
> ah, i guess as a detailed engineer, one who reads the fine print, i always
> try to know what my minimum quality of service is (or at least should be)
> and i'm thankful when it's above that and not disappointed when it's at
> that.
>
>   
 
You are a rare thing in customer land, as most just want the most, and if 
they get more than they pay for, they never let the bonus go without a 
fight, believe me.
 

> my set-up is non-commercial/personal, so i've never encountered those
> "customer" problems.
>   
 
Yes, your use case will make the difference here: some need an upper bound, 
some a lower bound, some both.
The beauty is in matching the tools to the needs.
 
> yes, a hack to implement hard limits with cgroups is to run cpuburn in
> every guest so that there is no idleness to be shared (but technically
> cpuburn is a little overkill because it's not just meant to busy the cpu,
> but utilize it in the most heat-producing way, exercising functions of the
> cpu known to generate the most heat; the difference between someone reading
> a book and someone running up and down a flight of stairs: both consume their time,
> but one generates a lot more heat).
>   
 
See the heat as an added bonus for the winter to come. This was more a joke 
than practical advice; perhaps we could start an open project, cpubuzz, to 
take all the cpu in a way that does not generate too much heat :)
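
(Half seriously, here is what a minimal cpubuzz could look like: one spinner 
per core, each just looping on the x86 PAUSE hint, so the task stays runnable 
and eats the idle time while exercising the pipeline far less than cpuburn's 
heat-maximising loops. The name and the code are only a sketch of the joke 
above, not an existing project, and it is x86-only.)

  /* Minimal "cpubuzz" sketch: consume one core's idle time as coolly as
   * possible. Run one instance per core (e.g. pinned with taskset).
   * x86-only: PAUSE is a spin-wait hint that throttles the pipeline. */
  int main(void)
  {
      for (;;)
          __asm__ volatile("pause");
  }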
 
> i might be misinterpreting your analogy, but i see vserver's token bucket
> scheduler more like token ring and the kernel scheduler more like ethernet
> (and the cgroup scheduler somewhere in between the two in guaranteed
> behavior) and the kernel scheduler has won out due to being "good enough"
> for the average case (just like ethernet degraded horribly under high
> loads, but had faster speeds & higher throughput than token ring under
> "normal" use, so it won out).
>   
I am bad at analogies, it seems, so everyone should make up their own mind; 
I am not fluent enough in the kernel internals to take the analogy further, 
and I might say things that are more what I think they are than what they 
really are (only kernel/vserver gurus can speak here, I am only a basic end 
user) :)  As for the fight, the kernel cgroups did not win: there was no 
fight at all, it seems vserver was just plain ignored. But then again I do 
not know a damn thing about the relations between vserver and mainline, if 
any, so perhaps there was a fight.

-- 
Best regards,
Ghislain



["application/x-pkcs7-signature" not shown]