Subject: Re: [vserver] util-vserver & cpuset bug ?
From: Ed W <lists@wildgooses.com>
Date: Thu, 16 Jul 2009 13:09:34 +0100

> IMHO, it is much more convenient to have them running
> on private IPs, and to S/DNAT them to 'real' (i.e.
> public) IPs whenever needed 
>
> (this is a lot more flexible than using the public IP
> directly)
>   

Actually, this could benefit from being more prominent in the 
documentation. I only learned it after I had started down the road of 
making everything use public IPs, and I haven't yet gone back to 
private IPs, but at least on the surface it looks like a MUCH more 
sensible solution, and as a side benefit it forces your firewalling to 
be more correct.

You obviously inherit all the problems of NAT firewalls, but for most 
services this is irrelevant or easily worked around.  I would perhaps 
worry a little about a busy webserver/mailserver chewing through a lot 
of memory/CPU tracking all the connections, but I haven't even tried 
it, so perhaps that fear is unfounded?
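
For the archives, the sort of thing I understand is being suggested 
looks roughly like this (addresses and interface name entirely made up, 
and untested by me):

    # 203.0.113.10 = the host's public IP, 192.168.1.11 = a guest's private IP
    # inbound: forward port 80 on the public IP to the web guest
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
             -j DNAT --to-destination 192.168.1.11:80
    # outbound: source-NAT the guests to the public IP
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 \
             -j SNAT --to-source 203.0.113.10

i.e. one PREROUTING rule per published service, plus a single 
POSTROUTING rule for outgoing traffic - the connection tracking that 
implies is also where my worry above comes from.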


>> At least for my data center machines I obviously need 
>> to talk to real IPs at some point and obviously one IP 
>> is enough if I use an array of ports, but remembering how 
>> they all map is a pain.                    
>>     
>
> not necessarily for sshd. as you are depicting a scenario
> where you have lots of guests but only a single (or a few)
> public IPs available, I presume they are usually not
> running sshd, and thus, they are unreachable from the 
> outside, except via the host ... so in this scenario, it
> would be logical to access the guests via ssh _from_ the
> host, which doesn't need any sshd bound to a public IP
>   

Hmm, yes, I can see this would work well, although...


>> I guess it would be possible to setup a gateway (vserver?) 
>> which you ssh into and then from there into the real server. 
>>     
>
> the host could be that gateway ...
>   

...this strikes me as meaning you then have lots of people logging onto 
the host, and for better security we want to minimise logins to the 
host - hence the suggestion to use a vserver as the gateway?  (Obviously 
vserver enter implies we are already using the host a fair bit, but I 
was thinking ahead...)
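
To make sure I've understood the two options, I'm picturing something 
like this (names and addresses made up):

    # option 1: from the host itself - no public sshd needed anywhere
    vserver web1 enter               # root shell directly inside the guest
    ssh admin@192.168.1.11           # or ssh to the guest's private IP,
                                     # if the guest runs sshd on it

    # option 2: via a dedicated gateway guest - keeps people off the host
    ssh admin@203.0.113.10           # sshd only in the gateway guest
                                     # (its own public IP, or a DNAT'd port)
    ssh admin@192.168.1.11           # second hop to the target guest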

However, this still means two ssh logins to get any work done, and 
whilst that can be scripted to some extent, it adds extra work for any 
automation (again, vserver enter is effectively a double login - I'm 
thinking here more about the general procedure for getting remote 
access to a bunch of machines behind a gateway).
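
For what it's worth, the double hop can at least be hidden on the 
client side with something like this in ~/.ssh/config (names and 
addresses made up; this particular trick needs netcat installed on the 
gateway):

    Host gateway
        HostName 203.0.113.10        # the gateway guest's public address
        User admin

    Host web1
        HostName 192.168.1.11        # private IP, reachable from the gateway
        User admin
        ProxyCommand ssh -q gateway nc %h %p

After that a plain "ssh web1" does both hops in one go, although 
underneath it is of course still two ssh connections.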


Thanks for your explanations - much appreciated

Ed W


