Fri, 21 Jan 2011 11:32:04 -0600

On Fri January 21 2011, Furgerot Julien wrote:
> Sorry for these obscure questions out of any context.
> It's proprietary software developed by my firm.
>
Hmm... Guess that means it is unlikely that I designed and/or wrote it. ;-)

> This software can
> manage multiple VoIP calls simultaneously, one per multicast address.
>
Still don't see your problem; why not 256 vserver contexts, one for each
of 256 individual multicast addresses?

> That is why we need to bind to several, dynamically assigned multicast IP
> addresses, not only one per guest.
>
So go ahead and assign it; only one of the 256 vserver contexts will handle
the processing - nearly identical to having only one thread handle the
processing.

> With your help, we found a workaround for address assignment by
> creating one interface per IP.
>
Great, now do that another 255 times.

Mike

> But now, we would like to force
> "any addr" (0.0.0.0) binding to a chosen IP/interface, because a
> component in our software that we don't manage (Java JMX) binds to 0.0.0.0.
> In my experiments with vservers, only the first started vserver can
> bind to any addr, so the others can't bind to the same port.
>
> Maybe I have missed something in the vserver configuration?
>
> Sincerely,
> Julien Furgerot
>
> On Fri, Jan 21, 2011 at 5:47 PM, Michael S. Zick <mszick@morethan.org> wrote:
> > On Fri January 21 2011, Furgerot Julien wrote:
> >> Thank you for this reply.
> >>
> >> However, recall that my software is a VoIP application which could
> >> use different (a range of) multicast addresses during its lifecycle.
> >> These addresses are allocated on demand by another piece of software.
> >> Thus, each instance is configured to be potentially linked to one of
> >> these addresses. Furthermore, one can have many simultaneous VoIP
> >> communications where each one uses one given multicast address. Unless
> >> there is a solution to this multiple multicast address binding, I
> >> can't see how this could be handled.
> >>
> >> What do you think?
> >>
> >
> > Still cannot see your problem in your description above.
> > Does a single VoIP call use multiple addresses during its lifetime?
> >
> > I would think not. Once the call is put up, it will use whatever
> > address it was assigned until the call is torn down.
> >
> > Or, at least that was the way they used to work.
> >
> > Do you mean by "my software" something you invented yourself?
> > Or do you mean "the software I am using"? What software?
> >
> > Mike
> >
> >> Julien
> >>
> >> On Fri, Jan 21, 2011 at 3:02 PM, Michael S. Zick <mszick@morethan.org> wrote:
> >> > On Fri January 21 2011, Furgerot Julien wrote:
> >> >> On Fri, Jan 14, 2011 at 5:47 PM, Herbert Poetzl <herbert@13thfloor.at> wrote:
> >> >> > services binding to 0.0.0.0 inside a Linux-VServer guest
> >> >> > will be automagically limited to the assigned IP addresses,
> >> >> > which in turn means, if you assign different IP addresses
> >> >> > to different guests, they will live happily side by side
> >> >> > even if the services inside the guests bind to 0.0.0.0
> >> >>
> >> >> You are right, I have tested when the VM is bound to one IP address
> >> >> and it works fine!
> >> >>
> >> >> However, in my configuration each VServer is bound to many IP
> >> >> addresses in order to be able to receive/send from/to many multicast
> >> >> addresses that are allocated on demand.
> >> >> Thus, I was wondering whether
> >> >> there is any hint for restricting sockets bound to 0.0.0.0 to
> >> >> only one of these associated IP addresses? Is there any patch that
> >> >> can overcome this problem?
> >> >>
> >> >
> >> > Why not just run a vserver per multicast address?
> >> >
> >> > Your whatever-it-is application is probably running an instance
> >> > per multicast address anyway (perhaps as a thread).
> >> >
> >> > If you "hashify" the on-disk files, you'll only have a single
> >> > copy of those files (on-disk and in-memory) -
> >> > so even running a few hundred context-per-address vservers would
> >> > probably not be all that resource intensive.
> >> >
> >> > Mike
> >> >
> >> >> Again, thank you for all,
> >> >>
> >> >> Julien
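
Editor's note on the binding question discussed above: the collision Julien
describes comes from several guests asking for the wildcard address on the
same port. A minimal sketch of the usual alternative for a multicast
receiver - bind to the group address and join the group on one specific
local address - is below. The group, local address, and port are
placeholders, and this is generic Linux socket code, not anything
Linux-VServer specific.

/* Sketch: receive one multicast stream without binding to 0.0.0.0.
 * 239.1.2.3, 192.0.2.10, and port 5004 are placeholder values. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *group = "239.1.2.3";   /* placeholder multicast group   */
    const char *local = "192.0.2.10";  /* one address assigned to guest */
    const int port = 5004;             /* placeholder port              */

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    /* Bind to the group address (not 0.0.0.0), so only traffic sent to
     * this group is delivered and the wildcard is never claimed. */
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = inet_addr(group);
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("bind"); return 1;
    }

    /* Join the group via the chosen local address/interface. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr(group);
    mreq.imr_interface.s_addr = inet_addr(local);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP"); return 1;
    }

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n >= 0)
        printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}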
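
For the Java JMX component that cannot be modified and insists on binding
to 0.0.0.0, one generic workaround (independent of Linux-VServer) is an
LD_PRELOAD interposer that rewrites wildcard binds to a chosen address.
A rough sketch follows; BIND_ADDR is an environment variable invented for
this example, and only IPv4 sockets are handled, although a Java runtime
may well open IPv6 sockets, which this does not cover.

/* bind_override.c - rewrite bind() on 0.0.0.0 to the address in BIND_ADDR.
 * Build: gcc -shared -fPIC -o bind_override.so bind_override.c -ldl
 * Run:   BIND_ADDR=192.0.2.10 LD_PRELOAD=./bind_override.so java ... */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <dlfcn.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

int bind(int fd, const struct sockaddr *addr, socklen_t len)
{
    static int (*real_bind)(int, const struct sockaddr *, socklen_t);
    if (!real_bind)
        real_bind = dlsym(RTLD_NEXT, "bind");

    const char *override = getenv("BIND_ADDR");
    if (override && addr && addr->sa_family == AF_INET
        && len >= sizeof(struct sockaddr_in)) {
        struct sockaddr_in sa;
        memcpy(&sa, addr, sizeof(sa));
        if (sa.sin_addr.s_addr == htonl(INADDR_ANY)) {
            /* Replace the wildcard with the configured local address. */
            sa.sin_addr.s_addr = inet_addr(override);
            return real_bind(fd, (struct sockaddr *)&sa, sizeof(sa));
        }
    }
    return real_bind(fd, addr, len);
}

How this interacts with the in-kernel address restriction that
Linux-VServer already applies to guests is untested here; treat it as a
starting point for experiments, not a recommendation.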