On 02/21/2010 05:44 PM, Robert Collins wrote:
> On Sun, 2010-02-21 at 22:27 +0100, Henrik Nordström wrote:
>> lör 2010-02-20 klockan 18:25 -0700 skrev Alex Rousskov:
>>
>>> The reasons you mention seem like a good justification for this
>>> option's official existence. I do not quite get the "fork bomb" analogy because
>>> we are not creating more than a configured number of concurrent forks,
>>> are we? We may create processes at a high rate but there is nothing
>>> exploding here, is there?
>> With our large in-memory cache index, even two concurrent forks are
>> kind of an explosion on a large server, since each fork() duplicates
>> the parent's address space, at least in VM accounting terms. Consider,
>> for example, the not-unrealistic case of an 8 GB cache index; I
>> actually have some clients with indexes that size.
>
> I have an idea about this.
>
> Consider a 'spawn_helper'.
>
> The spawn helper would be started up early, before index parsing, never
> killed, and never started again. It would have a footprint of, oh,
> several hundred KB at most.
>
> The command protocol for it would be pretty similar to the SHM disk IO
> helper's, but for processes. Something like:
>
>   squid->helper: spawn stderrfd argv (escaped/encoded to be line- and
>                  NUL-safe)
>   helper->squid: pid, stdinfd, stdoutfd
>
> This would permit several interesting things:
> - starting helpers would no longer incur massive VM overhead
> - we would not need to worry about vfork, at least for a while
> - starting helpers could be truly asynchronous from squid's core
>   processing (at the moment everything gets synchronised)
Sounds like a very good idea to me.
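
To make the protocol concrete, here is a rough sketch of what the helper
side might look like, assuming a UNIX stream socketpair sitting on the
helper's fds 0/1 and SCM_RIGHTS to hand the pipe ends back. The wire
format and all names here are illustrative only, not actual Squid code,
and Robert's stderrfd is omitted for brevity:

/* spawn_helper.c -- hypothetical sketch, not actual Squid code.
 * Reads one "spawn" request per line on fd 0 (a UNIX stream socket
 * shared with squid), forks/execs the requested program, and answers
 * with the child's pid plus the squid-side pipe ends via SCM_RIGHTS. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send "pid=N\n" with the two descriptors attached as ancillary data. */
static void reply(int sock, pid_t pid, int stdinfd, int stdoutfd)
{
    char line[64];
    int n = snprintf(line, sizeof(line), "pid=%ld\n", (long)pid);
    struct iovec iov = { .iov_base = line, .iov_len = (size_t)n };
    union { char buf[CMSG_SPACE(2 * sizeof(int))]; struct cmsghdr align; } ctl;
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctl.buf;
    msg.msg_controllen = sizeof(ctl.buf);
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;
    cm->cmsg_len = CMSG_LEN(2 * sizeof(int));
    int fds[2] = { stdinfd, stdoutfd };
    memcpy(CMSG_DATA(cm), fds, sizeof(fds));
    sendmsg(sock, &msg, 0);
}

int main(void)
{
    char req[4096];
    signal(SIGCHLD, SIG_IGN);  /* auto-reap; real code would track exits */
    /* Request: "spawn <prog> <arg>...". A real protocol would escape
     * argv so it is line- and NUL-safe, as proposed above. */
    while (fgets(req, sizeof(req), stdin)) {
        if (strncmp(req, "spawn ", 6) != 0)
            continue;
        int to_child[2], from_child[2];
        if (pipe(to_child) < 0 || pipe(from_child) < 0)
            continue;               /* real code: report the error */
        pid_t pid = fork();         /* cheap: our footprint stays tiny */
        if (pid == 0) {
            dup2(to_child[0], 0);   /* child's stdin: pipe read end  */
            dup2(from_child[1], 1); /* child's stdout: pipe write end */
            close(to_child[0]); close(to_child[1]);
            close(from_child[0]); close(from_child[1]);
            char *argv[64]; int argc = 0;
            for (char *p = strtok(req + 6, " \n"); p && argc < 63;
                 p = strtok(NULL, " \n"))
                argv[argc++] = p;
            argv[argc] = NULL;
            execvp(argv[0], argv);
            _exit(127);
        }
        close(to_child[0]); close(from_child[1]);
        reply(1, pid, to_child[1], from_child[0]); /* fd 1: same socket */
        close(to_child[1]); close(from_child[0]);  /* squid holds copies */
    }
    return 0;
}

Because the helper execs before squid loads its index, every later
fork() copies only this tiny image, which is the whole point of the
exercise.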
I have a feeling this "helperd" daemon should become one of the
processes in the SMP architecture. It could be forked together with the
squidN processes right after squid.conf is parsed, restarted by the same
waitchild code, and use the same IPC mechanisms as the other processes
to communicate (including descriptor passing).
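
The descriptor-passing half on the squid side would be the usual
recvmsg()/SCM_RIGHTS dance; roughly, and again with hypothetical names
and minimal error handling:

/* Hypothetical squid-side counterpart: ask the spawn helper for a new
 * helper process and collect the pid plus the two descriptors it
 * attached via SCM_RIGHTS. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Returns the child's pid and fills in the two fds, or -1 on error. */
long spawn_via_helper(int sock, const char *request_line,
                      int *stdinfd, int *stdoutfd)
{
    /* Request: one line, e.g. "spawn /usr/lib/squid/some_helper\n". */
    write(sock, request_line, strlen(request_line));

    char line[64];
    struct iovec iov = { .iov_base = line, .iov_len = sizeof(line) - 1 };
    union { char buf[CMSG_SPACE(2 * sizeof(int))]; struct cmsghdr align; } ctl;
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctl.buf;
    msg.msg_controllen = sizeof(ctl.buf);

    ssize_t n = recvmsg(sock, &msg, 0);
    if (n <= 0)
        return -1;
    line[n] = '\0';

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (!cm || cm->cmsg_level != SOL_SOCKET || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fds[2];
    memcpy(fds, CMSG_DATA(cm), sizeof(fds));
    *stdinfd = fds[0];   /* squid writes the helper's stdin here */
    *stdoutfd = fds[1];  /* squid reads the helper's stdout here */

    long pid = -1;
    sscanf(line, "pid=%ld", &pid);
    return pid;
}

The same sendmsg()/recvmsg() machinery could serve the squidN<->helperd
channels, which is what makes reusing the SMP IPC code attractive.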
Cheers,
Alex.