>
> There is a webserver/database setup behind the Squid that I am not
> root on, and their database has an overload problem. I provide several
> proxies in front of their webserver, and since they are not able to
> restrict the number of clients themselves, I thought to do that with
> Squid, but I get no feedback about the load on their side.
> To be clear, it is not a random denial-of-service setup: it is first
> come, first served for X clients, for Y seconds each. Every client
> beyond X will be asked to try again later.
> Any idea how to do that? I already tried iptables to count the
> sessions, but I don't know how to implement this...
>
> Thanks, b52
>
- Do you mean SQUID is intended to act as an accelerator for
this webserver?
One way to configure an accelerator is to define the webserver as a
cache_peer in squid.conf, and also to use:
never_direct allow all
When defining a 'cache_peer' you have the option:
max-conn
which limits the maximum number of connections SQUID is allowed to
open to the peer, in this case your webservice (the database-backed webserver).
Perhaps this could help, and it would be more friendly from the end
user's point of view. How much it limits the user-side load is not
immediately clear to me at the moment.
Observing cache.log and access.log would be the first things to
follow up on as well, I guess.
M.