Hello,
I have two problems with a Linux Squid machine (Squid 2.5.STABLE7, Red Hat
Enterprise Linux ES release 3 (Taroon Update 1)):
Problem 1: File descriptors.
I have reconfigured Squid to use 16384 file descriptors (it's a fairly
busy proxy; the highest peak I've seen so far was around 7000 descriptors,
and I expect that peak to grow), following the description in the Squid
book (edit limits.h, ulimit, /proc entry). This works fine when I start
squid as root. However, as the squid user I can't raise the limit above
1024. I've added the following lines to /etc/security/limits.conf:
squid hard nofile 16384
squid soft nofile 16384
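In case it matters: to check whether the new limit is picked up, I start a
fresh session as the squid user, roughly like this (the su invocation is
just what I happen to use, and I'm assuming pam_limits is applied to su
sessions):

    # hard and soft limits on open files, as seen by a fresh squid session
    su -s /bin/sh -c 'ulimit -Hn' squid
    su -s /bin/sh -c 'ulimit -Sn' squid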
But this doesn't seem to work. Do I need to restart the machine for the
limits.conf change to take effect? (That's something I'd rather not do,
since my customer has very strict rules about downtime.)
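For completeness, the root-side part of the recipe looks roughly like this
(16384 is the value I used throughout, and I'm assuming the /proc entry the
book refers to is fs/file-max):

    # system-wide ceiling on open files
    echo 16384 > /proc/sys/fs/file-max
    # per-process limit, set in the shell that then starts squid
    ulimit -HSn 16384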
Problem 2: Parent problems
The Squid proxy has a single peer, a Radware load balancer which
distributes its load across about 15 Finjan content scanners. In
cache.log I see, about 3 or 4 times per second, messages saying the
load balancer can't be reached ("TCP connection to XXX.XXX.XXX.XXX/8080
failed"), even though it is up and reachable the rest of the time. I
have no idea where to start looking for a cause. Could it be that the
network stack of the Linux machine needs some tweaking to allow a large
number of sessions to the same IP address (most of them in TIME_WAIT)?
How can I find out what's going wrong?
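In case it helps, this is roughly what I was planning to look at, though
I'm not sure which of these, if any, actually matter here (':8080' stands
in for the real load balancer address and port):

    # sessions to the load balancer, broken down by TCP state
    netstat -ant | grep ':8080' | awk '{print $6}' | sort | uniq -c

    # ephemeral port range available for outgoing connections
    cat /proc/sys/net/ipv4/ip_local_port_range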
Any ideas?
Joost