Hi,
We're running Squid 3.1.16 on Solaris on a SPARC box.
We're running it against an ICAP server and were testing how Squid handles the ICAP server going down. After we froze the ICAP server, Squid ran into serious problems.
Once the ICAP server was back up, Squid kept sending OPTIONS requests to it, but Squid itself became completely unresponsive. It would not accept any further requests, we could not use squidclient against it or run a reconfigure, and it did not respond to 'squid -k shutdown', so we had to kill it manually with 'kill -9'.
We then restarted the Squid instance and it started to go crazy: file descriptors hit the limit (4096; previously the count never went above 1k during long stability runs), and the logs filled with 'Queue Congestion' errors. We restarted it again and it seemed to behave better, but the number of open file descriptors is still very high (above 3k).
Has this been seen before? Should we try clearing the cache and restarting again? Is there any other maintenance we could do?
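One thing we're considering is marking the ICAP service as bypassable, so Squid serves traffic unmodified instead of blocking when the ICAP server is unreachable. A rough squid.conf sketch of what we have in mind (the service name and ICAP URL below are placeholders, and the syntax is what we understand from the Squid 3.1 docs, so please correct us if this is wrong):

```
# Hypothetical fragment -- service name and ICAP URL are placeholders.
# bypass=1 should make ICAP failures non-fatal: on error, Squid passes the
# transaction through unadapted rather than failing the client request.
icap_enable on
icap_service svc_req reqmod_precache bypass=1 icap://icap.example.com:1344/reqmod
adaptation_access svc_req allow all
```

Would a bypassable service also have avoided the hang we saw, or does that only cover per-transaction ICAP errors?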
Thanks and regards,
Justin
Received on Sun Nov 06 2011 - 08:16:13 MST
This archive was generated by hypermail 2.2.0 : Sun Nov 06 2011 - 12:00:02 MST