Hi
I am struggling with the following error:

comm_open: socket failure: (24) Too many open files
This happens after squid has been running for many hours. I have a Xeon
server with 12 cores, 64 GB of RAM and 8 x 1 TB disks. The first two are in
RAID-1, and the remaining six are used as aufs caches.
The system is running 64-bit Ubuntu 12.04 and squid 3.3.6 compiled from
source.
I am running a transparent proxy fed by two Cisco 7600 routers using wccp2.
The purpose is to proxy international bandwidth (3 x 155 Mbps links).
To handle the load I have 6 workers, each allocated its own physical disk
(noatime).
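For reference, the worker/disk layout in squid.conf looks roughly like this
(the paths and sizes below are illustrative, not my exact values):

workers 6
# one aufs cache_dir per worker, selected by SMP process number
if ${process_number} = 1
cache_dir aufs /cache1 800000 16 256
endif
if ${process_number} = 2
cache_dir aufs /cache2 800000 16 256
endif
# ... and so on up to worker 6 / /cache6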
I have set "ulimit -Sn 16384" and "ulimit -Hn 16384" by adding the following
to /etc/security/limits.conf:
# - Increase file descriptor limits for Squid
* soft nofile 16384
* hard nofile 16384
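As I understand it, limits.conf is only read by pam_limits.so when a PAM
session is opened, so a line like the one below has to be present in the
relevant /etc/pam.d service file (e.g. /etc/pam.d/su) for these values to
take effect:

session required pam_limits.so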
Squid is set to run as user "squid". If I log in as root and then "su
squid", the ulimits are set correctly. For root itself, however, the ulimits
keep reverting to 1024.
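To see what a running worker actually gets (rather than what a login shell
reports), I have been checking /proc along these lines (assuming the process
is simply named "squid" here):

# show the open-file limits applied to the running process
grep "Max open files" /proc/$(pidof -s squid)/limits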
squidclient mgr:info gives:
Maximum number of file descriptors: 98304
Largest file desc currently in use: 18824
Number of file desc currently in use: 1974
Each worker should have 16K file descriptors, hence the 98K total.
I have seen "Largest file desc currently in use" climb to 26K after a while.
max_filedescriptors is not set in squid.conf.
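If it would help, I could pin the limit explicitly per worker rather than
relying on the ulimit alone, along the lines of:

# squid.conf - request 16384 descriptors for each worker explicitly
max_filedescriptors 16384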
In spite of the above settings, I eventually get the "Too many open files"
messages in the cache.log, and performance deteriorates badly until the load
is reduced.
I have now set /etc/security/limits.conf as follows:
# - Increase file descriptor limits for Squid
* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536
Now both the root and squid users have ulimits of 65536.
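For what it is worth, this is how I am now verifying it (assuming the squid
user has a usable shell for su to run the command):

# limits seen by a root shell
ulimit -Sn; ulimit -Hn
# limits seen after switching to the squid user
su squid -c 'ulimit -Sn; ulimit -Hn'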
Is there anything else I could be doing to prevent this error?
Peter