Got it resolved!
cat /proc/sys/fs/file-max showed that I could go as high as 3,138,830 FDs.
I added --with-maxfd=128000 to the compile options, recompiled, and reinstalled.
I also changed the line in my /etc/init.d/squid script to ulimit -HSn 128000 and restarted.
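For anyone following along, here is a rough sketch of those two changes
(the source path and the remaining configure options are placeholders
for your own build; note that newer Squid 3.x spells the option
--with-filedescriptors):

  # Rebuild Squid with a larger compiled-in FD ceiling
  cd /path/to/squid-3.1.1            # wherever your source tree lives
  ./configure --with-maxfd=128000    # plus your usual configure options
  make && make install

  # In /etc/init.d/squid, before the squid binary is started:
  ulimit -HSn 128000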
I thought I had tried all this before but evidently not.
If it almost held the load at 32,768, then at 128,000 I should have enough headroom to keep us safe, for now.
Thanks to all who responded.
steve
-----Original Message-----
From: Nyamul Hassan [mailto:mnhassan_at_usa.net]
Sent: Thursday, May 06, 2010 4:15 PM
To: Squid Users
Subject: Re: [squid-users] Increasing File Descriptors
He needs more FDs because this single box is handling 5,000 users over
a 400 Mbps connection. We run around 2,000 users on generic hardware,
and have seen FDs as high as 20k.
We use CentOS 5 and the following guide is a good place to increase
the FD limit:
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
The command "cat /proc/sys/fs/file-max" shows how many maximum FDs
your OS can handle.
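If file-max itself is too low, the usual knobs covered in that guide
look roughly like this (the numbers are only examples):

  # Raise the system-wide ceiling, persistent across reboots
  echo "fs.file-max = 131072" >> /etc/sysctl.conf
  sysctl -p

  # Raise the per-user limit in /etc/security/limits.conf:
  #   squid soft nofile 128000
  #   squid hard nofile 128000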
After you've made sure that your OS allows your desired FD limit,
restart Squid. Squid shows how many FDs it is configured for in its
"General Runtime Information" page (mgr:info on the CLI) of the
CacheMgr interface. If this still shows lower than the OS limit you
just saw, then you might need to recompile Squid with the
'--with-maxfd=<your-desired-fdmax>' flag set during "./configure".
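If you don't have the web CacheMgr handy, squidclient can pull the same
page (assuming Squid is listening on its default port):

  squidclient mgr:info | grep -i "file desc"
  # look for the line "Maximum number of file descriptors"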
As a side note, if you are using Squid as a forward proxy, you might
have better results with Squid 2.7.x.
Regards
HASSAN
On Fri, May 7, 2010 at 00:53, George Herbert <george.herbert_at_gmail.com> wrote:
>
> Do this:
>
> ulimit -Hn
>
> If the value is 32768, that's your current kernel/sys max value and
> you're stuck.
>
> If it's more than 32768 (and my RHEL 5.3 box says 65536) then you
> should be able to increase up to that value. Unless there's an
> internal signed 16-bit int involved in FD tracking inside the Squid
> code, something curious is happening...
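>
> (For reference, 32768 is exactly 2^15 -- a quick check:
>
>   echo $((1<<15))    # prints 32768
>
> so a signed 16-bit counter, which tops out at 32767, would explain a
> hard stop at precisely that boundary.)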
>
> However - I'm curious as to why you'd need that many. I've had
> top-end systems with Squid clusters compiled for 16k file
> descriptors that only ever really used 4-5k. What are you doing that
> you need more than 32k?
>
>
> -george
>
> On Thu, May 6, 2010 at 10:32 AM, Bradley, Stephen W. Mr.
> <bradlesw_at_muohio.edu> wrote:
> > Unfortunately that won't work for me above 32768.
> >
> > I have the ulimit in the startup script and that works okay, but I need more than 32768.
> >
> > :-(
> >
> >
> >
> > -----Original Message-----
> > From: Ivan . [mailto:ivanhec_at_gmail.com]
> > Sent: Thursday, May 06, 2010 5:17 AM
> > To: Bradley, Stephen W. Mr.
> > Cc: squid-users_at_squid-cache.org
> > Subject: Re: [squid-users] Increasing File Descriptors
> >
> > worked for me
> >
> > http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
> >
> > no recompile necessary
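> >
> > (That link boils down to raising the limit before Squid starts; on
> > RHEL-style packages the stock init script also honors a SQUID_MAXFD
> > setting, so something like this -- values are only examples:
> >
> >   # /etc/sysconfig/squid
> >   SQUID_MAXFD=16384
> >
> >   # or directly in /etc/init.d/squid, before squid is launched:
> >   ulimit -HSn 16384
> > )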
> >
> >
> > On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
> > <bradlesw_at_muohio.edu> wrote:
> >> I can't seem to increase the number above 32768 no matter what I do.
> >>
> >> I've tried ulimit during compile, sysctl.conf, and everything else, but no luck.
> >>
> >>
> >> I have about 5,000 users on a 400 Mbit connection.
> >>
> >> Steve
> >>
> >> RHEL5 64bit with Squid 3.1.1
> >
>
>
>
> --
> -george william herbert
> george.herbert_at_gmail.com
>