Hassan is correct.
We have pulled all three of our production servers offline because the last thing we can afford is to run out of FDs, which causes serious problems for our users.
Squid itself counts around 1,500 users across all three boxes, but some of those are NATed addresses hiding 100 or more users each, so the real total is around 5,000.
During peak usage two weeks ago, when we pulled them out of the stream, we were hitting the limit of 32,768 and it promptly hosed the clients.
I will try the link and any and all suggestions and report back.
The good thing is that when Squid works, it works well: we were saving about 20-40 Mbit/s of bandwidth by caching content and video (using Videocache). Management really wants it back in production, but not with this problem.
Thx
Steve
-----Original Message-----
From: Nyamul Hassan [mailto:mnhassan_at_usa.net]
Sent: Thursday, May 06, 2010 4:15 PM
To: Squid Users
Subject: Re: [squid-users] Increasing File Descriptors
He needs more FDs because this single box is handling 5,000 users over
a 400 Mbps connection. We run around 2,000 users on generic hardware,
and have seen FDs as high as 20k.
We use CentOS 5, and the following guide is a good place to start when
raising the FD limit:
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
The command "cat /proc/sys/fs/file-max" shows the maximum number of
FDs your OS can handle.
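A minimal sketch of what that guide boils down to (values are
illustrative, and it assumes Squid runs as user "squid"):

    # Raise the kernel-wide cap (as root) and make it persistent:
    echo "fs.file-max = 131072" >> /etc/sysctl.conf
    sysctl -p

    # Raise the per-process cap in /etc/security/limits.conf:
    #   squid  soft  nofile  65536
    #   squid  hard  nofile  65536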
After you've made sure that your OS is providing the desired FD limit,
restart Squid. Squid shows how many FDs it is configured for in
its "General Runtime Information" page (mgr:info on the CLI) from the
CacheMgr interface. If this still shows a value lower than the OS limit
you just saw, then you might need to recompile Squid with the
'--with-maxfd=<your-desired-fdmax>' flag set during "./configure"
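For example (a sketch; squidclient ships with Squid, and 65536 is just
an illustrative target):

    # Ask the running Squid what it is configured for:
    squidclient mgr:info | grep -i 'file descriptor'

    # If that is lower than the OS limit, rebuild:
    ./configure --with-maxfd=65536
    make && make install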
As a side note, if you are using Squid as a forward proxy, you might
have better results with Squid 2.7.x.
Regards
HASSAN
On Fri, May 7, 2010 at 00:53, George Herbert <george.herbert_at_gmail.com> wrote:
>
> Do this:
>
> ulimit -Hn
>
> If the value is 32768, that's your current kernel/sys max value and
> you're stuck.
>
> If it's more than 32768 (and my RHEL 5.3 box says 65536) then you
> should be able to increase up to that value. Unless there's an
> internal signed 16-bit int involved in FD tracking inside the Squid
> code, something curious is happening...
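>
> A quick way to see the relevant numbers at once (a sketch; these
> paths are standard Linux, nothing Squid-specific):
>
>     ulimit -Hn                   # per-process hard limit
>     cat /proc/sys/fs/file-max    # kernel-wide ceiling
>     cat /proc/sys/fs/file-nr     # allocated / unused / max right now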
>
> However - I'm curious as to why you'd need that many. I've had
> top-end systems with Squid clusters compiled for 16k file
> descriptors that only ever really used 4-5k. What are you doing that
> you need more than 32k?
>
>
> -george
>
> On Thu, May 6, 2010 at 10:32 AM, Bradley, Stephen W. Mr.
> <bradlesw_at_muohio.edu> wrote:
> > Unfortunately that won't work for me above 32768.
> >
> > I have the ulimit in the startup script and that works okay, but I need more than 32768.
> >
> > :-(
> >
> >
> >
> > -----Original Message-----
> > From: Ivan . [mailto:ivanhec_at_gmail.com]
> > Sent: Thursday, May 06, 2010 5:17 AM
> > To: Bradley, Stephen W. Mr.
> > Cc: squid-users_at_squid-cache.org
> > Subject: Re: [squid-users] Increasing File Descriptors
> >
> > worked for me
> >
> > http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
> >
> > no recompile necessary
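> >
> > (A sketch of the no-recompile approach, assuming the stock
> > RHEL-style init script; the 8192 value is illustrative:)
> >
> >     # /etc/sysconfig/squid
> >     SQUID_MAXFD=8192
> >
> >     # or raise the limit in the init script before squid starts:
> >     ulimit -HSn 8192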
> >
> >
> > On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
> > <bradlesw_at_muohio.edu> wrote:
> >> I can't seem to increase the number above 32768 no matter what I do.
> >>
> >> I've tried ulimit during compile, sysctl.conf, and everything else, but no luck.
> >>
> >>
> >> I have about 5,000 users on a 400mbit connection.
> >>
> >> Steve
> >>
> >> RHEL5 64bit with Squid 3.1.1
> >
>
>
>
> --
> -george william herbert
> george.herbert_at_gmail.com
>