It took a while to get a new version into production.
File descriptor usage for squid:
    Maximum number of file descriptors:   4096
    Largest file desc currently in use:   1390
    Number of file desc currently in use:  978
    Files queued for open:                   0
    Available number of file descriptors: 3118
    Reserved number of file descriptors:   100
    Store Disk files open:                 916
This is squid-2.4-200108182300. I'd say it still leaks.
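For what it's worth, a crude way to tell a real leak from a momentary load spike is to watch the descriptor count over time (a sketch only; it assumes a single squid server process and that the cache manager answers on localhost:3128):

  # Log the open-descriptor count once a minute; a count that climbs
  # steadily while traffic stays flat points at a leak.  Run as root so
  # /proc/<pid>/fd is readable, and pick the PID by hand if pidof
  # returns more than one squid process.
  PID=$(pidof -s squid)
  while true; do
      echo "$(date '+%H:%M:%S') $(ls /proc/$PID/fd | wc -l)"
      sleep 60
  done >> /tmp/squid-fds.log

  # The cache manager will also say what each descriptor is (network
  # socket vs. disk file), which should separate a socket leak from a
  # store-file leak:
  squidclient -h localhost -p 3128 mgr:filedescriptors | less

(squidclient may be the older `client' binary on some installs, and mgr: access may need the cachemgr password from squid.conf.)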
This particular system is Debian woody with a 9gig and an 18gig disk, and
it serves as our multimedia and image parent cache.
Squid was compiled on Debian 2.2 (potato) w/gcc 2.95.2 using
./configure --disable-ident-lookups --disable-unlinkd \
--with-aio-threads=20 --with-pthreads --enable-storeio='ufs,aufs' \
--enable-removal-policies='lru,heap'
The problem does not occur with diskd.
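(In case it helps anyone compare: switching between the two stores is just the cache_dir lines in squid.conf, something like the sketch below. The paths and sizes are placeholders, not my real config, and diskd has to be built in via --enable-storeio='ufs,aufs,diskd'.)

  # aufs store (pthreads async I/O) -- the setup that leaks fds here:
  #cache_dir aufs /cache1 7000 16 256
  #cache_dir aufs /cache2 15000 16 256

  # diskd store (separate disk I/O helper processes) -- no leak seen:
  cache_dir diskd /cache1 7000 16 256 Q1=64 Q2=72
  cache_dir diskd /cache2 15000 16 256 Q1=64 Q2=72

(Q1/Q2 are the diskd request-queue thresholds; the defaults work as well.)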
From here, I'm kind of lost. I'm not sure how to proceed with hunting
this problem down. Any suggestions?
-- Brian
On Wednesday 25 July 2001 02:01 am, Adrian Chadd wrote:
> On Tue, Jul 24, 2001, Brian wrote:
> > The error means the IO queue has built up to an unusual level. With
> > hard drives, the drive can fall behind and leave all of the threads
> > blocked for too long. In a case like this, a flash of load is
> > probably building up the queue for a moment. More threads would allow
> > more requests in progress.
> >
> > Our squid httpd-accels produced several congestion and overloading
> > messages an hour without harm (well... actually it seems to leak file
> > descriptors around the overloading point). I finally tweaked it so
> > requests wouldn't be saved to disk until squid got at least 2 requests
> > for it.
>
> Leaky fds? Can you reproduce it with the latest squid-2.4 snapshot?
> That might make it easier for us squid hackers to track down any bug
> and squish it.