I'm running with a similar config on an Ultra 10. When I originally
installed 2.4.STABLE1 I had some sort of memory leak that caused memory
usage to grow over time (usually by a large jump all at once, then it
would hold steady for a while).
I upgraded to one of the 2.4 daily tarballs and reconfigured with
dlmalloc:
# ./configure --enable-storeio=diskd,ufs --enable-underscores \
    --enable-dlmalloc --enable-xmalloc-statistics
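Once it's rebuilt and installed, a quick sanity check that dlmalloc
actually made it into the binary is to ask squid what it was built with
(path assumes the default /usr/local/squid prefix):

# /usr/local/squid/bin/squid -v

which should list --enable-dlmalloc among the configure options.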
I'm using diskd, not AUFS, but since then I haven't had any memory
problems.
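If you want to watch the memory behaviour over time rather than waiting
for top/prstat to look ugly, the cache manager's memory reports are
handy. Assuming squidclient is installed and squid is listening on the
default port, something like:

# squidclient mgr:info
# squidclient mgr:mem

With --enable-xmalloc-statistics compiled in there should also be
allocation-size counters in the manager output (that's what the option
is for), which makes it easier to tell a real leak from memory that
malloc is just holding onto.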
-Mike
On Tue, 9 Apr 2002, orko wrote:
> Hey
>
> Just a quick query. It seems that squid (slowly) consumes more and more
> swap, to the point where it needs to be restarted to get the memory
> back. I'm not sure exactly what the time frame of this is, but roughly a
> fortnight or less. At the moment, which is approx. 10hrs after a
> restart, it looks like:
>
> PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
> 250 squid 100 30 0 652M 648M cpu/0 285:26 38.51% squid
>
> which seems OK, but before the restart, it looked like:
>
> PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
> 18615 squid 100 60 0 2453M 1174M sleep 162.4H 20.00% squid
>
> The machine is an Ultra 250 running squid 2.4.STABLE4 under Solaris 8.
> 2GB memory, A1000 disk array.
> cache_mem is 60MB, and there are 6 cache_dirs, each configured as:
> cache_dir aufs /cache/00 10000 16 256
> It's currently handling ~50 client req/s.
>
> (need more info?)
>
> Could it just be that the spool is too large for the available RAM?
>
> gracias
>
>