I have a cluster of three Squid 2.4.STABLE servers running in accelerator
mode, each with a single class 2 delay pool. The traffic through this
setup will soon increase by several orders of magnitude,
and as we're concerned about the stability of the delay pools code
I've been monitoring it fairly closely, using the cachemgr stats
and others to see exactly what is going on. [The script I use
to pull the stats down for rrdtool is attached, if anyone's interested.]
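The general idea is something like this - a simplified sketch rather than
the attached script itself; the hostnames, rrd paths and the regex are
placeholders for whatever your setup and cachemgr output actually look like:

#!/usr/bin/env python
# Simplified sketch of the stats-pulling idea: fetch a cachemgr page
# with squidclient, pick one number out of it and feed it to rrdtool.
# Hostnames, rrd paths and the regex are placeholders - adjust them to
# whatever your setup and cachemgr output actually look like.
import re
import subprocess
import time

HOSTS = ["squid1", "squid2", "squid3"]        # placeholder hostnames

def cachemgr_page(host, page):
    """Fetch a cachemgr page (e.g. 'info' or 'delay') via squidclient."""
    out = subprocess.run(["squidclient", "-h", host, "-p", "80", "mgr:" + page],
                         capture_output=True, text=True, check=True)
    return out.stdout

now = int(time.time())
for host in HOSTS:
    text = cachemgr_page(host, "delay")
    # assumed field name - grab whatever line carries the number you graph
    m = re.search(r"memory\D*(\d+)", text, re.IGNORECASE)
    value = m.group(1) if m else "U"          # 'U' = unknown sample to rrdtool
    subprocess.run(["rrdtool", "update", "/var/rrd/%s-delay.rrd" % host,
                    "%d:%s" % (now, value)], check=True)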
Anyway, my question is about the cachemgr memory usage metric:
The behaviour I've noticed is on an rrdtool graph at:
http://newcastle.adm.onet.pl:8080/
As you can see on the first graph, there are 3 squid servers, each
with its own delay pool, on 3 different machines. Each delay pool
progressively allocates buckets up to 255, then throws all the
buckets away, deallocates the memory and starts again. [See the
bottom graph of ps -o sz: each step down in memory coincides with
reaching 255 and restarting.]
Seems a bit weird but I'm sure there's a good reason (?)
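In toy-model terms, the pattern I'm seeing looks roughly like this
(just my reading of the graphs, not the actual squid source):

# Toy model of the pattern on the graphs (my interpretation only, not
# the real delay pools code): buckets are handed out one at a time until
# all 255 exist, then the whole lot is freed and the cycle starts again.
MAX_BUCKETS = 255
buckets = []

def tick():
    global buckets
    if len(buckets) < MAX_BUCKETS:
        buckets.append(0)    # another bucket gets allocated
    else:
        buckets = []         # ...and at 255 everything is thrown away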
The overall memory usage seems to be fairly constant
and very lightweight/not leaky so I'm not complaining.
What I'm wondering is why the cachemgr figure for delay pools
memory doesn't vary. As you can see on the second graph it just
stays constant, but the ps output graph shows that isn't really
the case - it frees about 2k every time it reaches 255 buckets.
What exactly is this memory usage stat measuring? This behaviour
also makes it fairly tricky to work out which (i.e. how many)
buckets are actually in use - I just guess that the ones below
the individual max are likely to be in use - is there a better way?
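For what it's worth, that guess amounts to something like this rough
sketch; the parsing (the "Current:" field name in particular) is an
assumption about how the mgr:delay output is laid out in your version:

import re

def guess_buckets_in_use(mgr_delay_text, individual_max):
    """Count buckets whose current level is below the individual max,
    on the guess that anything below the max has seen traffic.
    The 'Current:' field name is an assumption - adjust to taste."""
    in_use = 0
    for line in mgr_delay_text.splitlines():
        m = re.search(r"Current:\s*(\d+)", line)
        if m and int(m.group(1)) < individual_max:
            in_use += 1
    return in_use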
cheers,
Mike