|
| Since my Squid is 310 MB, I am still missing 50 MB :-} The various data
| structures used by Squid and displayed at the end of the "info" output
| seem large enough to explain it. If I understand correctly, our only
| choice is to drastically reduce cache_mem or to buy more memory?
|
Unfortunately it seems inevitable that Squid will show fragmentation
(and an apparent memory leak) given the current crop of malloc
implementations. I don't see anyone out there volunteering to write a
portable, _efficient_, garbage-collecting malloc, which is what it
currently needs. Failing that, periodic restarts reduce process sizes.
However, it may be worth checking whether your named has died (for
whatever reason). That happened here a couple of months back, with much
the same symptoms: a mysterious slowdown while everything appeared to be
functioning. It's not something you pick up on quickly, since normally
everything stops when named declines to function.
(Reducing cache_mem in the 1.0.x Squids makes things worse, by the way -
once you move into delete-behind mode for everything, your CPU load goes
up quite a lot, along with malloc fragmentation and so on. You can either
reduce your load, e.g. by timing out slow clients quickly, or reduce your
cache swap so you become stable again.)
Brian
---
Brian Denehy                        Internet: B-Denehy@adfa.oz.au
IT Services                         MHSnet:   B-Denehy@cc.adfa.oz.au
Australian Defence Force Academy    UUCP: !uunet!munnari.oz.au!cc.adfa.oz.au!bvd
Northcott Dr. Campbell ACT Australia 2600
+61 6 268 8141    +61 6 268 8150 (Fax)

Received on Sat Oct 19 1996 - 03:23:43 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:33:18 MST