Hi
> We're running a stable 1.1.22 on a secondary proxy server, but it's possibly
> having some performance issues.
> We're running 106GB of cache and 1GB of RAM on this server.
Fun ;)
> but then again, we're getting nearly 20-25 million hits a day (when the
> wind's blowing the right direction and it's not raining) on a good day.
Those TCP or UDP?
If that's TCP, then 20-25 million requests spread over a day average out
to roughly 250-290 requests a second, which probably means you're peaking
at about 2000 HTTP requests a second in busy periods... I don't know
about that....
> On the local networks around the admin offices or operations centre, we can
> download files and pages with no problems or obvious performance issues.
Small files or large files? Large files may be happy, but you might
find that the cache is adding latency to small files: and that's what
is making the clients unhappy.
> Anyone have any tips, or a similar setup which may be providing good results?
> We don't use the swap, so it's not thrashing, and the CPU doesn't reach above
> 0.20.
If you run 'ps auxww' and look at the 'STAT' field for Squid, what is
the value? Try running it 10 times, 15 times, 20, whatever. If you find
that it's mostly 'D', then your problem is disk latency. You will
probably find that the system is spending most of its time in 'open'
or 'close'. This problem is fixed by upgrading to Squid-2: compiled
with async-io ('./configure --enable-async-io'), Squid-2 uses threads
to handle these functions, which means that a slow 'open' request will
not slow down the entire cache.
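To make that repeated 'ps' check less tedious, a rough sketch like this
samples the Squid process state once a second (it assumes a BSD-style
'ps' as on Linux, where STAT is the eighth column, and that your binary
is actually called 'squid' -- adjust the grep pattern if not):

    # sample Squid's process state once a second for 20 seconds
    for i in `seq 1 20`; do
        ps auxww | grep '[s]quid' | awk '{print $8}'
        sleep 1
    done

If most of the samples come back 'D', you're stuck in uninterruptible
disk waits.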
I used to run a cache a LOT less loaded than yours: I found that 70%
of the time Squid was waiting for open() to complete. It was also
spending about 12% of its time waiting for close().
Note that adding more disks isn't really the solution here: Squid
NEEDS to handle its opens and closes the right way. It's an OS
(filesystem) limitation, not a disk limitation.
By upgrading from Squid 1 to Squid 2, I managed to cut down latency by
a factor of 10.
Since you have so much RAM, upgrading to kernel 2.2 will allow you to
cache more directory entries with the dcache: this can also help speed
up your opens and closes. I presume that it also makes better use of
the memory available for caching.
If the cache server is doing mostly ICP queries, you can fiddle with
the "heavy voodoo here" stuff in the config file. This could help you
lots. If your users are complaining because the ICP response is slow,
you most likely need to fiddle with these anyway.
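As a hedged starting point only (the exact tag names and defaults vary
between Squid versions, so trust the comments in your own
squid.conf.default over these values), the ICP-related tags look
something like:

    icp_port 3130           # the standard ICP port
    icp_query_timeout 0     # 0 = let Squid work out the timeout dynamically
    log_icp_queries off     # skip per-query logging to cut overhead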
Things to try:
First: get Squid-2. Use the async-io stuff. Use glibc and
linuxthreads-0.8 (or whatever) -- the build boils down to the sketch
after this list. Don't use the other threads library (the one with a
version number like 1.60), since that's a user-level ("green") threads
implementation: I don't think it will help here.
Solaris x86 with its journalling filesystem. Put the journal on a
disk of its own: this will reduce open/close latencies dramatically.
Apologies to the person I didn't believe (you know who you are), if
they're reading this.
FreeBSD with softupdates may work as well as Solaris, but I have had
no reports of good or bad performance changes.
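For the first suggestion, the build is roughly this (a sketch only: the
prefix is the stock default, and it assumes glibc and LinuxThreads are
already installed):

    # build Squid-2 with the async-io threads code compiled in
    ./configure --prefix=/usr/local/squid --enable-async-io
    make
    make install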
Oskar
Received on Mon Mar 29 1999 - 14:33:15 MST