Ronald wrote:
> Thanks Joe. I understand that now. But our Squid is still using 270MB (out of
> 512MB) of memory, and CPU usage is high; when it reaches 100% the
> system crashes. How can we use the memory and CPU more effectively?
Squid does the best it can with the architecture it has. It keeps an
in-core index of every object in your cache. If it is using too much
memory (270MB is about what the Squid processes on our 512MB boxes run
at, actually, so I don't consider that too much memory), you can either
lower the size of your cache_dirs or increase the amount of memory in
your machine.
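For example, lowering the cache_dir size looks something like this in
squid.conf (assuming a 2.3/2.4-style cache_dir line; the numbers here
are only illustrative, not a recommendation for your box):

  # Shrinking the cache_dir shrinks the in-core index; cache_mem is
  # memory for hot objects, not for the index itself.
  cache_mem 32 MB
  cache_dir ufs /cache1 8000 16 256   # 8GB of disk cache instead of, say, 16GB

The rule of thumb usually quoted is on the order of 10MB of index memory
per GB of cache_dir, though it varies with your mean object size.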
I don't see Squid crashing at 100% CPU utilization--though I don't push
it more than about 30% beyond the request rate at which it hits 100%.
In fact, I'm running a polygraph benchmark right now on a Squid at 0%
idle, and it has been running for 30 minutes or more. Are you using a
known-good Squid for async i/o? 2.3 is not stable when compiled with
async i/o, while a recent 2.4 daily snapshot is. 2.2STABLE5+hno (or
Henrik's mara branch from SourceForge) is also very solid in an async
i/o compile. And are you running it on a modern version of Linux
(preferable) or Solaris (probably works, but I don't know that Henrik
has tuned for it)?
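(For reference, by an async i/o compile I mean a build along these
lines; the thread count here is only an example, and the best value
depends on how many disks you have:)

  % ./configure --enable-async-io=16
  % make && make install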
> I am even using a good processor, a PIII 833MHz.
Is it fast enough for your load, though? An 833MHz box with 512MB of RAM
in a well-tuned system with 2 or 3 10K RPM Ultra 160 disks will provide
about 12-16Mbit/s of throughput (roughly 120-160 reqs/sec). If you are supporting
higher loads than that, then you need faster hardware. Otherwise, you
need to tune your system.
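(The arithmetic behind those numbers, taking the mean object size they
imply, about 13KB per request, which is in the right ballpark for
polymix-style traffic:)

  12 Mbit/s ~= 1.5 MB/s; 1500 KB/s / ~13 KB per object ~= 115 reqs/sec
  16 Mbit/s ~= 2.0 MB/s; 2000 KB/s / ~13 KB per object ~= 155 reqs/sec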
> My question is: why is Squid more CPU hungry in async I/O mode? Do you feel
> there is anything that needs to be improved in the async I/O implementation
> at the Squid code level?
Async i/o takes more CPU because it should. It lets you use more of
your CPU because it doesn't have to wait for disk accesses to complete
the way the single-threaded Squid does. And there is some additional
overhead (both memory and CPU) when using a threaded Squid. It's just
the nature of multi-threaded code.
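If it helps to see the shape of it, here is a rough sketch of the
thread-pool pattern (illustrative C, not Squid's actual aiops code): the
main thread queues the disk work and keeps running, a worker thread does
the blocking read(), and the mutex/condition-variable traffic is exactly
the sort of extra bookkeeping I mean.

/* Thread-pool i/o sketch (illustrative only).  Compile with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

struct io_req {
    int fd;                      /* file descriptor to read from */
    char buf[4096];              /* destination buffer */
    struct io_req *next;
};

static struct io_req *queue = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Worker thread: sleeps until work arrives, then does the blocking read.
 * Only this thread ever waits on the disk. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct io_req *r;
        pthread_mutex_lock(&lock);
        while (queue == NULL)
            pthread_cond_wait(&cond, &lock);
        r = queue;
        queue = r->next;
        pthread_mutex_unlock(&lock);

        ssize_t n = read(r->fd, r->buf, sizeof(r->buf));   /* blocks here */
        printf("worker: read %ld bytes from fd %d\n", (long)n, r->fd);
        close(r->fd);
        free(r);
    }
    return NULL;
}

/* Main thread: queue the request and return immediately, so it can go
 * back to servicing network sockets instead of waiting on the disk. */
static void queue_io(int fd)
{
    struct io_req *r = malloc(sizeof(*r));
    r->fd = fd;
    pthread_mutex_lock(&lock);
    r->next = queue;
    queue = r;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    queue_io(open("/etc/hosts", O_RDONLY));    /* hand off one disk read */
    sleep(1);                                  /* give the worker time to finish */
    return 0;
}

The real code also has a fixed pool of threads, has to hand completed
requests back to the main thread, and handles opens, writes, closes,
and unlinks as well; that is where the extra bookkeeping (and CPU) goes.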
Yes, there are a lot of things that could be improved in Squid with
respect to memory and CPU efficiency. Start reading the devel archives
for the past several months, and follow the current discussion of
Squid-3 ideas, to get a good idea of what. There isn't much to be
gained by rehashing those discussions here. Suffice it to say there is
no quick fix to make Squid more efficient. Henrik has done a lot of
very solid work on the 2.4 async i/o code, and it works really well
given the other bottlenecks in Squid. Other than that, you'll have to
dig in and get your hands dirty (start programming) if you want to see
Squid get better sooner.
Good luck.
> Thanks again, Joe.
> Ronald
>
>
>>Lower the request rate that Polygraph is producing. Your box is being
>>overloaded (just as the error says).
>>
>>Balu wrote:
>>
>>
>>>Hi List,
>>>
>>>I have compiled Squid with async I/O with the default
>>>number of threads. When I tested this using the polymix-3
>>>workload I am getting the following error in the top2
>>>phase and the test fails. How can I overcome this problem?
>>>
>>>
>>>2001/07/19 17:23:57| 0 Swapfile clashes avoided.
>>>2001/07/19 17:23:57| Took 5.7 seconds (9667.7 objects/sec).
>>>2001/07/19 17:23:57| Beginning Validation Procedure
>>>2001/07/19 17:23:57| Completed Validation Procedure
>>>2001/07/19 17:23:57| Validated 55577 Entries
>>>2001/07/19 17:23:57| store_swap_size = 767984k
>>>2001/07/19 17:23:57| storeLateRelease: released 0 objects
>>>2001/07/19 22:36:18| aio_queue_request: WARNING - Disk I/O overloading
>>>2001/07/19 22:50:14| aio_queue_request: WARNING - Disk I/O overloading
>>>2001/07/19 23:12:15| aio_queue_request: WARNING - Disk I/O overloading
>>>
>>>
>>>Can anybody explain how async I/O works?
>>>
>>>Regards,
>>>-Balu.
--
Joe Cooper <joe@swelltech.com>
Affordable Web Caching Proxy Appliances
http://www.swelltech.com
Received on Fri Jul 20 2001 - 03:58:24 MDT