2000 reqs/sec? So you're supporting a 155Mbps link?
A single Squid box will not do 2000 reqs/sec no matter what you do. Very
few web caches will, and certainly not from 2 IDE disks (at the
cacheoff, you'll see boxes with 16 SCSI Ultra 160 disks doing about that
rate, and they aren't running Squid).
I'll guess you mean 2000/min?
If so, that's not too hard at all even without an async i/o compile.
But you don't have enough RAM for 70GB of cache_dir space. 1GB of RAM
/might/ make it. I would actually recommend you drop the size of your
cache_dirs some, and raise the RAM some as well.
Shane T. Ferguson wrote:
> Hi,
>
> What would be the recommended amount of RAM for Squid 2.4.STABLE1 on Linux
> 2.2.18
> running transparently (WCCP) with a Cisco router. For disk drives, I am
> using 2x 40GB ATA-100 drives set up with the following in squid.conf:
>
> cache_dir ufs /cache2 35000 32 256
> cache_dir ufs /cache3 35000 32 256
> cache_mem 32 MB
> memory_pools off
> maximum_object_size 10240 KB
> maximum_object_size_in_memory 1024 KB
>
> The average client load is approximately 2000 requests/second and will be
> increasing over the next month or so.
>
> The reason I ask about RAM requirements is because I currently have 768MB
> installed and an enormous amount of paging is occurring (I'm running out
> of swap space fast). The problem fits the description of the FAQ section:
> 11.17 My Squid becomes very slow after it has been running for some time.
>
> I have yet to try GNU malloc. If anyone can let me know whether I'm
> actually running this box with insufficient RAM, I'd appreciate it. One
> other question: should a box with this load have been set up with
> async-io enabled?
> Thanks
>
> Shane T. Ferguson
--
Joe Cooper <joe@swelltech.com>
Affordable Web Caching Proxy Appliances
http://www.swelltech.com

Received on Mon Apr 23 2001 - 19:10:49 MDT