Hi all,
I've looked around the FAQs a bit but didn't quite find all the answers I need.
My situation:
I am thinking of setting up Squid for caching, having worked with it in the
past, but I'm unclear about some sizing issues for a large installation.
We are a broadband (cable modem) based ISP with some dialups too. My
immediate need is to service around 50-60 HTTP requests per second.
That works out to the big figure of 4,320,000 requests per day, which I did
not find covered in the JANET article by Martin Hamilton in the FAQ. With an
average 8 KB object size, I gather I'd need 32 GB of hard disk and 512 MB of
RAM minimum.
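To show my working, here is a quick back-of-the-envelope Python sketch. The
one-day disk retention and the ~10 MB of index RAM per GB of cache disk are
rules of thumb I'm assuming, not figures taken from the FAQ:

REQS_PER_SEC  = 50        # lower end of the 50-60 req/s estimate
AVG_OBJECT_KB = 8         # average object size
CACHE_DAYS    = 1         # assumption: keep roughly one day of traffic on disk
RAM_PER_GB_MB = 10        # assumption: ~10 MB of index RAM per GB of cache disk

reqs_per_day = REQS_PER_SEC * 86_400                                 # 4,320,000
disk_gb = reqs_per_day * AVG_OBJECT_KB * CACHE_DAYS / (1024 * 1024)  # ~33 GB
ram_mb = disk_gb * RAM_PER_GB_MB          # index only; excludes cache_mem and OS

print(f"requests/day: {reqs_per_day:,}")
print(f"disk:         ~{disk_gb:.0f} GB")
print(f"index RAM:    ~{ram_mb:.0f} MB")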
We currently run on a 14 Mbps Internet link; this will be up to 30 Mbps by
the end of the year, so the requests per second will be increasing by quite
an amount! I'm estimating 64-80 GB of hard disk and 1 GB of RAM minimum for
this.
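Projecting that forward the same way, and assuming request volume scales
roughly linearly with link capacity (a guess on my part; actual growth will
depend on subscriber take-up), the numbers land in the same ballpark as the
estimate above:

CURRENT_LINK_MBPS = 14
FUTURE_LINK_MBPS  = 30
AVG_OBJECT_KB     = 8
RAM_PER_GB_MB     = 10    # assumption: ~10 MB of index RAM per GB of cache disk

for reqs_per_sec in (50, 60):
    scaled_rps = reqs_per_sec * FUTURE_LINK_MBPS / CURRENT_LINK_MBPS
    reqs_per_day = scaled_rps * 86_400
    disk_gb = reqs_per_day * AVG_OBJECT_KB / (1024 * 1024)
    ram_mb = disk_gb * RAM_PER_GB_MB
    print(f"{reqs_per_sec} req/s now -> ~{scaled_rps:.0f} req/s, "
          f"~{disk_gb:.0f} GB disk, ~{ram_mb:.0f} MB index RAM")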
My questions:
At what point do I need to consider clustering? (Note that the above is all
at one single physical location.)
Practically speaking, what is the maximum load seen in the field/production?
Does Squid break under such heavy load if running on appropriate
hardware/memory?
Is anyone aware of any hardware, kernel/OS, or Squid issues with addressing
more than 1 GB of physical RAM?
I'm open to any OS that fits the job (Linux or Solaris) and any suggestions
you may have.
Thanks for your time
Regards
Mark