On Fri, Nov 10, 2000 at 10:21:15PM +0100, Martin Robbins wrote:
> I know this isn't really what you asked, but we have three server machines
> running. Below is the log from ONE of them for yesterday; the logs can be
> huge. The total so far today (we rotate every day): 279274787 Nov 10 22:01 access.log
>
> We have up to around 20,000 employees using the three servers.
>
> I am not trying to play "mine's bigger than yours" but actually to
> see if anyone else has much experience with systems on this scale?

Yes - we have. Our current setup is a cluster of Ultra 10s in front of a
load-balancing switch, each with a 440MHz CPU, 768MB RAM and six 9GB 10krpm
cache drives, running Solaris 2.8 and squid 2.3.STABLE4-hno.20000819. Each
machine is handling 1.5 to 2 million requests per day. Hit rates of 65% by
requests and 35% by volume are typical. Our gzipped logfiles run to about
50MB per machine per day - enough that we've now installed a dedicated
machine for log storage and analysis.
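
A quick script along these lines will pull both figures out of access.log.
It's only a rough sketch - it assumes the default native log format (result
code in field 4, reply size in field 5) and counts any result code
containing "HIT" as a hit:

import sys

total_reqs = hit_reqs = 0
total_bytes = hit_bytes = 0

with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 5:
            continue                  # skip malformed lines
        code = fields[3]              # e.g. TCP_HIT/200, TCP_MISS/200
        size = int(fields[4])         # reply size in bytes
        total_reqs += 1
        total_bytes += size
        if "HIT" in code:
            hit_reqs += 1
            hit_bytes += size

if total_reqs:
    print("hit rate by requests: %.1f%%" % (100.0 * hit_reqs / total_reqs))
    print("hit rate by volume:   %.1f%%" % (100.0 * hit_bytes / total_bytes))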
Traffic has been climbing rapidly since the start of the new university
year, and we are starting to hit performance problems at peak times
(loads of over 150,000 requests per machine per hour seem to result in a
very marked increase in the median hit service time). I can tell I'm
going to have to investigate ways of getting more performance out of these
boxes - any advice would be welcome.
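
If anyone wants to look for the same effect in their own logs, something
along these lines will bucket requests per hour against the median service
time. Again, only a sketch: it assumes the native log format, with the unix
timestamp in field 1 and the elapsed time in milliseconds in field 2.

import sys
import time
from collections import defaultdict
from statistics import median

per_hour = defaultdict(list)

with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 2:
            continue                  # skip malformed lines
        stamp = float(fields[0])      # unix timestamp
        elapsed = int(fields[1])      # service time in ms
        hour = time.strftime("%Y-%m-%d %H:00", time.localtime(stamp))
        per_hour[hour].append(elapsed)

for hour in sorted(per_hour):
    times = per_hour[hour]
    print("%s  %7d reqs  median %5d ms" % (hour, len(times), median(times)))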
--
--------------- Robin Stevens <robin.stevens@oucs.ox.ac.uk> -----------------
Oxford University Computing Services   http://www-astro.physics.ox.ac.uk/~rejs/
(+44)(0)1865: 726796 (home)  273212 (work)  273275 (fax)  Mobile: 07776 235326