> Hi all.
>
> Maybe this seems like the usual already-asked question about Squid sizing.
> I had a look at the FAQ and searched the list archives (and Google), but I
> did not find a satisfying answer.
> I have some experience as a Linux admin, with a few Squid installations, but
> only for small sites. Now I have been asked to propose a Squid-based solution
> for URL filtering of the web traffic of 12,000 users. The Internet connection
> is 54 Mbit/s. Unfortunately I have not yet been given the number of HTTP
> requests per second, but I suppose that web surfing is not the main business
> of these users, so they are not going to use all that bandwidth for HTTP.
> I was asked whether all this can be done with just a cluster of 2 machines
> for availability (which would be appreciated, since the main point seems to
> be URL filtering, not necessarily saving bandwidth), or whether it is
> mandatory to implement a cache hierarchy.
> I thought about some scenarios. In the worst one I assumed I need 400 GB of
> cache storage and about 10 GB of RAM. I would like to know if it is possible
> (and safe) to run such a Squid machine. In particular, I wonder if I am going
> to run out of file descriptors or available TCP ports, or whether there are
> other constraints I should think about. Or whether I should instead consider
> splitting the load across a set of different machines.
>
- The amount of storage needed should roughly equal the total amount
  of traffic generated by this community in one week.
  This also determines the physical memory the box should have
  (see the FAQ); a back-of-the-envelope sketch follows below.
- On average, 12,000 users could put you in the 300 req/s range,
  which is rather high-end.
  I would advise low-end servers with the highest CPU clock speed
  available. In that case I would probably use two of them, with
  load balancing (see the config sketch below as well).
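To make those rules of thumb concrete, here is a rough back-of-the-envelope
calculation (plain Python; the link-utilization and per-user request figures
are pure assumptions, replace them with measured values once you have them):

    # Rough Squid sizing sketch -- every input below is an assumption,
    # replace with measured values when available.
    link_mbit     = 54       # Internet link, Mbit/s
    http_share    = 0.10     # assumed average share of the link used by HTTP
    users         = 12000
    reqs_per_user = 0.025    # assumed requests/sec per user at busy times

    http_bytes_per_sec = link_mbit * 1e6 / 8 * http_share
    week_gb = http_bytes_per_sec * 86400 * 7 / 1e9

    # Squid FAQ rule of thumb: very roughly 10 MB of index RAM
    # per GB of cache_dir, on top of cache_mem and the OS itself.
    index_ram_gb = week_gb * 10 / 1024
    peak_reqs = users * reqs_per_user

    print("cache_dir size ~ %.0f GB (one week of HTTP traffic)" % week_gb)
    print("index RAM      ~ %.1f GB (plus cache_mem and OS)" % index_ram_gb)
    print("request rate   ~ %.0f req/s" % peak_reqs)

With those assumed inputs this lands at roughly 400 GB of cache, about 4 GB
of index RAM and about 300 req/s, i.e. in the same ballpark as the worst-case
numbers quoted above.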
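And a minimal squid.conf sketch of the pieces being discussed, per box.
Paths, sizes, networks and the blocklist file are only placeholders; a
dstdomain ACL is the simplest built-in way to filter URLs, while an external
redirector such as squidGuard is the usual choice for large blocklists (the
directive is redirect_program on Squid 2.5, url_rewrite_program on 2.6):

    # --- cache sizing (adjust to the measured weekly traffic) ---
    cache_dir ufs /var/spool/squid 409600 64 256    # ~400 GB on-disk cache
    cache_mem 2048 MB                               # hot objects kept in RAM

    # --- simple URL filtering with a built-in ACL ---
    acl localnet src 10.0.0.0/8                     # adjust to your networks
    acl blocked dstdomain "/etc/squid/blocked_domains"
    http_access deny blocked
    http_access allow localnet
    http_access deny all

    # --- or hand filtering to an external redirector such as squidGuard ---
    #redirect_program /usr/local/bin/squidGuard -c /etc/squid/squidGuard.conf
    #redirect_children 16

    # Also remember to raise the OS file descriptor limit (ulimit -n) well
    # above the expected number of concurrent connections before starting Squid.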
M.
Received on Sun Jan 29 2006 - 08:13:46 MST