Squid runs as a single process, so it won't use that second processor
unless you run two copies of it. Our Celeron 366 systems do 10Mbit
without a problem -- 15 with some prodding. Extrapolating from that, my
guess is no: a single Squid instance won't keep up with 100Mbps.
If you have enough disk space on each end, I would suggest running an
rsync server on the stats server and having the web servers upload their
logs during a slow period.
As for cutting FDs: add the vhost to the log format and log all of the
sites together. You would lose the per-site log files by moving to
Squid anyway.
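If the web servers are Apache (an assumption on my part), that looks
something like the following -- one shared log instead of two or three
per vhost, with the site name in the first field:

    # one combined log for every vhost; %v records the vhost's ServerName
    LogFormat "%v %h %l %u %t \"%r\" %>s %b" vcommon
    CustomLog /var/log/apache/access_log vcommon

The stats server can then split it back out per site if needed (Apache
ships a split-logfile script in support/ that keys on that first %v
field).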
-- Brian
On Saturday 03 November 2001 04:05 am, Sagi Brody wrote:
> Hello,
> I'm looking to use Squid to ease the logging on my web servers.
> However, I've never used Squid before and would like to know if this is
> possible and how much of a load it will present. Currently, I'm NFS
> mounting all my web servers to a stats server, which parses the log files.
> Because NFS connections are not as stable as I'd like them to be, I'm
> looking to put a transparent machine running Squid in front of the
> web server and have it do all the logging locally. This would reduce the
> NFS connections being made and leave less room for error. It would also
> save me 2 or 3 FDs per site on the servers, which also seems appealing.
>
> My question is: what sort of memory and CPU load would Squid place on
> this transparent server? I'm thinking of using a dual P3 800 with 1GB
> of RAM and a few SCSI HDs. Would this be enough to handle the logging
> of, say, 100Mbps of traffic? I'm NOT looking to do any caching at all.
> Any help is greatly appreciated.
>
> Thanks,
>
> Sagi Brody