On Thu, Nov 08, 2007, Dave Raven wrote:
> Hi Adrian,
> I've got diskd configured to be used for objects over 500k - the
> datacomm run is all 13K objects, so essentially it's doing nothing.
> Interestingly, though, I see the same behaviour if I use ufs only, or just diskd.
Ok.
> I am using kqueue - I will try to get you stats on what that shows. If I
> push it too far (1800 RPS) I can see Squid visibly failing - error messages,
> too much drive load, etc. But at 1200 RPS it runs fine for > 10 minutes - I'd
> really like to get this solved as I think there is potential for a lot of
> performance.
>
> I've just run a test now at 300 RPS and it failed after 80 minutes -- very
> weird...
Well, first rule out the disk subsystem. Configure a null cache_dir and, say,
128 MB of memory cache. Run Squid and see if it falls over.
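Something along these lines in squid.conf is what I mean (this is a rough sketch; it assumes your Squid was built with the null store type, e.g. --enable-storeio includes "null", and the directory argument is required by the syntax but never written to):

    # no disk store at all - everything is served from memory
    cache_dir null /tmp
    # memory cache size; adjust to taste
    cache_mem 128 MB

If it stays up at the same request rate with that configuration, the problem is in the disk path rather than the network/event side.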
There are plenty of reasons the disk subsystem may be slow, especially if the
hardware chipsets are commodity parts. But Squid won't get you more than
about 80-120 requests/sec out of commodity hard disks, perhaps even less if you
start trying to use modern, very large disks.
Adrian
--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -