A few things I like to capture:
- workloads
- all memory hits
- all misses (i.e., cache deny all; a config sketch follows this list)
- response sizes (0-64k bodies)
- behaviour under overload
- behaviour with 0-25,000 concurrent idle connections
- latency added to hits / misses (as a histogram or std dev; NOT an average)
- failed requests
- overhead added by:
- acls (various forms)
- redirectors
- logging (various forms)
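
For the all-miss and ACL-overhead runs, something like the following is
the kind of starting point I have in mind. It's only a minimal sketch of
a squid.conf; the acl name and address range are placeholders, and it
assumes the usual predefined "all" acl:

  # force every request to be a miss; nothing gets stored
  cache deny all

  # baseline access list; repeat acl/http_access lines in later runs
  # to measure the cost of longer ACL chains
  acl localnet src 192.168.0.0/16
  http_access allow localnet
  http_access deny all

The memory-hit runs can use the same config with the "cache deny all"
line removed, fetching the same small objects repeatedly so they stay
in cache_mem.
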
On 28/04/2008, at 7:17 PM, Amos Jeffries wrote:
> I've been giving the benchmarking a little more thought over the last
> few days.
>
> The proposal from way back was to build a library of benchmarking
> results, IIRC. We have some responses from people, but I have not seen
> anything like organisation or presentation of it.
>
> We need to publish some basic benchmarking configurations. Adrian has
> spoken before of his specific configs to stress certain parts of
> squid. Things like that are what we can give people and say "do this
> and let us know the results".
>
> When we have those benchmarking instructions ready, we are going to
> need some feedback on the results. What details are needed to make
> useful graphs/tables etc. of the results?
> - which benchmarking config was used
> - squid version
> - OS + Kernel
> - CPU speed
> - RAM
> - HDD speed + type?
> - NIC speed
> anything else?
> and what metrics should things be measured in? req per sec per CPU-MHz
> by OS?
>
>
> Amos
>
--
Mark Nottingham
mnot@yahoo-inc.com