As Torsten stated, I don't think you will get any single-process daemon to
handle 900+ req/s with any grace. Still, you can tune it quite a bit if
you know the access patterns.
If the working set is smaller than RAM, take Torsten's suggestion: use the
null cache_dir type, and make both maximum_object_size_in_memory and
cache_mem large (sketched below).
If the working set is slightly larger than RAM, add more RAM and follow
the above.
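
For either of those two cases, a minimal squid.conf sketch of the
memory-only setup might look like this. The directive names are the ones
mentioned above; the sizes are placeholders to tune for your box, and the
null store type has to be compiled in (e.g. --enable-storeio with "null"
in the list), so check the syntax against your Squid version:

  # Memory-only accelerator sketch -- sizes are illustrative, not tuned.
  cache_mem 700 MB                      # leave some headroom out of 1 GB of RAM
  maximum_object_size_in_memory 512 KB  # let larger objects stay in cache_mem
  cache_dir null /tmp                   # null store: nothing is swapped to disk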
If the working set is small but still much larger than RAM, I have a
picky-write patch that works very well on our accelerators. (Basically, an
object isn't saved to disk unless it has received at least 2 requests by
the time memory pressure dumps it.)
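
The patch itself isn't attached here, but the decision rule is simple. A
hypothetical, self-contained C sketch of the idea (not the actual Squid
code; the struct and function names below are made up for illustration):

  #include <stdio.h>

  /* Minimal stand-in for a cache object's bookkeeping. */
  struct store_entry {
      const char *url;
      int refcount;    /* how many times the object has been requested */
  };

  /* Called when memory pressure evicts an object from cache_mem:
   * only objects that have proven themselves hot (2+ requests) earn
   * a write to disk; one-hit wonders are simply dropped. */
  static int worth_swapping_out(const struct store_entry *e)
  {
      return e->refcount >= 2;
  }

  int main(void)
  {
      struct store_entry hot  = { "http://example.com/logo.gif", 7 };
      struct store_entry cold = { "http://example.com/rare.html", 1 };

      printf("%s -> %s\n", hot.url,
             worth_swapping_out(&hot) ? "swap to disk" : "drop");
      printf("%s -> %s\n", cold.url,
             worth_swapping_out(&cold) ? "swap to disk" : "drop");
      return 0;
  }

The win is that the disks only ever see objects that are actually
re-requested, so the random-write load drops sharply on a workload like
yours, where most requests hit a small hot set.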
-- Brian
On Monday 12 November 2001 10:59 am, you wrote:
> Hi guys,
>
> Squid still has a lot of users right now... the I/O is tremendous, very
> high.
>
> Even though it's a very select group of files that are accessed, this is
> very weird: the box has 1 GB of RAM and should be able to keep the entire
> set of accessed files in RAM. I have the AIO processes set to a maximum
> of 350 procs.
> It is now again hitting queue congestion - seemingly because the disk I/O
> is not keeping up.
>
> Why is Squid not keeping the objects in memory? :-(
>
> |---------+-----------------------|
> | Start | Mon, 12 Nov 2001 |
> | Time: | 15:55:29 GMT |
> |---------+-----------------------|
> | Current | Mon, 12 Nov 2001 |
> | Time: | 15:59:32 GMT |
> |---------+-----------------------|
>
> Connection information for squid:
> Number of clients accessing cache: 745
> Number of HTTP requests received: 224313
> Number of ICP messages received: 0
> Number of ICP messages sent: 0
> Number of queued ICP replies: 0
> Request failure ratio: 0.00%
> HTTP requests per minute: 55174.3
> ICP messages per minute: 0.0
> Select loop called: 33758 times, 7.226 ms avg
> Cache information for squid:
> Request Hit Ratios: 5min: 1.6%, 60min: 1.6%
> Byte Hit Ratios: 5min: 24.1%, 60min: 24.1%
> Request Memory Hit Ratios: 5min: 7.4%, 60min: 7.4%
> Request Disk Hit Ratios: 5min: 38.2%, 60min: 38.2%
> Storage Swap size: 934476 KB
> Storage Mem size: 3040 KB
> Mean Object Size: 21.88 KB
> Requests given to unlinkd: 0
> Median Service Times (seconds) 5 min 60 min:
> HTTP Requests (All): 0.04776 0.04776
> Cache Misses: 0.05046 0.05046
> Cache Hits: 0.02069 0.02069
> Near Hits: 0.07014 0.07014
> Not-Modified Replies: 0.02190 0.02190
> DNS Lookups: 0.31806 0.31806
> ICP Queries: 0.00000 0.00000
> Resource usage for squid:
> UP Time: 243.932 seconds
> CPU Time: 244.930 seconds
> CPU Usage: 100.41%
> CPU Usage, 5 minute avg: 100.40%
> CPU Usage, 60 minute avg: 100.40%
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 358
> Memory usage for squid via mallinfo():
> Total space in arena: 13224 KB
> Ordinary blocks: 13184 KB 1674 blks
> Small blocks: 0 KB 0 blks
> Holding blocks: 404 KB 2 blks
> Free Small blocks: 0 KB
> Free Ordinary blocks: 39 KB
> Total in use: 13588 KB 103%
> Total free: 39 KB 0%
> Memory accounted for:
> Total accounted: -1 KB
> memPoolAlloc calls: 29572746
> memPoolFree calls: 29478662
> File descriptor usage for squid:
> Maximum number of file descriptors: 1024
> Largest file desc currently in use: 412
> Number of file desc currently in use: 367
> Files queued for open: 1
> Available number of file descriptors: 656
> Reserved number of file descriptors: 100
> Store Disk files open: -70
> Internal Data Structures:
> 42807 StoreEntries
> 261 StoreEntries with MemObjects
> 160 Hot Object Cache Items
> 42709 on-disk objects
>
>
> [root]# vmstat 1 100
> procs                       memory     swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache   si   so    bi    bo    in   cs  us  sy  id
>  1  0  1   4344 241360  73008 581044    0    0     0     0 18347  218  35  23  41
>  1  0  1   4344 241368  73008 581036    0    0     0     0 18384  225  31  29  40
>  1  0  1   4344 241340  73008 581052    0    0     0   812 18344  255  30  28  41
>  1  0  1   4344 241256  73008 581052    0    0     0     0 18152  347  30  27  43
>  0  0  1   4344 241180  73008 581084    0    0    16     0 18222  245  38  20  43
>  1  0  0   4344 241044  73008 581084    0    0     0     0 18212  232  38  20  42
>  1  0  1   4344 241040  73008 581088    0    0     0     0 18245  368  30  27  43
>  1  0  1   4344 241040  73008 581088    0    0     0   588 18232  282  34  25  41
>  1  0  0   4344 241028  73008 581088    0    0     0     0 18134  398  37  24  40
>  1  0  1   4344 241020  73008 581096    0    0     0     0 18322  327  33  25  42
>  1  0  1   4344 240972  73008 581108    0    0     0     0 18465  306  38  20  42
>  2  0  1   4344 240972  73008 581108    0    0     0     0 18028  400  32  26  41
>  1  0  1   4344 240972  73008 581108    0    0     0   780 18207  342  34  24  42
>  2  0  1   4344 240968  73008 581112    0    0     0     0 18000  428  37  21  42
>  1  0  1   4344 240960  73008 581120    0    0     0     0 18059  335  40  20  40
>  1  0  1   4344 240908  73008 581132    0    0     0     0 18016  388  33  27  40
>  1  0  1   4344 240888  73008 581136    0    0     0     0 18250  407  36  22  42
>  5  0  0   4344 240852  73008 581136    0    0     0   740 18119  397  39  21  40
>  1  0  1   4344 240844  73008 581136    0    0     0     0 18253  199  40  17  43
>  1  0  0   4344 240840  73008 581136    0    0     0     0 18227  251  33  25  42
>  1  0  1   4344 240848  73008 581136    0    0     0     0 18177  239  31  29  40
>  1  0  1   4344 240848  73008 581136    0    0     0     0 18388  181  30  29  41
>  1  0  1   4344 240848  73008 581136    0    0     0   224 18418  245  35  28  38
>  0  0  1   4344 240844  73008 581140    0    0     0     0 18467  199  35  22  42
>  1  0  1   4344 240772  73008 581172    0    0    16     0 18367  320  38  21  41
>  1  0  1   4344 240704  73008 581188    0    0     0     0 18411  283  35  21  43
>  1  0  1   4344 240704  73008 581188    0    0     0     0 18438  247  32  27  41
>  1  0  1   4344 240700  73008 581188    0    0     0   592 18438  363  38  20  41
>  1  0  1   4344 240700  73008 581188    0    0     0     0 18139  422  37  21  42
>  1  0  1   4344 240696  73008 581188    0    0     0     0 18265  325  35  23  41
>
> -------------| This mail has been sent to you by: |------------
> Klavs Klavsen, IT-coordinator and Systems Administrator at
> Metropol Online - http://www.metropol.dk
> Tel. 33752700, Fax 33752720, Email ktk@metropol.dk
>
> Private email klavs@klavsen.net - http://www.vsen.dk
>
> --------------------[ I believe that... ]-----------------------
> It is a myth that people resist change. People resist what other
> people make them do, not what they themselves choose to do...
> That's why companies that innovate successfully year after year
> seek their people's ideas, let them initiate new projects and
> encourage more experiments. -- Rosabeth Moss Kanter