Hello,
Thanks a lot for all the fast and informative replies.
As for the configuration upgrade, what about this?
- P4 2.0Ghz or whatever fastest they make today :-)
- a lot of memory (does going over 1Gb make sense for a ~30Gb cache?)
- more disks, like 2 x 2 disks, each pair on its SCSI controller
-> Any comments on the kind of SCSI controller? Is Adaptec OK, or can we get
better performance at a comparable cost?
- make a filesystem on each pair of disks on the same SCSI chain
(using plain concatenation, not RAID-0)
- let Squid balance over the two fs (Reiser with suitable options)
-> I haven't had any comments about the LAN adapters. Is there anything better
than Intel Etherexpress or 3C905 that you'd advise for a high-traffic
cache? (still on 100baseT). Last time I checked, the two boxes were
experiencing *really* high interrupt rates.
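On the "over 1Gb" question, a back-of-the-envelope estimate, scaling the one
data point quoted further down (~435MB total for a 56Gb cache) plus the 50 MB
cache_mem from my squid.conf — an extrapolation from a single observation, not
an official Squid sizing rule:

```shell
# Rough RAM estimate for a ~30Gb cache: ~435MB / 56Gb = ~7.8MB per Gb
# of disk cache, plus cache_mem (50 MB in the config below).
awk 'BEGIN { printf "%.0f MB\n", 30 * (435 / 56) + 50 }'   # -> 283 MB
```

If that scaling holds, 1Gb of RAM would leave plenty of headroom for a 30Gb
cache; it is the index growth over time that eats it.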
I still have a few comments and questions:
> - dual CPU probably won't get you anything as squid is single-threaded
How is it then that I see a bunch of Squid processes (or would they
really be threads) when I do a "ps auxww" on the Linux box?
# ps auxww|grep squid
(...)
squid 7743 38.3 62.9 883424 566564 ? R Sep25 15878:24 squid -NsY
squid 7744 0.0 0.0 1168 280 ? S Sep25 0:00 (unlinkd)
squid 7748 0.0 62.9 883424 566564 ? S Sep25 0:05 squid -NsY
squid 7749 0.0 62.9 883424 566564 ? S Sep25 17:49 squid -NsY
squid 7750 0.0 62.9 883424 566564 ? S Sep25 17:40 squid -NsY
squid 7752 0.0 62.9 883424 566564 ? S Sep25 17:41 squid -NsY
squid 7753 0.0 62.9 883424 566564 ? S Sep25 17:41 squid -NsY
squid 7754 0.0 62.9 883424 566564 ? S Sep25 17:41 squid -NsY
squid 7755 0.0 62.9 883424 566564 ? S Sep25 17:48 squid -NsY
(many more)
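As an aside on reading a listing like this: the identical SIZE/RSS on every
"squid -NsY" line suggests threads sharing one address space (LinuxThreads
gives each thread its own ps line). On a current kernel the per-process thread
count can be read from /proc directly — a sketch, using the shell's own PID;
substitute Squid's (e.g. 7743):

```shell
# Print the kernel thread count of a process from /proc/<pid>/status.
pid=$$                                        # e.g. pid=7743 for squid
awk '/^Threads:/ { print $2 }' "/proc/$pid/status"
```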
> - reiserfs is better
>
> Go have a look at swelltech.com. They list their configurations and relate
> them to the upstream bandwidth. (I have absolutely no ties to Swelltech).
>
Thanks again, will go see.
Aaron Roberts wrote:
``Basically, the memory usage goes up steadily as the cache grows. You can see
above, that a 56Gb cache requires a total of around 435MB RAM. I have also
found that different servers often show different characteristics - I always
setup a cronjob to send an email report of the disk usage, memory and CPU usage
once per day. Once the memory used by the Squid processes reaches about 90 -
95% of available RAM, you will see a massive drop in performance, and 100% CPU
usage! Once a server gets close to this point, I use the cache size 'high' and
'low water' marks in squid.conf to keep the cache size just right to keep
Squid's memory usage around 85%''
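A daily report script in that spirit might look like the sketch below (the
cache path, mail recipient, and cron schedule are all assumptions; it prints
to stdout, and from cron you would pipe it to mail(1), e.g.
"0 6 * * * /usr/local/bin/squid-report.sh | mail -s squid-report you@example.com"):

```shell
#!/bin/sh
# Daily disk/memory/CPU report for a Squid box.
echo "== Disk usage =="
df -k
echo "== Memory =="
free 2>/dev/null || vmstat          # free on Linux, vmstat elsewhere
echo "== Squid processes =="
ps auxww | grep '[s]quid' || echo "(no squid process found)"
```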
On one of the boxes with 1Gb of memory, I see (top output):
11:49am up 114 days, 16:35, 2 users, load average: 6.97, 8.03, 7.88
76 processes: 72 sleeping, 4 running, 0 zombie, 0 stopped
CPU states: 34.5% user, 62.0% system, 0.0% nice, 3.4% idle
Mem: 900660K av, 896256K used, 4404K free, 0K shrd, 64584K buff
Swap: 1028152K av, 394016K used, 634136K free 186876K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
7743 squid 18 0 772M 555M 154M R 160 56.8 63.1 15881m squid
Does this qualify as the situation described above? 772M of VM used on a box
that has 1G of physical memory doesn't look like "memory starving" to me.
Actually, I'm not sure I understand why the RSS gets trimmed to 555M (nothing
else is running on this machine).
Configuration:
cache_mem 50 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 16384 KB
cache_replacement_policy heap GDSF
cache_dir aufs /home/squid/cache 30720 256 256
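For reference, the two-filesystem setup asked about above would translate into
two cache_dir lines, one per SCSI pair — a sketch only, with illustrative
paths and the same 30Gb split across two aufs directories:

```
# One cache_dir per filesystem (one filesystem per SCSI pair);
# Squid spreads new objects across the configured directories.
cache_dir aufs /home/squid/cache1 15360 128 256
cache_dir aufs /home/squid/cache2 15360 128 256
```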
Greets,
_Alain_
Received on Wed Oct 23 2002 - 23:01:57 MDT