Hi
> With larger caches is there any advantage to adjusting the number of 1st and
> 2nd level directories configured with the cache_dir directive i.e. increased
> speed.
There is. I don't have specific numbers on when it becomes faster; as far
as I am aware there is also a tradeoff eventually.
In short:
If you have a large number of files in a directory, the function that reads
the directory to locate your file will be slow (it's a linear search).
The whole purpose of the multiple directories is to make sure that each
directory holds only a small number of files. How many files per directory
is acceptable depends on your OS and filesystem.
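For reference, the L1/L2 values are the last two arguments to cache_dir in
squid.conf (the exact syntax can vary a little between Squid versions, and
the path and size below are only placeholders; 16/256 is the stock layout):

    # cache_dir <directory> <Mbytes> <L1 dirs> <L2 dirs per L1 dir>
    cache_dir /usr/local/squid/cache 10000 16 256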
If you have too many directories, Squid could spend all of its time
reading directory listings (with 1000 L1 directories each containing 10000
subdirectories, you hit the same limit with respect to reading the listings
linearly).
The more directories you have, the more memory your OS has to spend caching
the directory inodes, too.
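As a rough back-of-the-envelope illustration (the cache size and mean
object size here are only assumed figures):

    10 GB cache / ~13 KB per object  ->  roughly 800,000 objects
    16 L1 dirs x 256 L2 dirs         =   4,096 leaf directories
    800,000 objects / 4,096 dirs     ->  roughly 200 files per directory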
So:
change the values a little, but don't go overboard. It's more likely that
one of the following will be more useful:
1) Mount the filesystem with the 'noatime' option (that's Linux, but most
        OSes have something similar). This stops the OS from keeping a
        record of when each file was last accessed (which otherwise causes
        one disk update per request). See the example after this list.
2) Add more RAM. You are almost certain to find that the cache is actually
        hitting disk for directory lookups while the CPU sits idle. If that
        is the case, changing the number of directory levels is not going
        to be useful.
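Regarding option 1, a minimal sketch on Linux (the device, mount point and
filesystem type here are just assumptions; adjust them to your own setup):

    # /etc/fstab entry for the cache partition, with noatime
    /dev/sdb1   /usr/local/squid/cache   ext2   rw,noatime   0 2

    # or switch it on for an already-mounted filesystem
    mount -o remount,noatime /usr/local/squid/cache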
Oskar
--- "Haven't slept at all. I don't see why people insist on sleeping. You feel so much better if you don't. And how can anyone want to lose a minute - a single minute of being alive?" -- Think TwiceReceived on Mon Jan 18 1999 - 15:48:47 MST
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:44:04 MST