Mark Treacy <mark@aone.com.au> writes:
> means that the entire cache is scanned for expired objects once a
> day (~20 hours). Because the cache is constantly being cleaned this enables
> the administrator to select appropriate ttl's based on the arrival rate
> and available space such that an lru gc is rarely, if ever, run.
The LRU algorithm will probably be replaced in 1.1 with a more effective
one that can continuously purge objects with negligible (i.e. very small)
performance loss.
The slow scan will probably be removed when IMS (If-Modified-Since) is
used to refresh expired objects (this requires that the LRU replacement
is redesigned).
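To make the idea of continuous purging concrete, here is an illustrative sketch (not Squid's actual code, and all names are hypothetical): objects are kept in LRU order, and each maintenance pass evicts only a small, bounded number of tail entries, so the store is trimmed continuously instead of by a periodic full scan.

```python
from collections import OrderedDict

class LRUStore:
    """Hypothetical sketch of continuous LRU purging: each maintenance
    pass does a tiny bounded amount of work, hence the "very small"
    performance loss, and no full-cache scan is ever needed."""

    def __init__(self, high_water, purge_batch=8):
        self.high_water = high_water    # object count where purging starts
        self.purge_batch = purge_batch  # max objects evicted per pass
        self.objects = OrderedDict()    # key -> size, oldest (LRU) first

    def touch(self, key, size=1):
        # A hit (or a new store) moves the object to the MRU end.
        self.objects[key] = size
        self.objects.move_to_end(key)

    def maintain(self):
        # Called frequently from the main event loop.
        evicted = []
        while (len(self.objects) > self.high_water
               and len(evicted) < self.purge_batch):
            key, _ = self.objects.popitem(last=False)  # drop the LRU tail
            evicted.append(key)
        return evicted
```

Calling maintain() once per event-loop iteration spreads the eviction work evenly over time, which is the property the text attributes to the planned replacement.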
> Our ~9G cache holds 500,000 objects. With only 100 directories, this
> means that each directory holds 5,000 objects. After a week or 2 the
> directories develop holes and we had directories 100k in size. This
> makes pathname lookups rather expensive, particularly if the activity
> hits many such sized directories (flushing various kernel caches).
The idea is that the larger the cache, the more cache_dir entries
you should use (each cache_dir contains 100 directories).
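As a rough illustration of why more cache_dir entries keep directories small, the mapping below is a hypothetical sketch (not Squid's actual on-disk layout): swap file numbers are spread round-robin across cache_dirs and then across 100 first-level directories, so doubling the number of cache_dirs halves the entries per directory.

```python
def swap_path(cache_dirs, num_l1, file_number):
    """Hypothetical mapping of a swap file number to a path. With more
    cache_dirs (each holding num_l1 first-level directories), each
    directory holds fewer entries, keeping pathname lookups cheap."""
    d = file_number % len(cache_dirs)                # which cache_dir
    l1 = (file_number // len(cache_dirs)) % num_l1   # which subdirectory
    return "%s/%02d/%08X" % (cache_dirs[d], l1, file_number)
```

With 500,000 objects, one cache_dir of 100 directories gives ~5,000 entries per directory as in the quoted report; five cache_dirs would cut that to ~1,000.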
> Background Restore takes too long if there are no events occurring
> (50 objects read once a second), added a flag to cause squid to block
> until the restore is complete.
Not quite true. It restores 50 objects, then handles any pending
network data, restores 50 more objects and so on (no delay). The
more concurrent activity, the slower it reloads.
Slow rebuild (after a crash) is a bit trickier, since it requires
a stat() on each restored file and thereby generates a lot of disk I/O.
Because of this, the default "speed" for slow rebuilds is 5 objects.
This should be a configurable parameter, not hardcoded as it
currently is. It should be chosen based on the speed of the cache
server (and its disks) to give an acceptable performance loss during
cache rebuild.
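The rebuild loop described above can be sketched as follows. This is an illustrative sketch, not Squid's code: the per-pass object count is a parameter rather than hardcoded (5 for slow rebuilds, 50 for fast reloads in the text), and the function, parameter, and callback names are assumptions.

```python
import os

def rebuild_cache(files, handle_pending_events, objects_per_pass=5,
                  slow=True):
    """Hypothetical sketch of an interleaved cache rebuild: restore a
    batch of objects, yield to pending network activity, repeat. The
    more concurrent traffic, the slower the rebuild proceeds."""
    restored = []
    for i, path in enumerate(files):
        if slow:
            # Slow rebuild (after a crash): verify each file on disk,
            # which is where the extra disk I/O comes from.
            try:
                os.stat(path)
            except OSError:
                continue  # stale index entry, skip it
        restored.append(path)
        if (i + 1) % objects_per_pass == 0:
            handle_pending_events()  # service client traffic between batches
    return restored
```

Making objects_per_pass configurable lets an administrator trade rebuild time against performance loss on a given server and disk subsystem, as the text suggests.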
--- Henrik Nordström

Received on Thu Jun 13 1996 - 12:31:50 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:32:30 MST