On Thu, 26 Nov 1998, Bruce Campbell wrote:
> worst case is peer retrieves digest, we rebuild digest immediately after
> (repeat ad nauseam) ... so the peer is eternally nearly one hour out of
> sync with our cache-digest.
Don't forget that if that happens, the peer will immediately ask for a new
digest and synchronization will be re-established. Peers do not ask for
digests after an hour of holding an old copy; they ask when that old digest
expires.
> (should rephrase that - if the peer 'misses' fetching a cache-digest due
> to the link between us and the peer being disconnected, then tries to
> fetch something based on data contained in an old cache-digest when
> connectivity returns.. etc)
Peers do not use stale digests. If a peer cannot fetch a digest, it disables
all digest lookups until a valid fresh digest is fetched.
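The enable/disable behavior described above can be sketched roughly as
follows. This is a minimal illustration in C, not Squid's actual code; the
struct and function names (PeerDigest, digest_usable, digest_fetch_done) are
hypothetical:

```c
#include <stdbool.h>
#include <time.h>

/* Hypothetical per-peer digest state; names are illustrative,
 * not Squid's real data structures. */
typedef struct {
    bool valid;     /* last fetch produced a usable digest */
    time_t expires; /* expiry time of the current digest copy */
} PeerDigest;

/* Digest lookups are allowed only while a valid, unexpired digest
 * is on hand; there is no "use the stale copy" fallback. */
static bool digest_usable(const PeerDigest *pd, time_t now)
{
    return pd->valid && now < pd->expires;
}

/* A fetch attempt either installs a fresh digest or marks the
 * peer's digest invalid until some later fetch succeeds. */
static void digest_fetch_done(PeerDigest *pd, bool ok,
                              time_t now, time_t ttl)
{
    pd->valid = ok;
    pd->expires = ok ? now + ttl : 0;
}
```

The point is that a failed fetch flips the peer into the "no digest lookups"
state rather than leaving an expired copy in use.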
> Ah, that bit of squid's behaviour I wasn't aware of. Will the peer also
> disable using cache-digests until it has the latest and greatest copy
> after it detects disconnection/reconnection of a (cache-digest) peer?
A peer disables a digest if and only if it failed to fetch a valid one.
There are other (digest-unrelated) conditions that may prevent Squid from
using a peer, though.
> Huh? Wouldn't it cause fewer false hits, as it's reducing the number of
> probable objects which may be false hits due to us tossing them out during
> the lifetime of the next cache-digest (which the peer may not get in time)?
Yes, I misread your suggestion, sorry. Certainly, being more conservative
will decrease the number of false hits.
Alex.
Received on Thu Nov 26 1998 - 00:27:54 MST