TheHub peers with a handful of other local ISPs (Brisbane, Australia).
Recently I found that my ACLs actually permitted other proxies to collect
misses from us, so I tightened up miss_access, only to watch a lot of
their customers get denied by us. Not so good. Opening miss_access back
up didn't help (possibly stray characters in the original config?).
After upgrading my proxy to 2.0p1, miss_access works as intended, except
that one remote proxy was hitting it a lot, which I eventually traced to
false cache_digest hits (i.e. the URL wasn't in our cache at all). Adding
'no-digest' to that peer fixed it, so everything is back to normal.
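For reference, the fix amounts to a couple of lines in squid.conf (the
peer name and addresses below are made up, not our real config):

```
# Deny cache misses to peer proxies; only our own clients may force fetches
acl peerproxies src 203.0.113.0/24     # hypothetical peer address range
miss_access deny peerproxies
miss_access allow all

# Stop advertising a cache digest to the peer that was sending false hits
cache_peer peer-a.example.net sibling 3128 3130 no-digest
```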
Is it possible to be nice and, rather than returning an error page on a
false cache hit, return the actual page?
i.e. with:
client -> Proxy A -(cache_digest_hit)-> Proxy B -> Internet
a) Proxy B does not want to spend money, so it has miss_access deny
Proxy A. Would a 'limited' miss_access be possible? (i.e. if it's a miss
*and* that URL could have been in the last cache_digest, fetch the URL
anyway.)
*or*
b) Proxy A notices that it got a 403 back *and* that the last line of the
returned page matches 'Generated.* by remote_proxy_name (Squid.*)', and
tries to fetch the URL again from another peer, or directly.
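Option (b) could be approximated on Proxy A's side with something like
the sketch below. The peer names, the fetch callback, and the exact
footer wording are all assumptions on my part (Squid error pages do end
with a 'Generated ... by host (Squid/version)' line, but the text varies
between versions):

```python
import re

# Assumed footer line of a Squid-generated error page.
GENERATED_BY = re.compile(r"Generated .* by (?P<peer>\S+) \(Squid.*\)")

def is_false_hit(status, body, peer):
    """Heuristic: a 403 whose last line says it was generated by the
    peer we forwarded to is probably a denied (false) digest hit."""
    if status != 403:
        return False
    last_line = body.strip().splitlines()[-1]
    m = GENERATED_BY.search(last_line)
    return m is not None and m.group("peer") == peer

def fetch_with_fallback(url, peers, fetch):
    """Try each peer in turn; on a suspected false hit, move on.
    `fetch(url, peer)` -> (status, body) is supplied by the caller;
    peer=None means go direct."""
    for peer in peers:
        status, body = fetch(url, peer)
        if not is_false_hit(status, body, peer):
            return status, body
    # Every peer denied us: fetch directly.
    return fetch(url, None)
```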
Easily seen problems: with (a), Proxy A can cause a DoS on Proxy B,
since Proxy B must look up each URL in the previous/current cache_digest
while under high load (CPU usage goes through the roof).
With (b), error messages can be changed (though the regex above would
usually still work), and Proxy A needs to keep track of which of its
peers it has and hasn't tried for that URL. Proxy A also needs to lower
Proxy B's priority slightly if Proxy B keeps generating 403s.
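The priority bookkeeping for (b) could be as simple as a decaying
penalty counter per peer. A sketch (not anything Squid actually
implements):

```python
from collections import defaultdict

class PeerRanker:
    """Order peers by how often they have recently returned 403s.
    Penalties decay over time so a peer is not punished forever."""

    def __init__(self, peers, decay=0.9):
        self.peers = list(peers)
        self.penalty = defaultdict(float)
        self.decay = decay

    def record_403(self, peer):
        self.penalty[peer] += 1.0

    def tick(self):
        # Call periodically to decay all penalties.
        for peer in self.penalty:
            self.penalty[peer] *= self.decay

    def ranked(self):
        # Least-penalised peers first; try them in this order.
        return sorted(self.peers, key=lambda p: self.penalty[p])
```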
Am I making sense here? ;)
--==--
Bruce.
Sysadmin, TheHub.
Received on Tue Nov 03 1998 - 22:43:36 MST