Hi all,
since we already have it, what about taking a softer stance? For
instance, subject it to a hard-coded maximum of a few seconds or
minutes, or log a strong warning to cache.log when a value above a
certain time limit is found during parsing.
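To make the idea concrete, here is a minimal sketch of that softer stance: clamp the configured value to a hard-coded ceiling at parse time and emit a warning when it is exceeded. The names (MAX_NEGATIVE_TTL, parseNegativeTtl) and the 5-minute ceiling are illustrative assumptions, not actual Squid identifiers; in Squid proper the warning would go to cache.log via debugs() rather than stderr.

```cpp
#include <cstdio>

// Hypothetical ceiling for negative_ttl -- illustrative value, not Squid's.
static const int MAX_NEGATIVE_TTL = 300; // 5 minutes, in seconds

// Hypothetical parse-time handler: accept the configured value, but clamp
// it to the ceiling and warn loudly when the admin asked for more.
int parseNegativeTtl(int configuredSeconds)
{
    if (configuredSeconds > MAX_NEGATIVE_TTL) {
        // In Squid this warning would be written to cache.log.
        std::fprintf(stderr,
                     "WARNING: negative_ttl %d exceeds hard limit of %d"
                     " seconds; clamping to the limit\n",
                     configuredSeconds, MAX_NEGATIVE_TTL);
        return MAX_NEGATIVE_TTL;
    }
    return configuredSeconds; // within limits: use as configured
}
```

The alternative (warn but honor the configured value) is the same sketch with the `return MAX_NEGATIVE_TTL;` line replaced by falling through to return `configuredSeconds`.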
On Fri, Jan 11, 2013 at 6:53 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> The negative_ttl directive is continuously causing problems. What it does is
> DoS all clients of a proxy when one of them has a URL problem. With modern
> websites, which can present error responses targeted at an individual
> client, this can be a major problem.
>
> I propose dropping the directive entirely and following HTTP RFC guidelines
> about cacheability of 4xx-5xx responses.
>
> The one case where I can think of it actually being useful is preventing a
> DDoS against a reverse-proxy. However, since a DDoS usually varies the URL
> anyway, this is extremely weak protection.
>
> Can anyone present any actually useful reason to keep it despite the
> problems it presents?
>
>
> Amos
>
-- /kinkie
Received on Fri Jan 11 2013 - 07:23:31 MST
This archive was generated by hypermail 2.2.0 : Fri Jan 11 2013 - 12:00:05 MST