On Mon, 2003-09-29 at 08:18, Joshua Brindle wrote:
> >On Mon, 2003-09-29 at 07:29, Joshua Brindle wrote:
> >> He's right, it will work, but the loopback trigger will happen and be logged.
> >> What I found easier was using a simple proxy for the outer proxy so that
> >> you don't have the caching overhead, and using squid internally with
> >> ACLs.
> >
> >Which suffers because the dansguardian policy no longer applies per
> >request. Thus my suggestion for no caching on the inside, caching on the
> >outside. And (as is in the FAQ) two squids will run -just fine-, be
> >simpler to debug, and have more useful logs.
> >
>
> I'm not sure I follow. They should still apply per request, since the header
> will still be there and the external ACL (which checks for an existing login
> in a database for the IP trying to visit the naughty site) should still get
> run and see the X-Naughty header (right? or am I off here?)
You need to partition the responses in this case - Chris has multiple DG
setups for different users. So you'll need quite a complex environment
to keep the sites separate per user. (Not impossible, just quite
complex.)
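
If I follow, the inner squid is doing something like the fragment below. This
is only a sketch of squid's external_acl_type mechanism as I understand your
setup - the helper name (check_login.pl), the header name and the database
lookup are stand-ins for whatever you actually run:

    # squid.conf on the inner squid (sketch)
    # check_login.pl is a hypothetical helper that looks the client IP
    # (and the X-Naughty value) up in the login database and answers
    # OK or ERR for each request line it is handed.
    external_acl_type naughty_check ttl=60 %SRC %{X-Naughty} /usr/local/bin/check_login.pl
    acl known_user external naughty_check
    http_access allow known_user
    http_access deny all

Note that external ACL results are cached per key for ttl seconds, so "per
request" here really means "per key, per ttl".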
> My thought here, and it might be wrong, is that if the page is cached on the
> inner squid then dansguardian doesn't have to waste CPU time re-analyzing
> the same content, and the header I add should be preserved in the cached
> page (right?). If it's cached on the outer one, dansguardian will have to re-
> analyze it (this isn't exactly right, since dansguardian maintains an in-memory
> URL cache, but squid's cache should outlive dansguardian's).
The header will be cached, yes, in the reply.
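
A minimal sketch of the no-caching-inside / caching-outside arrangement I
mean, assuming the two squids are chained as child and parent and that
outer.example.com stands in for whatever the outer box is actually called
(untested, squid 2.5-ish syntax):

    # inner squid: no caching, everything handed to the outer squid
    acl all src 0.0.0.0/0.0.0.0
    no_cache deny all        # 'cache deny all' on later squid versions
    cache_peer outer.example.com parent 3128 0 no-query default
    never_direct allow all

    # outer squid: an ordinary caching proxy
    http_port 3128
    cache_dir ufs /var/spool/squid 1024 16 256

You can check that the added header survives the outer cache by fetching the
same URL twice through the chain and looking for it next to an X-Cache: HIT
from the outer squid.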
Rob
--
GPG key available at: <http://members.aardvark.net.au/lifeless/keys.txt>.