tor 2007-06-21 klockan 10:54 +0100 skrev lists-squid@no-spam.co.uk:
> In concept, I'm aware of the difficulties of content filtering, but I've
> come to the conclusion that the main show stopper for this sort of setup
> is bandwidth. Each household configures their DSL router to proxy through
> this squid proxy, meaning that each household's bandwidth usage will add
> to the bandwidth usage of the proxy server.
Yes.
> One way around this would be to have a whitelist of domains (bbc.co.uk,
> wikipedia.org) for which squid would "forward" the http request straight
> to the destination servers, re-writing the tcp headers so that the
> response from the destination would go straight back to the client, thus
> saving a vast amount of bandwidth at the squid proxy level. In effect,
> the squid proxy would only come into play when the requested URL is not in
> the whitelist, saving precious processing power and bandwidth.
Not as easy as it sounds, unfortunately.
As the client is configured to talk to the proxy, the TCP endpoint is the
proxy, not the origin server. It's not possible to hand that connection
off so that the origin talks directly to the client, as the client
knows nothing about the origin server.
And even if you used transparent interception instead, you would only
know the destination after accepting the TCP connection and reading the
request, running into similar but different problems.
> I know it's possible (and perhaps written in stone in an RFC) to have the
> client maintain a proxy exclusion list, but that would be unmanageable in
> this sort of setup.
Is it? You use a centrally provided proxy.pac to control the browser.
You don't need a complete whitelist in the proxy.pac, just enough to
avoid wasting too much bandwidth.
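Such a proxy.pac could look like the sketch below. The domain list and the
proxy address are placeholders, and dnsDomainIs() is a built-in helper that
browsers provide to PAC scripts; it is re-implemented here only so the
snippet runs standalone:

```javascript
// Shim for the PAC built-in dnsDomainIs(); browsers supply this natively.
function dnsDomainIs(host, domain) {
    return host === domain || host.endsWith("." + domain);
}

// Whitelisted domains go straight to the origin server ("DIRECT"),
// bypassing the filtering proxy; everything else goes through it.
// Domains and the proxy address below are examples only.
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, "bbc.co.uk") ||
        dnsDomainIs(host, "wikipedia.org")) {
        return "DIRECT";
    }
    return "PROXY proxy.example.net:3128";
}
```

Only the hosts you have not listed ever touch the proxy, so the whitelist
trims bandwidth even when it is far from complete.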
Regards
Henrik
This archive was generated by hypermail pre-2.1.9 : Sun Jul 01 2007 - 12:00:04 MDT