On Fri, 2007-08-31 at 14:37 -0400, alexus wrote:
> so, if i understand correctly i can do something like this...
>
> url_rewrite_access rw
> url_rewrite_program /usr/local/squid/bin/_myrewriteprogram.sh
> acl rw dstdomain www.ebay.com
>
> like this?
almost...
acl rw dstdomain www.ebay.com
url_rewrite_access allow rw
url_rewrite_program /usr/local/squid/bin/_myrewriteprogram.sh
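For reference, a minimal rewrite helper could look something like
this. It is only a sketch: the ebay test and the rewritten target URL
are made-up placeholders, and the exact input format should be checked
against the url_rewrite_program notes in squid.conf for the version
you run:

#!/bin/sh
# Squid writes one request per line on stdin (the URL first, then
# client ip/fqdn, ident, method, ...) and reads the reply on stdout.
# A blank reply line leaves the URL unchanged.
while read url rest; do
    case "$url" in
        http://www.ebay.com/*)
            # placeholder rewrite target, not a real recommendation
            echo "http://www.example.com/handle-ebay"
            ;;
        *)
            echo ""
            ;;
    esac
done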
> let's say according to my acl "rw" i sent a url, for example
> www.ebay.com, to url_rewrite_program and it rewrote it, fed it back
> to squid, and squid fetched that url. after doing whatever it needs
> to do it sends the user back to www.ebay.com so he can resume his
> activity on ebay, but wouldn't squid go into a loop again? i
> mentioned urlgroups last time; i'd want my rewrite program to change
> the urlgroup as well, so that squid wouldn't loop and i can change
> the urlgroup on the way out of url_rewrite_program.
The url_rewrite program rewrites the URL while Squid processes the
request. Squid sends each request to the url rewriter only once, and
the result from the rewriter is used as-is, so there is no loop.
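If I remember the 2.6 helper protocol correctly, the rewriter can set
the urlgroup in the same reply by prefixing the rewritten URL,
something like:

!mygroup!http://www.ebay.com/

Treat the exact prefix syntax as an assumption on my part ("mygroup"
is a placeholder) and verify it against the url_rewrite_program
section of the squid.conf documentation for your version.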
The urlgroup thing can be used for two purposes:
a) It splits the cache, allowing you to keep two or more different
versions of the same URL in the cache.
b) It can be used in further acl based processing, such as
cache_peer_access, never_direct/always_direct etc.
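As a sketch of b), assuming the urlgroup acl type from Squid 2.6,
with "mygroup", the peer address and the peer name all being
placeholders:

acl grouped urlgroup mygroup
cache_peer 192.0.2.1 parent 80 0 no-query originserver name=mypeer
cache_peer_access mypeer allow grouped
cache_peer_access mypeer deny all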
Note: The main reason why urlgroup splits the cache is to allow it to be
used in cache_peer_access in reverse proxies without risking cache
pollution.
Regards
Henrik