If you're using WCCP to balance requests across multiple caches, then
no cache member will /ever/ have an object that another cache wants.
To copy from my original message:
> Cache peering is counter-productive in your environment. WCCP will
> divide traffic in such a way that data sharing is not required--it
> splits traffic based on destination IP of the client requests so every
> cache will contain unique data.
The key in that paragraph (which I guess I should have put before the
inflammatory statement that cache peering is counter-productive, since
it blinded you to the rest of the paragraph ;-) is that the traffic is
divided based on the destination IP of the client request--so there
will never be a request for the same data from more than one cache.
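To make that concrete, here's a toy sketch of the idea in Python (this
is *not* the actual WCCP hash function--WCCPv1 hashes the destination
address into 256 buckets and hands each bucket to one cache--but the
effect is the same):

   # Toy illustration of destination-IP bucketing (not the real
   # WCCP hash): the router maps each destination address to a
   # bucket, and each bucket belongs to exactly one cache.
   import socket
   import struct

   NUM_CACHES = 3

   def bucket_for(dst_ip):
       # Fold the 32-bit destination address into 256 buckets.
       addr, = struct.unpack("!I", socket.inet_aton(dst_ip))
       return addr % 256

   def cache_for(dst_ip):
       # Buckets are statically assigned to caches, so a given
       # destination IP always lands on the same cache.
       return bucket_for(dst_ip) % NUM_CACHES

   # The same destination always maps to the same cache, so no two
   # caches ever store (or get asked for) the same objects.
   print(cache_for("192.0.2.10"))  # always the same cache index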
What I've just said isn't always true--and I'm sure someone will pipe up
with why it's not if I don't point it out now. The 'nevers' in the
above statements do not hold if you take one cache out of the group,
add a new cache into the group, or change the IP ordering of the web
caches in the cluster. In any of those cases (which are isolated events
and shouldn't happen on a regular basis) there will be some overlap of
cache data among the caches--and requests for objects that are already
in one cache may reach another cache. In that case cache peering might
be useful for about two days--and then it would become
counter-productive again.
Cache peering, then, is a complete and utter waste of cycles and local
network bandwidth here. Every peer request will always come back as a
MISS. Always. No exceptions (except the silly exceptions mentioned in
the previous paragraph). So every miss on a local cache will have to
wait for ICP misses to come back from its peers before going out to
the origin server. We don't want that; it is counter-productive.
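For reference, sibling peering is what squid.conf lines like these set
up (the hostnames here are hypothetical); in your WCCP setup every ICP
query they generate would come back a MISS:

   # squid.conf -- sibling peering, which I'm advising against here.
   # Each local miss fires an ICP query (port 3130) at every sibling
   # and waits for the replies before fetching from the origin.
   cache_peer cache2.example.com sibling 3128 3130 proxy-only
   cache_peer cache3.example.com sibling 3128 3130 proxy-only

Leaving those lines out lets each cache go straight to the origin
server on a miss.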
Make sense now?
francisv@dagupan.com wrote:
> Joe,
>
> I don't understand why cache peering is counter-productive here. If a cache
> member has the object, why not share it among other cache members?
>
> -----Original Message-----
> From: Joe Cooper [mailto:joe@swelltech.com]
> Sent: Thursday, December 06, 2001 10:18 PM
> To: SysAdmin
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] SQUID and WCCP V1 in an ISP setup
>
> SysAdmin wrote:
>
>>I am working in an ISP environment where there are four links to the
>>internet world. They are not all of the same capacity. I am running BGP
>>to send traffic through the various links. Now, for the past couple of
>>months I have been struggling to set up a transparent proxy in my
>>network. Somehow I keep ending up with troubles and customer pressure.
>>I have nearly 4.5 megabits to serve, and at any point in time the
>>number of customer PCs online is around 700 to 800.
>>
>>I have a CacheRaQ 4 with a 20GB HDD and 512MB RAM, a DS10 with 512MB
>>RAM and an 18GB HDD (on Digital Unix), and an Intel P-III PC with
>>512MB RAM and an 8GB HDD. All three are running Squid, and I would
>>like to arrange optimal sharing among the cache servers.
>>
>
> WCCP will evenly divide traffic amongst your servers automatically,
> though you really don't need three servers to provide 4.5 Mbits of
> throughput. One (assuming you give one of your boxes a second HD) will
> do it just fine, or two for redundancy.
>
>>Can anyone suggest how I should go about this? Is the port redirection
>>mechanism necessary if the cache listens on port 80? Does a sibling
>>relation among the cache servers increase performance? How do I tackle
>>the latency issue...
>>
>
> Your Squid can listen on any port--but you need to redirect port 80
> traffic to the cache port (whether Squid is listening on port 80, on
> its default of 3128, or on anything else). Use ipchains or iptables
> for this task.
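>
> For example, a minimal sketch (the interface name and port are
> assumptions--adjust them to your setup; Squid is assumed to listen
> on its default 3128):
>
>    # Linux 2.4 (iptables): redirect intercepted port 80 traffic
>    # to Squid's port on this box. With WCCP the packets arrive on
>    # a GRE tunnel interface, so use that interface name here.
>    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
>        -j REDIRECT --to-port 3128
>
>    # Linux 2.2 (ipchains) equivalent:
>    ipchains -A input -p tcp -d 0.0.0.0/0 80 -j REDIRECT 3128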
>
> Cache peering is counter-productive in your environment. WCCP will
> divide traffic in such a way that data sharing is not required--it
> splits traffic based on destination IP of the client requests so every
> cache will contain unique data.
>
--
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support