----------------------------------------
> From: gigoz_at_msn.com
> To: squid3_at_treenet.co.nz
> Subject: RE: [squid-users] Peering squid multiple instances.
> Date: Wed, 24 Mar 2010 07:12:15 +0000
>
>
> Dear Amos,
>
> Thank you for your response and the better design tips. However, I am not able to comprehend them fully yet (due to my current lack of experience and knowledge), so I request that you elaborate a bit more. Your guidance would be really valuable.
>
> Question 1:
>
> You said that under my configuration this is the case:
>
> Client -> squidinstance1 -> squidinstance2 -> (web servers)
>
> or
>
> client -> squidinstance2 -> webserver
>
> Well, I am failing to understand how clients can talk to squidinstance2 directly when:
>
> 1. squidinstance2 is configured with an acl to accept traffic from localhost only.
> 2. The Squid clients (browsers) are configured to use port 8080 of the first instance, and that is the only traffic accepted through iptables as well.
>
> According to my understanding, isn't this the case:
>
> client ->squidinstance1 -> webserver
> client ->squidinstance1 -> squidinstance2 -> webserver
>
> Please guide me in this respect.
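>
> (For reference, the localhost-only restriction on squidinstance2 that I mean is roughly the following -- a sketch from memory, not the exact lines from my config:)
>
> acl localhost src 127.0.0.1/32
> http_access allow localhost
> http_access deny all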
>
>
> Question 2:
>
> I have created multiple instances to run on the same machine because my server has three hard drives. The OS is on physical RAID1, and the cache directory is on the third hard drive (comprising 80% of the total space). This was done because I wanted to survive a cache directory failure: even if all the drives holding cache directories fail, my clients will still be able to browse the internet through the proxy-only instance, until the disk system holding the OS fails. I am not sure whether this approach is correct, but it is what I have learnt over the last few days from the available FAQs and, of course, the guidance on the squid mailing list. Please guide me on this.
>
>
> Question 3:
>
>
> What is meant by "parent is the peering method for origin web servers"? You also wrote that, by reason of the "parent" selection, it does not matter which protocol is used. Please guide me.
>
>
>
> Question 4:
>
> I interpret you to mean that the two instances running on the same machine should be configured identically, with a sibling-type relationship using the digest protocol between them. That means I should run two instances pointing to different cache directories on my third hard drive and, instead of one big 50 GB cache, give each, say, 25 GB of space. (Doesn't holding two cache directories on the same hard drive degrade performance? Or is this only sensible when I have multiple drives to hold the caches?) Both would be permitted to cache data from origin servers; however, on a cache miss each would first check its sibling before going to the origin server. Am I correct in understanding you?
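>
> (To make that concrete, roughly what I have in mind is the following -- just a sketch; the split paths and the 25 GB figure are my own assumption, not tested config:)
>
> # instance A's cache on the third drive
> cache_dir aufs /cache01/var/spool/squid3-a 25000 48 768
> # instance B's cache, same drive, separate directory
> cache_dir aufs /cache01/var/spool/squid3-b 25000 48 768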
>
>
> You further described a setup for failover, which I am sorry to say I failed to understand at this point due to my current skill level. However, I am eager to learn and determined to work hard, and your detailed response will be really valuable to me (I only started a couple of weeks back). Is the following setup for failover of a whole Squid proxy server, or for failover of Squid processes?
>
>> * a cache_peer "parent" type to the web server. With "originserver"
>> and "default" selection enabled.
>>
>> This topology utilizes a single layer of multiple proxies, possibly with
>> hardware load balancing in iptables etc. sending alternate requests to
>> each of the two proxies' listening ports.
>> Useful for small-medium businesses requiring scale with minimal
>> hardware. Probably their own existing load balancers already purchased
>> from earlier attempts. IIRC the benchmark for this is somewhere around
>> 600-700 req/sec.
>>
>> The next step up in performance and HA is to have an additional layer of
>> Squid acting as the load-balancer doing CARP to reduce cache duplication
>> and remove sibling data transfers. This form of scaling out is how
>> WikiMedia serve their sites up.
>> It is documented somewhat in the wiki as ExtremeCarpFrontend. With a
>> benchmark so far for a single box reaching 990 req/sec.
>>
>> These maximum speed benchmarks are only achievable by reverse-proxy
>> people. Regular ISP setups can expect their maximum to be somewhere
>> below 1/2 or 1/3 of that rate due to the content diversity and RTT lag
>> of remote servers. (Well, that part I understood.)
>
> Question 5:
>
> Can you please suggest some good reading to build my knowledge and concepts? I have got hold of Squid: The Definitive Guide, which is very good, but isn't it a bit outdated? Can you recommend something, especially on the topic of authenticating Active Directory users with a Squid proxy?
>
>
>
>
>
>
>
>
> ----------------------------------------
>> Date: Wed, 24 Mar 2010 18:06:46 +1300
>> From: squid3_at_treenet.co.nz
>> To: squid-users_at_squid-cache.org
>> Subject: Re: [squid-users] Peering squid multiple instances.
>>
>> GIGO . wrote:
>>> I have successfully set up multiple instances of Squid for the sake of surviving a cache directory failure. However, I still have a few confusions regarding peering multiple instances of Squid. Please guide me in this respect.
>>>
>>>
>>> In my setup, my understanding is that the second instance is doing the caching on behalf of requests sent to Instance 1. Am I correct?
>>>
>>
>> You are right in your understanding of what you have configured. I've
>> some suggestions below on a better topology though.
>>
>>>
>>>
>>> What protocol should I select for peers in this scenario? What is the recommendation? (CARP, digest, or ICP/HTCP)
>>>
>>
>> Under your current config there is no selection, ALL requests go through
>> both peers.
>>
>> Client -> Squid1 -> Squid2 -> WebServer
>>
>> or
>>
>> Client -> Squid2 -> WebServer
>>
>> thus Squid2 and WebServer are both bottleneck points.
>>
>>>
>>>
>>> Is the syntax of my cache_peer directive correct, or should the local loopback address not be used this way?
>>>
>>
>> Syntax is correct.
>> Use of localhost does not matter. It's a useful choice for providing
>> some security and extra speed to the inter-proxy traffic.
>>
>>
>>>
>>> What is the recommended protocol for peering Squids with each other?
>>>
>>
>> Does not matter to your existing config. By reason of the "parent"
>> selection.
>>
>>>
>>>
>>> What is the recommended protocol for peering Squid with an ISA Server?
>>>
>>
>> "parent" is the peering method for origin web servers. With
>> "originserver" selection method.
>>
>>>
>>> Instance 1:
>>>
>>> visible_hostname vSquidlhr
>>> unique_hostname vSquidMain
>>> pid_filename /var/run/squid3main.pid
>>> http_port 8080
>>> icp_port 0
>>> snmp_port 3161
>>> access_log /var/logs/access.log
>>> cache_log /var/logs/cache.log
>>>
>>> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only no-delay
>>> prefer_direct off
>>> cache_dir aufs /var/spool/squid3 100 256 16
>>> coredump_dir /var/spool/squid3
>>> cache deny all
>>>
>>>
>>>
>>> Instance 2:
>>>
>>> visible_hostname SquidProxylhr
>>> unique_hostname squidcacheprocess
>>> pid_filename /var/run/squid3cache.pid
>>> http_port 3128
>>> icp_port 0
>>> snmp_port 7172
>>> access_log /var/logs/access2.log
>>> cache_log /var/logs/cache2.log
>>>
>>>
>>> coredump_dir /cache01/var/spool/squid3
>>> cache_dir aufs /cache01/var/spool/squid3 50000 48 768
>>> cache_swap_low 75
>>> cache_mem 1000 MB
>>> range_offset_limit -1
>>> maximum_object_size 4096 MB
>>> minimum_object_size 12 bytes
>>> quick_abort_min -1
>>>
>>
>> What I suggest for failover is two proxies configured identically (a rough sketch follows this list):
>>
>> * a cache_peer "sibling" type between them. Using digest selection. To
>> localhost (different ports)
>> * permitting both to cache data from the origin (optionally from the
>> peer).
>> * a cache_peer "parent" type to the web server. With "originserver"
>> and "default" selection enabled.
>>
>>
>> This topology utilizes a single layer of multiple proxies, possibly with
>> hardware load balancing in iptables etc. sending alternate requests to
>> each of the two proxies' listening ports.
>> Useful for small-medium businesses requiring scale with minimal
>> hardware. Probably their own existing load balancers already purchased
>> from earlier attempts. IIRC the benchmark for this is somewhere around
>> 600-700 req/sec.
>>
>>
>> The next step up in performance and HA is to have an additional layer of
>> Squid acting as the load-balancer doing CARP to reduce cache duplication
>> and remove sibling data transfers. This form of scaling out is how
>> WikiMedia serve their sites up.
>> It is documented somewhat in the wiki as ExtremeCarpFrontend. With a
>> benchmark so far for a single box reaching 990 req/sec.
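>>
>> For reference, the frontend's peer lines in that kind of CARP layout look
>> roughly like this (backend names and ports are placeholders, and this
>> assumes a Squid build with CARP peering, e.g. 2.7):
>>
>> cache_peer backend1.example.com parent 3128 0 carp no-query no-digest
>> cache_peer backend2.example.com parent 3128 0 carp no-query no-digest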
>>
>>
>> These maximum speed benchmarks are only achievable by reverse-proxy
>> people. Regular ISP setups can expect their maximum to be somewhere
>> below 1/2 or 1/3 of that rate due to the content diversity and RTT lag
>> of remote servers.
>>
>> Amos
>> --
>> Please be using
>> Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>> Current Beta Squid 3.1.0.18