> -----Original Message-----
> From: Didde Brockman [mailto:didde_brockman@mac.com]
> Sent: Thursday, June 09, 2005 4:42 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Development of SQL-based ACL?
>
>
> Hello users,
>
> I'm investigating the possibilities of using Squid in a large
> corporate environment. We have found the documentation to be rather
> complete, and after a period of evaluation two questions have come
> up. Now I'm hoping you guys can help me out.
>
> 1. Are there any references available on _large_ projects
> incorporating Squid? "Large" being defined as roughly 100,000 -
> 200,000 clients 24/7. The numbers available on the site are outdated
> and seem to be focused on smaller solutions. I'm looking for some
> example hardware setups or basic numbers pertaining to the load of
> systems experiencing _heavy_ traffic.
>
This question pops up on occasion, but I have never seen it adequately
answered, and sadly I'm not in a position to give a fully informed
answer myself. I will, however, share my setup and what information I
have.

I work for an ISP that provides internet access to rural schools, using
(primarily) satellite. I have somewhere in the neighborhood of 170
sites, each of which has its own Squid caching server (to improve
perceived performance). Due to CIPA
(http://www.dpi.state.wi.us/dpi/dlcl/pld/cipafaq.html#Background), these
sites' internet access must be filtered, which we accomplish with a
central proxy pool (currently three servers) and a proprietary
filtering solution.

Originally I had the pool set up as distinct servers using ICP peering,
but some websites didn't gracefully handle HTTP "sessions" originating
from multiple IP addresses, so I set two of the boxes up to parent
through the third (a multi-CPU box which runs two Squid instances to
handle the extra load). This architecture will be replaced this summer
with an LVS cluster (www.ultramonkey.org), which Squid is reported to
handle gracefully and which should, in theory, scale without limit.
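
For the curious, the parenting arrangement on the two secondary boxes
amounts to only a couple of squid.conf lines. This is a simplified
sketch with placeholder host names, not my production config:

    # Forward all cache misses to the central parent box instead of
    # going direct (host name is a placeholder):
    cache_peer parent.example.net parent 3128 3130 default
    never_direct allow all

    # The original sibling/ICP arrangement looked roughly like:
    # cache_peer peer1.example.net sibling 3128 3130 proxy-only

The never_direct line is what forces every miss through the single
parent, which in turn keeps a client's whole session on one outgoing IP
address.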

There have been previous reports on this list of a single Squid
instance saturating a 45 Mbit link with 4500 users
(http://www.squid-cache.org/mail-archive/squid-users/200504/0876.html),
and, separately, of 240 HTTP requests per second (2000 kBytes/s) from
3200 unique IPs
(http://www.squid-cache.org/mail-archive/squid-users/200501/0374.html).
My setup has seen peaks of 150 HTTP requests/sec and 2500 kBytes/s.
Average response times for cache misses, measured from the proxy pool
to the internet over a terrestrial link, hover around 100-150 ms; for
reference, most of the internet is at least 50 ms away from us due to
geography. Cache hits average 10 ms.
> 2. We would like to administer ACLs through rules stored in an
> external database (SQL). These rules would pair an IP address with
> specific settings for a specific client, i.e. 10.0.0.122 is allowed
> to do this, but not that. In addition, if client 10.0.0.122 requests
> a "forbidden" HTTP resource, its request should be redirected to a
> local page. Basically, these settings should be customisable for all
> users, so that they can acknowledge a forbidden resource, absorb the
> information on the local page (the target of the redirect), and then
> choose whether or not to continue on to their (forbidden)
> destination.
>
> For the second point, I'm assuming this would require us to contract
> a developer to make the necessary alterations to Squid's source
> code, so I'm also curious about your experiences (if any) with known
> companies offering Squid customisation.
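
Actually, you may not need source changes for most of this. Squid 2.5
and later can delegate access decisions to an external helper via the
external_acl_type interface, and deny_info can redirect a denied
request to a local page instead of showing an error. A rough sketch of
the squid.conf wiring (all host names and paths below are made up for
illustration):

    # Ask an external helper about each client IP (%SRC); cache the
    # answer for 60 seconds so the database isn't hit on every request.
    external_acl_type sql_ip ttl=60 %SRC /usr/local/bin/sql_acl.py
    acl allowed_by_db external sql_ip

    # Send denied clients a redirect to a local information page
    # rather than the stock error page.
    deny_info http://intranet.example.net/blocked.html allowed_by_db
    http_access deny !allowed_by_db

The helper itself just reads one lookup per line on stdin and answers
OK or ERR. A minimal sketch in Python, where the database path, table,
and column names are assumptions, and sqlite3 merely stands in for
whatever SQL client library you actually use:

    #!/usr/bin/env python
    # External ACL helper sketch: receives the client IP (the %SRC
    # token) on stdin, one per line, looks it up in a SQL table, and
    # answers "OK" (allow) or "ERR" (deny).
    import sys
    import sqlite3  # stand-in for MySQLdb, psycopg2, etc.

    db = sqlite3.connect("/etc/squid/acl.db")

    while True:
        # Read line-by-line; iterating sys.stdin directly can buffer
        # input and stall Squid's requests.
        line = sys.stdin.readline()
        if not line:
            break
        ip = line.strip()
        row = db.execute(
            "SELECT allowed FROM acl_rules WHERE ip = ?", (ip,)
        ).fetchone()
        # Default-deny: clients without a row in the table are blocked.
        if row is not None and row[0]:
            sys.stdout.write("OK\n")
        else:
            sys.stdout.write("ERR\n")
        sys.stdout.flush()  # Squid blocks waiting for each answer

The click-through part (letting a user read the block page and then
continue on to the forbidden site anyway) is harder, since the helper
would have to track per-user acknowledgements; that is where a
contracted developer, or one of the existing redirector-based filters,
may still be needed.
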
>
> I would really appreciate any input on this, as it sure would help
> out a lot. Sorry about the typos and poor language; my English is a
> bit rusty.
>
>
>
> Kind Regards,
> Didde
Chris