In article <199610160553.WAA02807@nlanr.net> you write:
>I'm testing it right now with a list of 30,000 URLs. I don't get the
>same results--most of my requests are getting through.
>
>But the Squid process is taking 95% of the CPU.
>
>Seems like even a list of 4,000 is too many to put in the main Squid
>process. Would probably make more sense to put them in the redirector
>process, no? Then you might even do some fancy hashing, etc. to speed
>it up.
The simple speedup patch I made only works for ACLs where you have to
check that an entry *is* present in the list (e.g., whether an IP address
has access to the cache). It exploits the fact that most requests from one
IP address come in bursts: the list is kept in most-recently-used order,
so each matched entry is moved to the front of the linked list.
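Roughly, the move-to-front trick looks like this (a minimal sketch, not
the actual patch; the struct and function names are made up, not Squid's):

#include <string.h>

/* Hypothetical ACL entry; Squid's real structures differ. */
struct acl_entry {
    char *value;
    struct acl_entry *next;
};

/* Scan the list; on a hit, move the matched entry to the front so
 * the next request from the same bursty source finds it in O(1). */
int
acl_match_mtf(struct acl_entry **head, const char *value)
{
    struct acl_entry *e, *prev = NULL;

    for (e = *head; e != NULL; prev = e, e = e->next) {
        if (strcmp(e->value, value) == 0) {
            if (prev != NULL) {        /* not already at the front */
                prev->next = e->next;  /* unlink */
                e->next = *head;       /* relink at head */
                *head = e;
            }
            return 1;                  /* hit */
        }
    }
    return 0;                          /* miss: scanned the whole list */
}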
When you have ACLs where you have to check that an entry is *not* present
(e.g., blocking certain sites on the Internet), my speedup patch does not
help at all, because a miss still has to traverse the whole linked list :-(.
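As for the "fancy hashing" suggestion above: a hash table would fix
exactly this case, because a miss then only scans one short bucket chain
instead of the whole list. A rough sketch (nothing Squid-specific, and it
assumes exact-match entries; domain or substring matching makes hashing
trickier):

#include <string.h>

#define ACL_HASH_SIZE 8192  /* power of two; tune to the list size */

struct acl_bucket_entry {
    char *value;
    struct acl_bucket_entry *next;
};

static struct acl_bucket_entry *acl_hash[ACL_HASH_SIZE];

/* Simple multiplicative string hash. */
static unsigned int
acl_hash_key(const char *s)
{
    unsigned int h = 5381;
    while (*s)
        h = h * 33 + (unsigned char) *s++;
    return h & (ACL_HASH_SIZE - 1);
}

/* Hits *and* misses only touch one bucket chain. */
int
acl_hash_match(const char *value)
{
    struct acl_bucket_entry *e = acl_hash[acl_hash_key(value)];
    for (; e != NULL; e = e->next)
        if (strcmp(e->value, value) == 0)
            return 1;
    return 0;
}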
I've been playing with balanced binary trees recently, and I'll see if I
can add them to Squid's ACL code in the near future (unless somebody has
already started implementing something like this, of course).
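The win with a tree is that a miss costs O(log n) comparisons instead of
touching all n entries. The lookup itself is trivial; the real work is
keeping the tree balanced on insert (AVL rotations or red-black
recolouring), which the sketch below leaves out. Again, the names are
made up, not Squid's:

#include <string.h>

/* Hypothetical tree node; a real version would also carry balance
 * information (AVL height or red-black colour). */
struct acl_node {
    char *value;
    struct acl_node *left, *right;
};

/* O(log n) lookup in a balanced tree: a miss falls off the tree
 * after about log2(n) comparisons instead of scanning all n. */
int
acl_tree_match(const struct acl_node *root, const char *value)
{
    while (root != NULL) {
        int cmp = strcmp(value, root->value);
        if (cmp == 0)
            return 1;   /* hit */
        root = cmp < 0 ? root->left : root->right;
    }
    return 0;           /* miss */
}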
Arjan