Jon Kay wrote:
> > If someone has a cache in the range 50MB to 3GB it solves the problem.
>
> I'm missing something here. How does limiting max objsize to less than
> a tenth of a percent of cache size HELP ?!?!
Because this makes Squid bypass the cache for objects "close" to the
cache size.
> If I had a 3G cache, and my users are doing anything even slightly big...
> 4M is pretty much a tiny limit for cable/DSL users.
I am not arguing that 4 MB is not tiny. My only claim is that we still
need a limit until the real problem is addressed. Where such a limit
needs to be depends on
a) The size of the cache
b) The request load / likelihood of parallel large replies
c) How much a "huge" object should be allowed to impact the cache of
smaller objects
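
As an illustration only (the numbers are hypothetical, not a tuning
recommendation), such a limit is expressed in squid.conf with the
maximum_object_size directive, relative to the cache_dir size:

```
# Hypothetical squid.conf fragment for a 3 GB cache.
# 3000 MB of disk cache, standard 16/256 L1/L2 directory layout.
cache_dir ufs /var/spool/squid 3000 16 256

# Cap individual objects well below the total cache size, so a
# single huge reply cannot crowd out many smaller objects or
# overrun the remaining free disk space.
maximum_object_size 4096 KB
```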
> > If you feel happy with Squid swapping out 2GB objects to disk regardless of
> > how large their cache actually is and then crashing because the disk got full
> > then so be it, but I don't see that as an option.
>
> We have that problem already, if people run 1MB configuration test caches.
> That's probably at least as common as the actual object bigger than disk
> space.
Pathological case. Even in that case the 4 MB limit works fine, as there
is almost certainly more than 4 MB of actual free disk space for the
cache.
Regards
Henrik
Received on Wed Dec 19 2001 - 02:50:25 MST