On 10/02/2013 04:02 AM, Amos Jeffries wrote:
> On 2/10/2013 10:02 p.m., Jérôme Loyet wrote:
>> I'm facing a particular situation. I have to set up a squid cluster on
>> 10 servers. Each server has a lot of RAM (192GB).
>>
>> Is it possible and effective to set up squid to use only memory for
>> caching (about 170GB)?
> memory-only caching is the default installation configuration for
> Squid-3.2 and later.
>> What directive should be tweaked ? (cache_mem,
>> cache_replacement_policy, maximum_object_size_in_memory, ...). The
>> cache will store objects from several KB (pictures) up to 10MB (binary
>> chunks of data).
Please note that the whole Squid process slows down while serving a large
object from the memory cache, due to old but only recently discovered
inefficiencies. The
exact definitions of "slow" and "large" depend on the environment, but
significant problems have been observed for memory-cached 10MB responses
in some cases. YMMV. You may want to test this aspect of your Squid
deployment.
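
As for which directives to tweak: assuming a stock Squid 3.2+ build, a
minimal memory-only setup could look roughly like the sketch below. The
sizes and policy are illustrative assumptions based on the numbers in
your message, not tested recommendations.

    # No cache_dir line: Squid 3.2+ then caches in memory only.
    cache_mem 170 GB
    # Raise the per-object memory cache limit to cover the 10MB chunks.
    maximum_object_size_in_memory 10 MB
    # Optional: pick a memory replacement policy suited to the workload.
    memory_replacement_policy heap LFUDA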
> With a memory cache holding under-32KB objects, SMP workers would be best.
> They share a single memory cache, but it is size limited to 32KB memory
> pages due to Squid's internal storage design. I'm not sure yet whether the
> work underway to extend Rock storage past this same limit is going to
> help the shared memory cache as well (hope so, but don't know).
Yes, Large Rock (in the Collapsed Forwarding branch) has lifted both the
shared disk and shared memory cache limits for object sizes.
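
If you do try SMP workers with a shared memory cache, the relevant
directives would be along these lines (again just a sketch; the worker
count is an assumption to adjust to your CPUs):

    # One kid process per CPU core, all sharing one memory cache.
    workers 8
    cache_mem 170 GB
    # Shared memory caching is the default with multiple workers,
    # shown here only for clarity.
    memory_cache_shared on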
HTH,
Alex.