When Squid reaches several million objects per cache dir, it starts to consume
a lot of CPU, because every insertion and deletion of an object takes a long time.
On my Squid, an 80-100GB cache showed this CPU consumption effect.
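If the per-cache_dir object count really is the bottleneck, one workaround
sometimes suggested is to split the cache across several smaller cache_dir
entries instead of one big one. A rough illustration only (the paths and sizes
below are made-up examples, not a recommendation):

  # two 40GB aufs dirs instead of a single 80GB dir
  cache_dir aufs /cache1 40960 16 256
  cache_dir aufs /cache2 40960 16 256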
Itzcak
On Tue, Sep 30, 2008 at 11:01 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> Rafael Gomes wrote:
>>
>> On Tue, Sep 30, 2008 at 12:36 AM, Amos Jeffries <squid3_at_treenet.co.nz>
>> wrote:
>>>>
>>>> Is it true that there are problems with a cache_dir larger than 10GB?
>>>
>>> No. I have larger caches here. Some others have caches in the TB range.
>>>
>>> Only "cache_dir coss" specifically are known to have maximum size issues
>>> due to the format design. And not handle large files.
>>>
>>> There are some related issues known;
>>>
>>> You might need Squid built with --enable-large-files to get a 64-bit
>>> build if you intend to pass entire DVDs through Squid.
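>>> For example (an untested sketch; keep whatever other configure options your
>>> build already uses):
>>>
>>>   ./configure --enable-large-files [your other options]
>>>
>>> You can check an existing binary with "squid -v", which prints the configure
>>> options it was built with.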
>>
>> So, if this option is enabled in my binary, is it okay to handle large files?
>>
>>> Squid-2 is somewhat slow at handling very large individual files.
>>>
>>>
>>>> Many people talk about it, but I haven't found any information on the Squid
>>>> website. Maybe I wasn't looking in the right place!
>>>>
>>>> So if it is true, it will be a big problem, because with a big HD of more
>>>> than 100GB used entirely for the cache, we will have problems with read and
>>>> write speed on a single HD.
>>>
>>> AUFS on Linux, or DiskD on *BSD should have no problem with that size.
>>> Just make sure there is enough RAM in use for a mem-cache and the file
>>> indexes.
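>>> A minimal sketch of what that looks like (sizes here are examples only; a
>>> commonly quoted rule of thumb is roughly 10MB of index RAM per GB of
>>> cache_dir):
>>>
>>>   # Linux
>>>   cache_dir aufs /var/spool/squid 102400 16 256
>>>   # *BSD
>>>   cache_dir diskd /var/spool/squid 102400 16 256 Q1=64 Q2=72
>>>   cache_mem 256 MB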
>>
>> Why AUFS on Linux and DiskD on *BSD? What is the difference between those
>> operating systems?
>
> Something we still need to track down about the OS implementation and Squid's
> use of AsyncIO threads makes it run much faster on Linux than on BSD. The next
> best option speed-wise is DiskD, so that's still recommended for *BSD.
>
> Amos
> --
> Please use Squid 2.7.STABLE4 or 3.0.STABLE9
>
Received on Sun Oct 05 2008 - 14:38:16 MDT