On 24-Aug-07 My Secret NSA Wiretap Overheard Nicole Saying :
>
> On 24-Aug-07 My Secret NSA Wiretap Overheard Henrik Nordstrom Saying :
>> On fre, 2007-08-24 at 14:20 -0700, Nicole wrote:
>>> > L2 = 256
>>>
>>> So, this should always be the same size?
>>
>> Yes, there is not much reason to change L2.
>>
>>> > L1 = at least cache_dir size * 2 / 256 / 256 / 13KB, or ca cache_dir in
>>> > GB * 2. (13 KB is the estimated average object size)
>>>
>>> ca?
>>
>> yes? (circa) I rounded it a bit.. it's not exact math. As long as it
>> ends up in about those numbers.. L1 * L2 * L2 should be significantly
>> more than the number of objects you have in the cache, and L2 should not
>> be too big or too small.
>>
>>
>>> I guess I am missing something?
>>> 90000 * 2 / 256 / 256 = 2.746582 / 13000 = .0002112 ??
>>
>> You are missing a unit.. 90000 in the above should be 90000MB
>>
>> L1 = 90000MB * 2 / 256 / 256 / 13KB =
>> 90000 * 1024 * 2 / 256 / 256 / 13 = 216
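>>
>> (Sanity check on that result: 216 * 256 * 256 is roughly 14.2 million
>> directory slots, while 90000MB * 1024 / 13KB is roughly 7.1 million
>> expected objects, i.e. about the 2:1 margin that the factor 2 in the
>> formula provides.)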
>>
>>> Could you provide an example or 2?
>>
>> simplified formula:
>>
>> L2 = 256
>> L1 = cache_dir size / 500, rounded upwards on small numbers..
>>
>> If L2 is changed or you have a significantly different object size
>> distribution then use the equation above. This simplified formula is
>> only valid for L2 = 256 and average object size of about 13KB.
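>>
>> For example, a 90000MB cache_dir gives L1 = 90000 / 500 = 180, and a
>> 24000MB one gives L1 = 24000 / 500 = 48.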
>>
>> Regards
>> Henrik
>
> Wow, excellent, thank you.
>
> However, I would have thought the directory sizing would have slanted
> smaller.
>
> With this:
># cache_dir aufs Directory-Name Mbytes L1 L2 [options]
> cache_dir aufs /cache0 24000 32 128
> cache_dir aufs /cache1 90000 64 256
> cache_dir aufs /cache2 90000 64 256
> cache_dir aufs /cache3 90000 64 256
>
> Each at about 80% of full (73 of 90G full)
>
>
> Holding:
> Internal Data Structures:
> 12450858 StoreEntries
> 116215 StoreEntries with MemObjects
> 116214 Hot Object Cache Items
> 12,449,836 on-disk objects
> Mean Object Size: 12.43 KB
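>
> (Applying the L1 * L2 * L2 rule of thumb from above as a sanity check:
> (32 + 3 * 64) * 256 * 256 = 14,680,064 total slots, against the
> 12,449,836 on-disk objects reported here.)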
>
>
> I only have: (same on all dirs)
> ls -l /cache2/02/00 | wc -l = 257 files per dir
>
>
> So, should the formula perhaps add a division by the number of cache_dirs?
> Or does it apply as written only when there is a single cache_dir?
>
> Or does squid just really prefer more dirs over more objects per dir?
> On FreeBSD, with things like directory hashing and such, I am curious how
> much, or who, benefits from the larger tree.
> I would have thought it would like more per dir rather than less, to keep
> the dir table lookups smaller.
>
> Thanks for helping me understand more!
>
>
> Nicole
Ok, even weirder:
Every server I have that uses aufs has the same count of objects in the
dirs, regardless of the stated disk usage level! However, servers that have
the same basic setup have many more objects per dir, and those seem to vary.

It seems like L2 is determining the maximum number of objects per dir? Is
that right? That would make calculating the L1 (and/or L2) numbers that much
more important with aufs, to allow the dirs to actually fill to their
maximum size.
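
A quick way to see how the objects actually spread, assuming the layout
above (this counts files per L1/L2 dir pair, fullest dirs last):

 find /cache2 -type f | awk -F/ '{print $3 "/" $4}' | sort | uniq -c | sort -n | tail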
Thanks
Nicole