Duane Wessels wrote:
> I didn't look at the code yet, but my first reaction is that
> this belongs as a replacement policy. An extra scanning event
> may be expensive. Wouldn't it be better to have a single policy
> that favors large files over small ones, and combines the size
> with other parameters such as age, refcount?
Well, yes and no. The problem as I see it is that unless you move the
big objects around in the store_list, you have to skip them anyway on
each expiration pass. My gut feeling tells me that skipping all those
entries each second would be prohibitively expensive (for a 100GB disk
cache with 600000 objects, you would be able to store 100000 1MB objects
and 500000 small ones, if I calculate this right, and you would probably
wind up scanning all 100000 every time).
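To make the cost concrete, here is roughly the kind of scan I mean, as
a simplified C sketch (not the actual store_list code; the struct
layout, the size cutoff and the helper names are all made up):

    #include <stddef.h>

    /* Rough sketch only -- not Squid's store_list code; the struct,
     * the threshold and the list layout are made-up stand-ins. */
    #define BIG_OBJECT_SIZE (256 * 1024)   /* assumed cutoff for "big" */

    struct store_entry {
        size_t size;
        int refcount;
        struct store_entry *next;          /* LRU order, oldest first */
    };

    /* One expiration pass: reclaim small objects from the old end of
     * the list until 'target' bytes are freed.  Every big object near
     * the old end gets stepped over again on every single pass --
     * that is the cost I am worried about. */
    static size_t
    expire_pass(struct store_entry *lru, size_t target)
    {
        size_t freed = 0;
        struct store_entry *e, *next;

        for (e = lru; e != NULL && freed < target; e = next) {
            next = e->next;
            if (e->size >= BIG_OBJECT_SIZE)
                continue;                  /* skipped, but still touched */
            if (e->refcount > 0)
                continue;                  /* still in use, leave it */
            freed += e->size;
            /* unlink e from the list and destroy it here */
        }
        return freed;
    }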
A solution would be to create a separate store_list for big objects, to
be populated by the expire run (because the size may not be known in
advance). My gut feeling is that this would complicate matters even
further.
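Sketched out, the idea would look something like this, with the expire
run doing the migration once the final size is known (again made-up
names, reusing the made-up struct from the sketch above):

    /* Sketch of the separate-list idea; not real Squid structures.
     * The expire run moves an entry to the big list when it notices
     * the entry is big, since the size may not be known when the
     * object is first stored. */
    struct store_entry *store_list;     /* small objects, scanned often */
    struct store_entry *big_store_list; /* big objects, scanned rarely  */

    static void
    migrate_if_big(struct store_entry *e)
    {
        if (e->size < BIG_OBJECT_SIZE)
            return;
        /* unlink e from store_list here (needs the prev pointer) ... */
        /* ... then push it onto the big list: */
        e->next = big_store_list;
        big_store_list = e;
    }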
The code I sent doesn't have any slop, so the scan is probably too
expensive at the moment (it is obviously easy to set the cleanup target
somewhat higher). The code also doesn't have provision for the situation
where *all* the objects are large ones, which is clearly undesirable as
well.
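The slop I have in mind is ordinary high/low water mark hysteresis,
roughly like this (the percentages are made-up examples, not what the
posted code does; size_t as above):

    /* Sketch of adding slop: only start the big-object scan when
     * usage crosses a high-water mark, then clean down to a lower
     * target, so the extra scan runs much less often. */
    #define BIG_HIGH_WATER_PCT 95
    #define BIG_LOW_WATER_PCT  90

    static int
    big_scan_needed(size_t big_bytes, size_t capacity)
    {
        return big_bytes > capacity / 100 * BIG_HIGH_WATER_PCT;
    }

    static size_t
    big_scan_target(size_t big_bytes, size_t capacity)
    {
        size_t low = capacity / 100 * BIG_LOW_WATER_PCT;
        return big_bytes > low ? big_bytes - low : 0;
    }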
Cheers,
-- Bert
-- Bert Driehuis, MIS -- bert_driehuis@nl.compuware.com -- +31-20-3116119
Every nonzero finite dimensional inner product space has an orthonormal
basis. It makes sense, when you don't think about it.