On Tue, Apr 17, 2001, Andres Kroonmaa wrote:
> that have very many chunks to search. I have no idea where the
> threshold could be between adding overhead and becoming more efficient.
> But for sure, with a pool of fewer than some 4-8 chunks, searching via
> trees or hashes would only add overhead. And that is most of the time.
>
> So, to be most efficient, we'd need to decide, based on the number of
> chunks, whether to use trees or a plain linked-list search.
>
> There are only a very few pools that would benefit from trees - mainly
> the storeEntry, MD5, and LRU pools. And those hold generally very long-lived
> objects, being freed quite rarely, so the overhead of a plain search is
> quite small on average.
Have you tried it yet?
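
Something like this is what I'd picture, anyway (very rough C sketch only,
not the actual MemPool code - mem_chunk, chunk_has_free() and
tree_find_free_chunk() are made-up names, and the threshold of 8 is just
the guess from above, it'd need profiling):

#include <stddef.h>

#define CHUNK_SEARCH_THRESHOLD 8    /* guess; needs profiling */

typedef struct mem_chunk {
    struct mem_chunk *next;
    int free_count;                 /* free objects left in this chunk */
    /* ... object storage ... */
} mem_chunk;

typedef struct mem_pool {
    mem_chunk *chunks;              /* plain linked list of chunks */
    int chunk_count;
    void *chunk_tree;               /* hypothetical tree index, built lazily */
} mem_pool;

static int
chunk_has_free(const mem_chunk *c)
{
    return c->free_count > 0;
}

/* hypothetical tree lookup, e.g. keyed on free space per chunk */
extern mem_chunk *tree_find_free_chunk(void *tree);

static mem_chunk *
pool_find_free_chunk(mem_pool *p)
{
    if (p->chunk_count <= CHUNK_SEARCH_THRESHOLD) {
        /* small pool: the plain linked-list walk is cheapest */
        mem_chunk *c;
        for (c = p->chunks; c != NULL; c = c->next)
            if (chunk_has_free(c))
                return c;
        return NULL;
    }
    /* big pool (storeEntry, MD5, LRU): use the tree index instead */
    return tree_find_free_chunk(p->chunk_tree);
}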
> > I would suggest using a tree with about 5 branches per internal node.
>
> In Squid we only have splay trees? Did you mean that? I don't feel like
> writing any tree code myself.
>
> > Or you could use garbage collection to hide the overhead from normal
> > operations. Simply put the free nodes in a bucket, and only look in
> > detail if you need to actually reclaim memory.
>
> Basically a cache of frees. I was thinking along that path. Only we'll
> lose the per-chunk stats.
Perhaps.
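Just to have something concrete to argue about, here's a really rough
sketch of what that "cache of frees" could look like - freed objects go
into a small bucket and the expensive per-chunk accounting only happens
when the bucket fills or we actually need to reclaim memory. All the
names here are invented for illustration, this isn't the branch code:

#include <stddef.h>

#define FREE_CACHE_SIZE 64

struct mem_pool;                    /* whatever the pool type ends up being */

typedef struct free_cache {
    void *items[FREE_CACHE_SIZE];
    int count;
} free_cache;

/* slow path: find the owning chunk, update its stats, maybe release it */
extern void pool_return_to_chunk(struct mem_pool *p, void *obj);

static void
pool_free_obj(struct mem_pool *p, free_cache *fc, void *obj)
{
    if (fc->count < FREE_CACHE_SIZE) {
        fc->items[fc->count++] = obj;   /* cheap: no chunk search at all */
        return;
    }
    /* bucket full: flush it, doing the per-chunk accounting in one batch */
    while (fc->count > 0)
        pool_return_to_chunk(p, fc->items[--fc->count]);
    pool_return_to_chunk(p, obj);
}

static void *
pool_alloc_obj(struct mem_pool *p, free_cache *fc)
{
    if (fc->count > 0)
        return fc->items[--fc->count];  /* reuse a cached free first */
    /* otherwise fall back to the normal chunk allocation path */
    return NULL;                        /* placeholder */
}

The obvious cost is exactly what you said - while objects sit in the
bucket, the per-chunk stats are stale.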
I dunno - there are going to be a _lot_ of papers covering this
exact issue.
I'll go and profile/bugcheck the chunked mempool branch to see if
it's stable enough to hit 2.5. I'll then go hit the net for some
research papers covering memory allocators, and see how this
could be made faster. We're _going_ to need a fast memory allocator,
and we're going to have to cut down on the number of allocations
made in the request path to bring down the CPU requirements.
Adrian
--
Adrian Chadd                    "Two hundred and thirty-three thousand
<adrian@creative.net.au>         times the speed of light. Dear holy fucking shit."