On 30 Oct 2000, at 23:34, Henrik Nordstrom <hno@hem.passagen.se> wrote:
> Andres Kroonmaa wrote:
>
> > > Well, there are still race conditions where the memobject can
> > > temporarily grow huge, so don't bet on it.
> >
> > what race conditions should I keep in mind?
>
> The first one that pops up in my mind is when there are two clients to
> one object which is as of yet marked as cachable, and one of the
> clients has stalled or is a lot slower than the other. There have been
> a couple of other bug-related ones in earlier Squid versions, and there
> are quite likely more to come..
Ok, so I have to implement release of free chunks. I can make it
self-tuning to some extent by keeping track of the last-referenced time:
if a chunk is unreferenced for, say, 2-10 secs, we release it back to the
system, but always keep at least one or a few chunks around.
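Something like this minimal sketch, assuming a hypothetical chunk layout
(MemChunk/MemPool and the field names here are illustrative, not the real
mempool structures):

  #include <stdlib.h>
  #include <time.h>

  #define CHUNK_IDLE_TIMEOUT 5      /* seconds; tunable in the 2-10s range */

  typedef struct _MemChunk {
      struct _MemChunk *next;
      int inuse_count;              /* items currently allocated from chunk */
      time_t lastref;               /* last time an item was taken/returned */
      /* item storage would follow the header in a real chunk */
  } MemChunk;

  typedef struct {
      MemChunk *chunks;
      int chunk_count;
  } MemPool;

  /* Walk the chunk list periodically; give fully-free, long-idle chunks
   * back to the system, but always keep at least one chunk. */
  static void
  memPoolCleanIdle(MemPool *pool, time_t now)
  {
      MemChunk **link = &pool->chunks;
      while (*link) {
          MemChunk *chunk = *link;
          if (chunk->inuse_count == 0 &&
                  now - chunk->lastref > CHUNK_IDLE_TIMEOUT &&
                  pool->chunk_count > 1) {
              *link = chunk->next;
              pool->chunk_count--;
              free(chunk);
          } else {
              link = &chunk->next;
          }
      }
  }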
> > I've put together a version of mempools with chunked allocations.
> > With 2.5M objects and a chunk size of 8K, I got very many chunks to
> > handle. I thought I'd increase the chunk size for some specific pools,
> > like StoreEntry, MD5, and heap_node, to 256K so that dlmalloc would
> > place those in the mmap'd area. It does this, but quite selectively.
> > For some reason it seems to avoid using mmap, even for large
> > allocations.
>
> The default threshold for Linux glibc is apparently 128 KB.
> See glibc/malloc/malloc.c
The same goes for squid/lib/dlmalloc.c.
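For reference, both allocators expose these knobs via mallopt(). A quick
sketch making the defaults explicit (the values are illustrative only;
note that dlmalloc also caps the number of simultaneous mmap'd regions
via M_MMAP_MAX, and if I recall correctly its default cap is quite low,
which could explain the "selective" mmap behaviour above):

  #include <malloc.h>

  static void
  tune_malloc_for_chunks(void)
  {
      /* Requests at or above this size are served via mmap();
       * the glibc/dlmalloc default is 128 KB. */
      mallopt(M_MMAP_THRESHOLD, 128 * 1024);

      /* Cap on how many mmap'd regions malloc will keep at once;
       * when the cap is hit, large requests silently fall back to
       * the data segment. */
      mallopt(M_MMAP_MAX, 1024);
  }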
> > Now I wonder if it might be actually a bad idea to have too many mmapped
> > allocations for some reason. Any comments on that one?
>
> Only that there is higher overhead in wasted memory, due to page
> alignment of the allocated size plus malloc headers, and that there is
> a considerably higher cost in setting up/tearing down an mmap() than in
> rearranging an internal pointer in the data segment..
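To put a number on that alignment waste: a 256 KB chunk is an exact page
multiple, so the malloc header alone pushes the mmap'd region over a page
boundary. A toy calculation (the 16-byte header is an assumption; it
varies by allocator):

  #include <stdio.h>

  #define PAGE_SIZE  4096
  #define MALLOC_HDR 16                 /* typical, allocator-dependent */

  int
  main(void)
  {
      size_t request = 256 * 1024;      /* an exact multiple of the page size */
      size_t needed  = request + MALLOC_HDR;
      size_t mapped  = (needed + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
      printf("mapped %lu bytes, %lu wasted\n",
             (unsigned long) mapped, (unsigned long) (mapped - needed));
      /* prints: mapped 266240 bytes, 4080 wasted -- nearly a whole page */
      return 0;
  }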
Is there no OS limit on the number of mmapped objects? Could we mmap all
3G in 4K pages without problems?
I imagine the OS puts a non-accessible guard page between two mmaps to
catch overruns, so 4K pages would waste VM address space. But other than
that, no problems from the OS side?
To avoid that overhead, maybe we should mmap() directly from mempools
instead of going through xmalloc? Or is this a very bad idea?
The higher cost of mmap is acceptable with chunked pools, because we use
the same mmap for very many items, and setting up/tearing down happens
rarely.
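A rough sketch of the idea (chunk_mmap/chunk_munmap are hypothetical
helpers, not existing Squid functions; MAP_ANONYMOUS may be spelled
MAP_ANON on some platforms):

  #include <stddef.h>
  #include <sys/mman.h>

  /* Allocate one pool chunk straight from the OS, bypassing xmalloc. */
  static void *
  chunk_mmap(size_t size)
  {
      void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return p == MAP_FAILED ? NULL : p;
  }

  /* Release an idle chunk back to the OS. */
  static void
  chunk_munmap(void *p, size_t size)
  {
      munmap(p, size);
  }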
------------------------------------
Andres Kroonmaa <andre@online.ee>
Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia
Received on Tue Oct 31 2000 - 04:16:51 MST