On 19 Oct 2000, at 21:45, Henrik Nordstrom <hno@hem.passagen.se> wrote:
> > If we need to preallocate a bunch every time to grow the pool, then why
> > do that one object at a time, instead of allocating a chunk and splitting
> > it internally? Why incur 16-byte overhead of libmalloc for each item of
>
> Because we might want to be able to maintain a high-water mark on the
> amount of idle memory. Doing that with chunked allocations is not trivial.
In practice idle memory is low. Even on the busiest cache I've never seen idle
memory come anywhere near the default idle limit of 50MB.
> > For this we'd need to create all pools at the same time, and only once.
> > After running for a while, system memory will be fragmented by zillions of
> > requests for URL strdups and frees, and the next time you preallocate,
> > you won't get any similarity to chunked allocation. Individual
> > object proximity will be absolutely random.
>
> Here I disagree. Sure there will be some fragmentation, but not a
> zillion. With the proposed pattern malloc should be quite effective in
> limiting the fragmentation by self-organising.
For every request we do tens of strdups, of variable size; that easily gets
to zillions over time ;) These strings live for different lengths of time, all
depending on service times, concurrent requests, etc. The next time you do a
bulk alloc for a pool, libmalloc will fill all the gaps that have appeared
between these strings, and the pool will not be in consecutive memory space.
Also, as the time between an idle release and the next bulk alloc is large,
all this can only get worse. Eventually you'd end up with pretty random
locations for every pooled object. I can't see a way to avoid that without
allocating memory in chunks.
> What I forgot to say was that memory should be freed in bunches as well,
> where several entries are freed at once, not one at a time.
But I can't see how going from on-demand allocation to bulk allocation of the
same small individual objects changes anything. Only pools allocated at
startup and then never released could get sequential memory, and leave
sequential free space at free time. Freeing fragmented space in bulk doesn't
help in any case, imho.
By putting memPools onto chunks I'm trying to address the following (a rough
sketch of such a chunked pool follows the list):
- reduce malloc overhead for small objects (2-50 bytes)
- reduce fragmentation of allocated space:
  pack small objects tightly, so that randomness of memory access is
  reduced; this should help reduce paging and is nice to CPU caches
- allow large chunks to be allocated by mmap, keeping these pools away
  from the heap fragmentation problem
- make chunks full-page sized and page-aligned, ideally all equal size
- account for memory consumed more precisely
- reduce free-space fragmentation, either directly or indirectly
For example, 4-16K buffers are very good candidates for use in large
mmapped chunks; so are the StoreEntry db, MD5 strings, and StoreMem.
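To make the idea concrete, below is a minimal sketch of what such a chunked
pool could look like. This is not the actual Squid MemPool code; the names
(ChunkPool, chunk_create, pool_alloc, pool_free), the 4KB chunk size and the
mmap details are illustrative assumptions only.

/*
 * Minimal sketch of a chunked pool, for illustration only.  Each chunk
 * is one page-sized, page-aligned mmap() block carved into equal-sized
 * slots threaded on an internal free list, so small objects stay packed
 * together and off the ordinary heap.
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define CHUNK_SIZE 4096U                 /* one page per chunk */

typedef struct chunk {
    struct chunk *next;                  /* next chunk in this pool */
    void *freelist;                      /* first free slot in this chunk */
    char mem[];                          /* the packed object slots */
} chunk;

typedef struct {
    size_t obj_size;                     /* slot size, pointer-aligned */
    size_t objs_per_chunk;
    chunk *chunks;
} ChunkPool;

static void
pool_init(ChunkPool *p, size_t obj_size)
{
    /* round up so every slot can hold a free-list pointer and stays aligned */
    obj_size = (obj_size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    p->obj_size = obj_size;
    p->objs_per_chunk = (CHUNK_SIZE - sizeof(chunk)) / obj_size;
    p->chunks = NULL;
}

static chunk *
chunk_create(ChunkPool *p)
{
    /* mmap keeps these pools off the heap and returns page-aligned memory
     * (MAP_ANONYMOUS is spelled MAP_ANON on some systems) */
    chunk *c = mmap(NULL, CHUNK_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (c == MAP_FAILED)
        return NULL;
    c->freelist = NULL;
    for (size_t i = 0; i < p->objs_per_chunk; i++) {
        void *slot = c->mem + i * p->obj_size;
        *(void **) slot = c->freelist;   /* thread the slot onto the free list */
        c->freelist = slot;
    }
    c->next = p->chunks;
    p->chunks = c;
    return c;
}

static void *
pool_alloc(ChunkPool *p)
{
    chunk *c;
    for (c = p->chunks; c; c = c->next)  /* reuse a chunk with free slots */
        if (c->freelist)
            break;
    if (!c && !(c = chunk_create(p)))
        return NULL;
    void *obj = c->freelist;
    c->freelist = *(void **) obj;
    return obj;
}

static void
pool_free(ChunkPool *p, void *obj)
{
    /* chunks are CHUNK_SIZE-aligned, so masking the pointer finds the
     * owning chunk; no per-object header is needed at all */
    chunk *c = (chunk *) ((uintptr_t) obj & ~(uintptr_t) (CHUNK_SIZE - 1));
    *(void **) obj = c->freelist;
    c->freelist = obj;
    (void) p;                            /* per-pool idle accounting omitted */
}

This is where the per-item libmalloc overhead goes away: the only bookkeeping
is the chunk header, shared by every object in the page, and both idle memory
and fragmentation can be accounted for per chunk.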
> > Also, freeing from pool tail is only adding fuel to the fragmentation.
>
> Again I disagree. What I meant by the tail here is the tail of the
> sorted list of idle allocations, so high memory gets freed before low
> memory and low allocations get used before high ones.
That's a good idea, and it can be applied to chunks as well (a sketch of that
follows).
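Something along these lines, where the helper names and the release() callback
are hypothetical, not existing Squid functions: keep fully idle chunks sorted
by address, reuse the lowest ones first, and trim from the high end so the
upper part of the address space can actually be given back.

/*
 * Sketch of address-ordered idle handling applied to chunks.
 */
#include <stdlib.h>

static int
cmp_addr(const void *a, const void *b)
{
    const char *x = *(const char *const *) a;
    const char *y = *(const char *const *) b;
    return (x > y) - (x < y);
}

/* idle[] holds pointers to completely idle chunks; keep at most `keep`
 * of them and release the rest, highest addresses first */
static void
trim_idle(void **idle, size_t *n_idle, size_t keep, void (*release)(void *))
{
    qsort(idle, *n_idle, sizeof *idle, cmp_addr);   /* sort low to high */
    while (*n_idle > keep)
        release(idle[--*n_idle]);                   /* drop from the tail */
}

Allocation would then prefer idle[0], so long-lived pools drift toward low
addresses and the freed high tail has a chance to be unmapped or returned by
the allocator.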
> > I don't believe that preallocation in bulk with current memPools will
> > solve fragmentation.
>
> In the production servers I have had, fragmentation hasn't actually been
> a big issue. Most of the allocated memory has actually been in use, even
> if not all of it is accounted for in memory pools.
I try to consider both free-space and allocated-space fragmentation; they are
related. Maybe fragmentation of free space isn't much of an issue. But keeping
related stuff close together may have an indirect effect on performance, and
it also reduces the chance of sudden bursts of free-space fragmentation.
> > Basically, we might not reach an ideal solution with chunked pools either,
> > but pool fragmentation and overhead can be accounted for on the memory
> > utilization page. I think this alone is already a useful feature. Currently
> > we have some 50% of squid's size in some shadow, unaccounted area.
>
> Yes, but is it free (fragmented memory) or is it in use by Squid for
> uncounted structures? The mallinfo stats should tell.
How can I tell? Normally it looks good:
Memory usage for squid via mallinfo():
        Total space in arena:  313047 KB
        Ordinary blocks:       304139 KB 370623 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:         20336 KB     17 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:    8907 KB
        Total in use:          324475 KB 104%
        Total free:              8907 KB   3%
Memory accounted for:
        Total accounted:       207924 KB
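For reference, here is roughly where these report lines come from. This is
only a sketch based on the standard struct mallinfo fields; the exact code
Squid uses may differ. Note that "Total in use" includes the mmap()ed holding
blocks, which live outside the arena, which is why it can come out above 100%.

/*
 * Rough sketch of deriving the report above from mallinfo().
 */
#include <malloc.h>
#include <stdio.h>

static void
report_mallinfo(void)
{
    struct mallinfo mi = mallinfo();
    int in_use = mi.uordblks + mi.usmblks + mi.hblkhd;   /* bytes */
    int free_bytes = mi.fordblks + mi.fsmblks;

    printf("Total space in arena:  %6d KB\n", mi.arena >> 10);
    printf("Ordinary blocks:       %6d KB %6d blks\n", mi.uordblks >> 10, mi.ordblks);
    printf("Small blocks:          %6d KB %6d blks\n", mi.usmblks >> 10, mi.smblks);
    printf("Holding blocks:        %6d KB %6d blks\n", mi.hblkhd >> 10, mi.hblks);
    printf("Free Small blocks:     %6d KB\n", mi.fsmblks >> 10);
    printf("Free Ordinary blocks:  %6d KB\n", mi.fordblks >> 10);
    printf("Total in use:          %6d KB %.0f%%\n",
           in_use >> 10, 100.0 * in_use / mi.arena);
    printf("Total free:            %6d KB %.0f%%\n",
           free_bytes >> 10, 100.0 * free_bytes / mi.arena);
}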
3% fragmented? Not a big deal. But this box has run for only 2 days.
I think I've occasionally seen quite an awful picture. I had concluded
then that the lost memory had gone to fragmentation, because we don't
know of any memory leak inside Squid. But now, if I had to back this
up, I guess I'd be in trouble... Maybe dirty startups, when loading
the store db takes a long time, fragment memory a lot, as StoreEntry
objects are very long-lived and lots of other stuff flies by while they
are being loaded. Maybe some kind of spikes in activity cause the
fragmenting, but sometimes Squid's size skyrockets and stays higher
than normal for a long time.
------------------------------------
Andres Kroonmaa <andre@online.ee>
Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia