On 27 Oct 2000, at 17:34, Alex Rousskov <rousskov@measurement-factory.com> wrote:
> On Sat, 28 Oct 2000, Andres Kroonmaa wrote:
>
> > from this I tend to conclude that sizeof() returns sizes suitably aligned
> > for use in arrays.
>
> Sure, there is no argument here. The problem is that you cannot stick
> a pointer inside an element of an array of an arbitrary type. That is,
>
> *(char**)(array+i) = ptr_to_something;
>
> is not safe for some types of arrays on some machines. Just like
>
> *(double*)(array+i) = 1.0;
>
> is not safe. An example of a dangerous array would be an array of
> "struct { char buf[9]; }".
hmm, I guess you mean that if I have an array of 9-byte items, then
I'll blow it if I try to insert a double at the first byte of each item.
I understand. But I'm not trying to implement general-purpose mempools,
I'm trying to implement mempools optimised for squid only.
To date, not a single pool has ever used such sizes; only pointer-size
aligned sizes have been passed to mempools. Given that such pointer
stuffing gives any benefit at all only for pools with millions of items,
I think we can optimise for those cases and accept some overhead
for the others by aligning items on double boundaries. E.g. in the example
above, the mempool would reserve char buf[16]; instead of char buf[9];
this would make it no different from malloc() alignment.
Actually, as we are not going to put doubles into the array, we can relax
the alignment on 32-bit systems to 4-byte alignment.
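The rounding I have in mind can be sketched like this (a minimal illustration,
not squid's actual code; the helper name is made up):

```c
#include <stddef.h>

/* Round a requested item size up to a multiple of the pointer size,
 * so that a free-list pointer can always be stored at the start of
 * a free item without misalignment.  Hypothetical helper. */
static size_t
pool_item_size(size_t requested)
{
    size_t align = sizeof(void *);
    return (requested + align - 1) / align * align;
}
```

With this rounding, the dangerous `char buf[9]` example becomes a 16-byte
item on a 64-bit box, matching malloc()'s own alignment.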
> > why not? lets say pointer is 8bytes, by rounding up size to next multiple
> > of 8, we get good alignment, don't we? If we really don't risk, we can
> > round up to 16, which is what malloc does. then alignment issues would be
> > no way worse than with malloc().
>
> As far as I understand, you want to stick a pointer in an element of
> an array. If you mean that the relative position of a pointer inside
> the element will vary depending on the element size and index, then
> yes, you can do it. However, this sounds like an ugly hack that is not
> worth the benefits. Again, I may be misinterpreting your intentions.
the only benefit is to avoid one pointer of overhead per pooled item.
That is 8MB of overhead per 1M items (on 64-bit systems). Given that we
have 3 huge pools (StoreEntry, MD5, heap_node), we currently suffer 24MB
of overhead per 1M objects. With 4M objects, we suffer ~100MB of overhead
just for keeping mempool pointers. Without chunking, malloc itself
adds another ~100MB of overhead, so we can have 200MB wasted for 4M objects.
That is about 20% of total memory used, which I think is excessive. Disks
are so much cheaper than RAM and than systems that can handle multi-GB RAM.
Stuffing a pointer into free items seems most efficient, and for the vast
majority of cases it comes naturally, without any need for hacks.
The other option I thought of was a bit array with 1 bit per item to
keep track of free items, but I think it is simpler, and in most cases
more efficient, to simply round up sizes and align items to sizeof(ptr).
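The pointer-stuffing scheme can be sketched as follows (a minimal,
hypothetical chunked pool, not squid's implementation; memcpy is used for
the pointer load/store so the sketch stays safe even for odd alignments,
though with rounded item sizes a direct pointer store would also be fine):

```c
#include <stddef.h>
#include <string.h>

/* Free items themselves hold the free list: the first sizeof(void *)
 * bytes of each free item store the pointer to the next free item,
 * so no per-item bookkeeping memory is needed. */
typedef struct {
    size_t item_size;   /* already rounded up to sizeof(void *) */
    void *free_list;    /* head of the intrusive free list */
} Pool;

static void
pool_init(Pool *p, size_t item_size, void *chunk, size_t nitems)
{
    char *base = chunk;
    size_t i;
    p->item_size = item_size;
    p->free_list = NULL;
    for (i = 0; i < nitems; i++) {
        void *item = base + i * item_size;
        memcpy(item, &p->free_list, sizeof(void *)); /* item -> old head */
        p->free_list = item;                         /* item is new head */
    }
}

static void *
pool_alloc(Pool *p)
{
    void *item = p->free_list;
    if (item)
        memcpy(&p->free_list, item, sizeof(void *)); /* pop the head */
    return item;
}

static void
pool_free(Pool *p, void *item)
{
    memcpy(item, &p->free_list, sizeof(void *));     /* push onto head */
    p->free_list = item;
}
```

Allocation and free are O(1), and the only per-pool cost is one head
pointer, which is the whole point of the scheme.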
We can get xxxGB of disk in a cheap box and wish to handle yyM objects
with a RAM limit of 1-2GB; that is why I'm so focused on reducing RAM
usage.
------------------------------------
Andres Kroonmaa <andre@online.ee>
Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia
Received on Tue Oct 31 2000 - 03:52:07 MST