[gclist] Allocation sizes under conservative collection.

Hans Boehm boehm@hoh.mti.sgi.com
Mon, 17 Nov 1997 10:38:17 -0800


On Nov 17,  8:22am, Charles Fiterman wrote:
> Subject: [gclist] Allocation sizes under conservative collection.
> I'm coming to the conclusion that allocation
> sizes under a page on a conservative collector
> should be only powers of two. The user should
> have an escape such as the ability to create
> memory pools of special sized objects.
>
> For most applications the waste seems to be
> negative. That is you use more objects on
> a page and that overcomes having waste inside
> of objects.
>
> Things like mark and gcBase run much faster
> using a shift rather than a divide or a
> lookup.
>
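The kind of shift-based computation described above might look roughly like the
sketch below.  The page size, the layout, and the names are purely
illustrative, not my collector's actual data structures.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096          /* assumed power-of-2 page size */

    /* Base of the page containing p: just a mask, since PAGE_SIZE is a
       power of 2. */
    static uintptr_t page_base(uintptr_t p) {
        return p & ~(uintptr_t)(PAGE_SIZE - 1);
    }

    /* Base of the object containing p, when every object on the page has
       the same power-of-2 size (1 << log2_size): a shift, no divide. */
    static uintptr_t object_base_pow2(uintptr_t p, unsigned log2_size) {
        uintptr_t page = page_base(p);
        uintptr_t index = (p - page) >> log2_size;
        return page + (index << log2_size);
    }

    /* With arbitrary object sizes, the same computation needs an integer
       divide by the object size, or a table lookup per size class. */
    static uintptr_t object_base_any(uintptr_t p, size_t obj_size) {
        uintptr_t page = page_base(p);
        return page + ((p - page) / obj_size) * obj_size;
    }
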
My impression is that there are a significant number of applications that
could really benefit from a special lightweight collector.  These are the
applications that typically keep < 1 MB or so of live data in the heap.  Such
a collector should probably use power of 2 sizes (assuming its data structures
are similar to the ones I use, and each page contains only objects of one
size).
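
A minimal sketch of the sort of thing I have in mind, assuming power of 2 size
classes and one object size per page (the header layout and the 16-byte
minimum are made up for illustration):

    #include <stddef.h>

    /* Round a request up to a power-of-2 size class; 16 bytes is an
       assumed minimum object size. */
    static size_t size_class(size_t n) {
        size_t sz = 16;
        while (sz < n) sz <<= 1;
        return sz;
    }

    /* Each page holds objects of exactly one size class, so mapping an
       object's address to its size is a single page-header lookup. */
    struct page_header {
        size_t obj_size;            /* the one size class on this page */
        unsigned long mark_bits[8]; /* enough mark bits for a 4K page of
                                       16-byte objects; sized for
                                       illustration only */
    };

With each page dedicated to one class, both the allocator and the marker can
index objects with a shift rather than a divide, which is the point made
above.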

For larger applications, I'm not so sure.  Some of the UT numbers suggest that
power of 2 sizes really cost you quite a bit.  And at that point you generally
have many pages for each size class anyway.  So the real loss comes from a
certain type of phase-like behavior, where one phase needs size M objects, the
next needs size N objects, M and N are not the same size class but would be if
rounded to the next power of 2 (say 72-byte and 96-byte objects, which would
both round to 128), and the coalescing strategy is insufficient to recycle the
first phase's pages for the second.  My impression is that's unlikely.  It
didn't seem to be a problem for Cedar in any case.

At least for the default configuration of my collector and for C/C++ programs,
powers of 2 also seem to be particularly bad size classes.  Many user-defined
structures naturally have a size that's a power of 2.  In order to accommodate
interior pointer recognition and C pointer semantics, you need to add at least
one byte to that, giving you natural sizes of 2**n + 1 bytes.
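
As a concrete (made-up) example of the effect, with the usual power-of-2
rounding:

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative only: a structure whose natural size is a power of 2
       (16 bytes with 4-byte pointers and longs, 32 bytes with 8-byte ones,
       assuming no padding). */
    struct node {
        struct node *left, *right;
        long key, value;
    };

    static size_t size_class(size_t n) {    /* round up to a power of 2 */
        size_t sz = 16;
        while (sz < n) sz <<= 1;
        return sz;
    }

    int main(void) {
        /* One extra byte so that a pointer just past the end of one object
           cannot be mistaken for an interior pointer into the next one. */
        size_t need = sizeof(struct node) + 1;
        printf("request %lu bytes -> %lu-byte class\n",
               (unsigned long)need, (unsigned long)size_class(need));
        /* e.g. 17 -> 32, or 33 -> 64: nearly half of each object is padding */
        return 0;
    }

So with power of 2 classes such objects nearly double in size, while odd size
classes absorb the extra byte cheaply.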

One final issue is that wasting half of each object is often more expensive
than wasting half of each page.  In the former case, you often also get poor
utilization of cache lines, especially on machines with long cache lines.  This
is probably less of an issue on low-end and Wintel machines, which I believe
usually have short cache lines.  On the other hand, SGI's high end machines
have 128-byte cache lines, which tend to be larger than most small objects.

Hans


-- 
Hans-Juergen Boehm
boehm@mti.sgi.com