[gclist] zillion objects and gc
Boehm, Hans
hans_boehm@hp.com
Wed, 29 Aug 2001 09:31:48 -0700
> From: Nick Barnes [mailto:Nick.Barnes@pobox.com]
>
> At 2001-08-29 02:35:55+0000, "Richard A. O'Keefe" writes:
> > What if you use lazy freeing?
>
> Then it's not so bad. If you don't scan an object on the zero-count
> queue until you want to allocate it, then you're about to touch it
> again anyway. But totally lazy freeing has its disadvantages. For
> instance, it doesn't play well with best-fit allocation.
>
I think we've been here before. The main problem I have with lazy freeing
is that it quickly becomes much more complicated than people often
realize. You may end up with long chains of cons cells pointing to
multi-megabyte objects. You don't notice that the multi-megabyte objects
are available for reallocation until you've processed all the cons cells.
I think this means that fully lazy freeing isn't really guaranteed to work.
I don't know of a way to bound the heap size without eagerly freeing
everything before you expand the heap. And that means you've only reduced
the frequency of long pauses, not eliminated them. Or you're restricted to
fixed-size objects, as in some of the original work.
You can probably bound the heap size and pause time with an argument similar
to the one we normally use for tracing collectors, if you do a certain amount
of freeing per byte allocated. I haven't worked this through. But in a
sense this combines the disadvantages of a tracing GC with those of
reference counting: tracing GC space overhead + space to hold the ref count
+ ref count time overhead + much of the complexity of a tracing GC +
problems with cycles. It makes more sense if you run the reference counter
on a different processor, as in the PLDI '01 paper by Bacon et al.
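The failure mode above -- a long chain of cons cells hiding a large dead
object -- and the one-unit-of-freeing-per-allocation discipline can both be
seen in a toy sketch. This is purely illustrative Python of my own; the names
(LazyHeap, zero_queue, etc.) are hypothetical, not from any real collector:

```python
# Toy sketch of fully lazy reference counting. An object whose count
# drops to zero goes on a zero-count queue; its children are only
# decremented later, when the allocator pops it to reuse the space.
from collections import deque

class Obj:
    def __init__(self, size, children=()):
        self.size = size
        self.children = list(children)
        self.rc = 0          # reference count

class LazyHeap:
    def __init__(self):
        self.zero_queue = deque()
        self.reclaimed = 0   # bytes known to be reusable

    def inc(self, o):
        o.rc += 1

    def dec(self, o):
        o.rc -= 1
        if o.rc == 0:
            self.zero_queue.append(o)   # deferred: children untouched

    def _reclaim_one(self):
        # One unit of freeing work per allocation (fully lazy).
        if self.zero_queue:
            dead = self.zero_queue.popleft()
            self.reclaimed += dead.size
            for c in dead.children:
                self.dec(c)             # may enqueue further dead objects

    def new(self, size, children=()):
        self._reclaim_one()
        o = Obj(size, children)
        for c in o.children:
            self.inc(c)
        return o

heap = LazyHeap()
big = heap.new(8 * 2**20)         # an 8 MB object at the end of the chain
heap.inc(big)                     # root reference
head = big
for _ in range(1000):
    cell = heap.new(16, [head])   # cons cell pointing at the rest
    heap.inc(cell)                # root moves to the new cell...
    heap.dec(head)                # ...and off the old head
    head = cell
heap.dec(head)                    # drop the last root: whole chain is dead

# The 8 MB object becomes reusable only after 1001 further allocations,
# each of which scans one entry off the zero-count queue.
allocs = 0
while heap.reclaimed < big.size:
    heap.new(16)
    allocs += 1
print(allocs)                     # 1001
```

Until the whole chain has been scanned, the 8 MB are dead but not visibly
free, which is exactly why bounding the heap seems to require falling back
to eager freeing (or a per-byte rather than per-object freeing budget)
before expanding it.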
Hans