[gclist] Fast-allocating non-copying GC's

Paul R. Wilson wilson@cs.utexas.edu
Wed, 6 Mar 1996 04:48:42 -0600


>Subject: Re: [gclist] Fast-allocating non-copying GC's
>
>We now have a scheme where all of the free blocks in the store are
>either on a "small" free list (there is one list for each object size
>from 1 to N), or on a "large" free list for blocks over size N.
>
>Allocations are made contiguously from blocks on the large free list
>until all large blocks are consumed. Then allocations are made from
>the small lists. Normally > 90% of allocations are from the large
>free list, which leads to a good allocation rate.

I'm a bit concerned about this.  Chopping up big blocks into small
ones unnecessarily seems like a generally dangerous thing to do.

Do you have any measurements of fragmentation when doing this?
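
To make sure I'm reading the scheme right, here's roughly what I take
the allocation path to be.  All of the names, the word-granularity
size classes, and the refill logic are my own guesses at the details,
in C:

  #include <stddef.h>

  #define N 256                        /* "small" sizes are 1..N words */

  typedef struct block {
      size_t        size;              /* block size, in words */
      struct block *next;
  } block;

  static block *small_free[N + 1];     /* one list per small size */
  static block *large_free;            /* blocks of more than N words */
  static char  *bump, *bump_end;       /* large block being carved up */

  static void *alloc(size_t words)
  {
      size_t bytes = words * sizeof(void *);

      /* Fast path: carve contiguously from the current large block. */
      if (bump + bytes <= bump_end) {
          void *p = bump;
          bump += bytes;
          return p;
      }
      /* Refill from the large list.  (A real allocator would return
         the unusable tail of the old bump region to a small list;
         omitted here.) */
      if (large_free != NULL) {
          block *b = large_free;
          large_free = b->next;
          bump     = (char *)b;
          bump_end = bump + b->size * sizeof(void *);
          return alloc(words);
      }
      /* Large blocks exhausted: fall back to the exact-size lists. */
      if (words <= N && small_free[words] != NULL) {
          block *b = small_free[words];
          small_free[words] = b->next;
          return b;
      }
      return NULL;                     /* time to GC, or tap the reserve */
  }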

>Two steps are used to reduce fragmentation:
>i) A fraction of Store is kept in reserve. This "clean" memory can be
>introduced for large allocations and/or to help with fragmentation.

This seems like a good idea.  I'm curious what a good strategy is
for keeping big free blocks around---how much do you want to keep
in reserve?  I don't think anybody has a good handle on this.
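
To make the question concrete, I'm imagining a knob roughly like
this, where the 10% figure is a number I made up, not a
recommendation:

  #include <stddef.h>

  /* Hold back a fixed fraction of the store as clean memory, and
     hand out reserve memory only when a large request can't be
     satisfied from the ordinary free lists. */

  static size_t store_bytes;           /* total size of the store */
  static size_t reserve_bytes;         /* clean memory still held back */

  static void init_reserve(void)
  {
      reserve_bytes = store_bytes / 10;    /* keep 10% in reserve */
  }

  static int may_tap_reserve(size_t request)
  {
      /* Called only after the large and small lists have failed. */
      return request <= reserve_bytes;
  }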

I've actually wondered whether something like this might be *good*
for fragmentation, combining some of the best features of best fit
and worst fit.

Best fit is good in that it never chops up bigger blocks than it has to.

Worst fit is good in that when you split a block, you always end up
with as big a remainder as possible, not some little unusable fragment.

The problem with worst fit is that it's too extreme---it systematically
whittles away at the largest block until it's not the largest anymore,
evening out the block sizes.  Then if you get a few requests for
larger sizes, you're out of luck.

Mixing (but not averaging) these strategies seems interesting.
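
Spelling out the two policies as code, since the contrast is easiest
to see there (toy linear scans over one free list; a real allocator
would use an indexed structure):

  #include <stddef.h>

  typedef struct fblock {
      size_t         size;
      struct fblock *next;
  } fblock;

  /* Best fit: the smallest block that satisfies the request, so we
     never split a bigger block than we have to. */
  fblock *best_fit(fblock *list, size_t want)
  {
      fblock *best = NULL;
      for (fblock *b = list; b != NULL; b = b->next)
          if (b->size >= want && (best == NULL || b->size < best->size))
              best = b;
      return best;
  }

  /* Worst fit: the largest block, so the remainder after splitting
     is as big (and as reusable) as possible. */
  fblock *worst_fit(fblock *list, size_t want)
  {
      fblock *worst = NULL;
      for (fblock *b = list; b != NULL; b = b->next)
          if (b->size >= want && (worst == NULL || b->size > worst->size))
              worst = b;
      return worst;
  }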

In our experiments, best fit works like a charm---you usually have a few
large free blocks, not a bunch of little ones---but we'd like to have
a larger sample of programs.

(Anybody have any memory-intensive C or C++ programs that exhibit
serious fragmentation with a good allocator?  We're looking for
clean programs we can trace and experiment with.)

>ii) Normally GC is a simple mark-sweep but occasionally there is a
>compaction phase when fragmentation is deemed to be too high.

Any idea how often this actually happens?
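
For what it's worth, I'd expect the trigger to look something like
this; both thresholds here are invented for the example:

  #include <stddef.h>

  /* One plausible reading of "fragmentation is deemed too high":
     plenty of memory is free overall, but it's shattered into
     pieces too small to be useful. */
  static int should_compact(size_t free_bytes, size_t largest_free,
                            size_t heap_bytes)
  {
      return free_bytes   > heap_bytes / 4    /* lots free overall... */
          && largest_free < heap_bytes / 32;  /* ...but badly shattered */
  }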

> This
>compaction has to be done in the presence of ambiguous pointers, so
>normally only a small amount of the store can be moved.

I would think you could move anything that wasn't directly referred
to by an ambiguous pointer.  I know of one GC that uses sliding
compaction, and simply slides movable objects past pinned objects.
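
The sliding pass itself can be quite simple.  Sketching it from
memory, with invented header fields, and leaving out the mark phase
and the pointer-fixup pass that rewrites references to moved objects:

  #include <stddef.h>
  #include <string.h>

  typedef struct {
      size_t size;      /* object size in bytes, header included */
      int    marked;    /* live, per the mark phase */
      int    pinned;    /* referenced by an ambiguous pointer */
  } obj;

  static void slide_compact(char *heap, char *heap_end)
  {
      char *dst = heap;
      for (char *src = heap; src < heap_end; ) {
          obj   *o  = (obj *)src;
          size_t sz = o->size;
          if (o->marked) {
              if (o->pinned) {
                  /* Can't move it; the gap from dst to src becomes
                     a free hole (which should go back on the free
                     lists), and sliding resumes past the object. */
                  dst = src + sz;
              } else {
                  if (dst != src)
                      memmove(dst, src, sz);   /* slide it down */
                  dst += sz;
              }
          }
          src += sz;
      }
      /* Everything from dst to heap_end is now one big free block. */
  }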

>Steve.
>
>-- 
>   +---------------------- Stephen Hooton ---------------------+
>   | Dept. of Computer Science,    Email: hootons@cs.man.ac.uk |
>   | Manchester,                   Voice:     +44-161-275-6292 |
>   | England.                      Fax  :     +44-161-275-6236 |
>   +---- URL: http://www.cs.man.ac.uk/arch/people/s-hooton/ ---+
>