[gclist] Allocator algorithms.

Charles Fiterman cef@geode.geodesic.com
Mon, 20 May 1996 14:32:28 -0500


Under the conservative collector, which operates at a page level,
sits a storage allocation algorithm. By operating at a page level
I mean that requests smaller than a page are satisfied by breaking
up a page obtained from that allocator.
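
To make that concrete, here is a minimal sketch of sub-page
allocation: a 4K page from the page-level allocator is carved into
equal-sized chunks that get threaded onto a free list for that size.
PAGE_SIZE, page_alloc(), and free_list[] are illustrative names only,
not the collector's real interface.

/* Sketch only, not the collector's actual code: sub-page requests
 * are served by carving a 4K page into equal chunks and threading
 * the chunks onto a per-size free list. */
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

static void *page_alloc(void)            /* stand-in: fetch one 4K page */
{
    return malloc(PAGE_SIZE);
}

static void *free_list[PAGE_SIZE + 1];   /* one free list per request size */

void *small_alloc(size_t size)           /* sizeof(void *) <= size <= PAGE_SIZE */
{
    if (free_list[size] == NULL) {
        char *page = page_alloc();       /* break up a fresh page */
        size_t n = PAGE_SIZE / size;     /* chunks that fit; the rest is wasted */
        for (size_t i = 0; i < n; i++) {
            void **chunk = (void **)(page + i * size);
            *chunk = free_list[size];    /* push chunk onto the size's list */
            free_list[size] = chunk;
        }
    }
    void **block = free_list[size];      /* pop the head */
    free_list[size] = *block;
    return block;
}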

Anyway, we are unhappy with circular first fit. It seems
to create too much fragmentation. Does anyone have any
suggestions? We are looking for Cartesian tree algorithms
and best fit.
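
For comparison, here is what a best-fit search looks like over a
plain free list; it is only an illustration, not code from our
allocator. Where first fit takes the first block that is big enough,
best fit scans the whole list for the smallest block that still
satisfies the request, which tends to leave the big blocks intact.

/* Illustrative best-fit search over a singly linked free list. */
#include <stddef.h>

struct free_block {
    size_t size;
    struct free_block *next;
};

struct free_block *best_fit(struct free_block *head, size_t request)
{
    struct free_block *best = NULL;
    for (struct free_block *b = head; b != NULL; b = b->next) {
        if (b->size >= request && (best == NULL || b->size < best->size)) {
            best = b;
            if (best->size == request)   /* exact fit, stop looking */
                break;
        }
    }
    return best;    /* caller unlinks it and splits off any leftover */
}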

Here is an interesting story of how simple probabilistic thinking
can go far astray. If you allocate large objects out of
4K pages, you would expect the average loss per object to be
about 2K (half a page). It is in fact far higher.

Programs often allocate in blocks that are multiples of
4K; they know about page sizes too. The collector adds a
byte at the end of each object so that pointers one past the
end will keep it alive, so a 4K request really occupies
4K + 1 bytes and needs an 8K block.
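
To put numbers on it (the constant and the rounding rule below are
just the obvious ones, not quoted from the collector): a 4K
allocation plus the extra byte is 4097 bytes, which rounds up to two
pages, so nearly half of the 8K granted is lost.

/* Worked example of the loss: a one-page request plus the collector's
 * extra end byte rounds up to two pages. */
#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
    size_t request = 4096 + 1;                               /* 4K block + end byte */
    size_t pages   = (request + PAGE_SIZE - 1) / PAGE_SIZE;  /* = 2 */
    size_t granted = pages * PAGE_SIZE;                      /* = 8192 */
    printf("asked %zu, got %zu, wasted %zu\n",
           request, granted, granted - request);             /* wasted 4095 */
    return 0;
}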