[gclist] Compiler-specialized GCs

Daniel C. Wang dw3u+@andrew.cmu.edu
Wed, 21 Feb 1996 15:39:38 -0500 (EST)


boehm.PARC@xerox.com writes:
> "For example, if you don't mind losing modularity, one
> could imagine a compiler analyzing all the different kind of heap objects a
> program could allocate and hard coding a specialized GC that was specialized
> to collect the object, rather than interpreting bitmaps or other type
> information at runtime, one could also imagine using specialized copying
> routines too, if you're gc scheme need to copy."
> 
> There are no doubt many implementations that do something along these lines.
> SRC M3 is one of them.  The problem is it can easily turn into a pessimization
> rather than an optimization.  Bitmap interpretation is fairly cheap and you get

{stuff deleted}
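To make the contrast concrete before replying: generic bitmap
interpretation is roughly the loop below, next to the straight-line tracer
a compiler could emit for a known record. The object layout here is a
hypothetical sketch, not any particular collector's format.

    /* Generic tracing: consult a per-type bitmap at run time.
       One descriptor pointer per object, one bit per payload word. */
    typedef struct {
        unsigned nwords;          /* object size in words                */
        unsigned long ptrmap;     /* bit i set => slots[i] is a pointer  */
    } Descriptor;

    typedef struct Object {
        Descriptor *desc;         /* first word of every object          */
        void *slots[1];           /* really nwords payload words         */
    } Object;

    void trace_generic(Object *obj, void (*visit)(void **slot))
    {
        unsigned i;
        for (i = 0; i < obj->desc->nwords; i++)
            if (obj->desc->ptrmap & (1UL << i))
                visit(&obj->slots[i]);
    }

    /* Specialized tracing: the code a compiler could emit for a
       known two-pointer record, with no run-time bitmap test. */
    void trace_cons(Object *obj, void (*visit)(void **slot))
    {
        visit(&obj->slots[0]);    /* car */
        visit(&obj->slots[1]);    /* cdr */
    }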

I guess specializing bitmap interpretation was a rather naive example. I
was also wondering whether compilers could improve GC performance by doing
more complex analysis of heap objects. Suppose we have a copying GC that
can use both the Cheney copying algorithm and a modified non-copying
version that manages lists of free blocks. It seems it would be possible
for a compiler to emit code that allocates and collects some fixed-size
heap objects using the non-copying free-block implementation.
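Concretely, for one such fixed-size type the compiler could emit something
like the following. This is a hypothetical sketch; refill_free_list and
the mark phase are assumed, not shown.

    /* Hypothetical compiler-emitted allocator for one fixed-size type:
       a segregated free list of equal-size blocks, collected in place
       by mark-sweep instead of being copied. */
    #define BLOCK_WORDS 2             /* size known at compile time   */

    typedef union Block {
        union Block *next;            /* link while on the free list  */
        void *slots[BLOCK_WORDS];     /* payload while live           */
    } Block;

    static Block *free_list = NULL;

    void refill_free_list(void);      /* assumed: links in a fresh page */

    void *alloc_fixed(void)
    {
        Block *b;
        if (free_list == NULL)
            refill_free_list();
        b = free_list;
        free_list = b->next;
        return b;
    }

    /* After marking, the sweep threads each dead block back onto the
       list; live blocks never move, so this type needs no forwarding
       pointers or copying. */
    void sweep_block(Block *b)
    {
        b->next = free_list;
        free_list = b;
    }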

My impression is that no single GC scheme is optimal for all heap objects;
perhaps a smart compiler could derive and generate the right kind of hybrid
GC system, one that optimizes GC performance for a particular program.
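For instance (again only a sketch of the idea, not a worked-out design),
the per-type decision could be resolved entirely at compile time, so each
allocation site calls the scheme chosen for its type with no run-time
dispatch. CONS_IS_FIXED_SIZE stands in for the result of the compiler's
analysis.

    #include <stddef.h>

    /* Hypothetical copying side: bump allocation in a nursery that a
       Cheney collector evacuates when it fills. */
    static char *nursery_top, *nursery_end;
    void gc_copy(void);               /* assumed: runs the Cheney copy */

    void *alloc_copying(size_t nbytes)
    {
        char *p;
        if (nursery_top + nbytes > nursery_end)
            gc_copy();
        p = nursery_top;
        nursery_top += nbytes;
        return p;
    }

    /* Each type's allocation scheme is fixed when the program is
       compiled; the macro below stands in for that per-type choice. */
    #ifdef CONS_IS_FIXED_SIZE
    #define alloc_cons()  alloc_fixed()
    #else
    #define alloc_cons()  alloc_copying(2 * sizeof(void *))
    #endif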

BTW, someone pointed me to a PLDI '92 paper by Diwan and Moss that gives a
rough estimate of the cost of tracing roots in a precise copying GC system:
it's on the order of 1% of total GC time. Is this about the same order of
magnitude for conservative schemes?

+----------------------+
|     Daniel Wang      |
|                      |
| dw3u+@andrew.cmu.edu |
+----------------------+