[gclist] Time-stamping objects...
Paul R. Wilson
wilson@cs.utexas.edu
Mon, 15 Apr 1996 14:36:24 -0500
>>
>> Hi there!
>>
>> As you might remember I'm doing research on programming languages,
>> especially on Oberon-2.
>>
>> I recently found a paper about side-effect-free functions. It argued
>> that you could dynamically ensure side-effect-freeness by time-stamping all
>> dynamically created objects, thereby getting more "functional" behavior
>> within the framework of an imperative language.
>>
>> Now, as I find this approach quite good for the "function thing", I was
>> wondering if there are efficient GC algorithms that need such a time-stamp
>> too. I could thus achieve two goals with one mechanism.
Several kinds of GC algorithms need some sort of per-object time
information. Generational GCs need to be able to tell young objects
from old ones. In copying GCs this is often done by segregating young
objects spatially and using an address test, but some systems (either
copying or non-copying) encode generation information in a field in the
object header. (Some others ensure that all of the objects in a page
are of roughly the same age, and keep per-page age information.)
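To make the header-field idea concrete, here is a minimal sketch in C; the
names and the two-bit generation field are illustrative assumptions, not
taken from any particular collector:

  /* Illustrative only: a two-bit generation number packed into the
     low bits of an object header word. */
  #include <stdint.h>

  #define GEN_BITS  2u                      /* up to four generations */
  #define GEN_MASK  ((1u << GEN_BITS) - 1u)

  typedef struct {
      uint32_t header;  /* low bits: generation; high bits: size, tags, etc. */
      /* object fields follow */
  } Object;

  static unsigned generation_of(const Object *o) {
      return o->header & GEN_MASK;
  }

  static void promote(Object *o) {          /* survivor moves up a generation */
      unsigned g = generation_of(o);
      if (g < GEN_MASK)
          o->header = (o->header & ~GEN_MASK) | (g + 1u);
  }

A collection of generation N then scans only objects whose generation_of()
is at most N, plus whatever older-to-younger references the write barrier
has remembered.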
Several kinds of distributed garbage collectors use timestamps, but these
are timestamps having to do with the GC's traversal of the reachability
graph, not the actual ages of the objects.
>> Oberon-2 normally uses mark-and-sweep GCs. I understand that there are
>> "generation-based" GC algorithms? Are they "better" than a "simple"
>> mark-and-sweep?
Lots of things are better than a simple mark-sweep GC. One is a non-simple
mark-sweep GC, such as an incremental one. Another is a generational GC of
any of several sorts: copying, fake copying (like ours), mark-sweep, or
mark-compact.
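To give a feel for the incremental part, here is a rough sketch in C (with
made-up names, and with the write barrier and the sweep phase left out) of
tri-color marking driven by a gray worklist, doing a bounded amount of work
per step:

  #include <stddef.h>

  enum color { WHITE, GRAY, BLACK };

  typedef struct obj {
      enum color    color;
      size_t        nchildren;
      struct obj  **children;
  } Obj;

  #define WORKLIST_MAX 4096
  static Obj   *worklist[WORKLIST_MAX];  /* gray objects awaiting scanning */
  static size_t worklist_top;

  static void shade(Obj *o) {            /* white -> gray */
      /* A real collector must handle worklist overflow rather than
         silently dropping objects, as this sketch would. */
      if (o != NULL && o->color == WHITE && worklist_top < WORKLIST_MAX) {
          o->color = GRAY;
          worklist[worklist_top++] = o;
      }
  }

  /* Do at most 'budget' objects' worth of marking, then let the mutator
     run again; repeated calls eventually blacken everything reachable. */
  static void mark_step(size_t budget) {
      while (budget-- > 0 && worklist_top > 0) {
          Obj *o = worklist[--worklist_top];
          for (size_t i = 0; i < o->nchildren; i++)
              shade(o->children[i]);
          o->color = BLACK;              /* scanned: children all shaded */
      }
  }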
>Generational collectors eliminate fragmentation and if you have enough
>storage are more efficient.
This is incorrect. Being generational is orthogonal to whether the
GC compacts data. Not all generational collectors are copying collectors,
and the best ones are not pure copying collectors.
See my GC surveys on our web site for more on these issues.
> In fact as the amount of storage increases
>copying collectors approach zero overhead.
Yes, but the overhead approaches zero fairly slowly, and you reach a point
of diminishing returns. Copying collection is overrated. (Which is not
to say I'm against it, just that other systems have virtues too.)
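To see the "fairly slowly" part concretely: if a copying collection costs
roughly c units per live byte, and that work is amortized over the H - L
bytes you allocate between collections (H = heap size, L = live data), the
cost per allocated byte is roughly c * L / (H - L). A throwaway C
calculation (the constants are arbitrary assumptions, not measurements)
shows the diminishing returns:

  #include <stdio.h>

  int main(void) {
      const double c    = 10.0;  /* assumed copying cost per live byte */
      const double live = 1.0;   /* live data, in arbitrary units      */
      /* Double the heap each time: early doublings help a lot, later
         ones barely matter. */
      for (double heap = 2.0; heap <= 64.0; heap *= 2.0)
          printf("heap = %2.0f x live  ->  %.3f cost units per byte allocated\n",
                 heap, c * live / (heap - live));
      return 0;
  }

Going from 2x to 4x the live data cuts the per-byte cost by a factor of
three; going from 32x to 64x only halves an already small number, while
doubling the memory bill.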
> If you never reach the end
>of the space copying collectors have zero overhead. With this in mind
>it makes sense to build execution frames on the heap.
I'm not sure I follow. If you put activation information on the
heap, you're pretty much guaranteed to reach the end of any reasonable
amount of memory in a hurry, and to inflate GC costs somewhat by forcing
more frequent GCs.
>As a disadvantage they need more space to be useful, and it's hard to
>interface with things like C that expect stuff to stay put.
They're also more difficult to make real time, or to parallelize.