[gclist] Aging based garbage collection?

Andrew Shi-hwa Chen chenandr@cse.msu.edu
Wed, 18 Oct 2000 16:59:46 -0400


I remember seeing a while ago, on a web-site that I now can't find, a 
description of a GC algorithm as follows:

1) objects have an associated time-stamp (initially creation time)
2) a thread continuously does DFS from the root and updates the
	time-stamps of the objects it encounters (nodes that are accessed
	via a standard write barrier both have their time-stamps updated
	and are considered part of the "root" set)
3) all objects are in a priority queue based on their age
4) whenever an allocation request occurs, it can lazily sweep the
	oldest objects away - that is, those that are so old that it is
	impossible for them to be reachable, because if they were, they
	would have been updated by the thread that updates the
	time-stamps.
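The four steps can be sketched in a single-threaded form (everything
here - the class names, a logical clock in place of real time-stamps,
a plain list in place of the priority queue - is my own illustration,
not the algorithm from the page I saw):

```python
class Obj:
    """A heap object: a time-stamp plus outgoing references (step 1)."""
    def __init__(self, stamp):
        self.stamp = stamp
        self.refs = []

class AgingHeap:
    """Single-threaded sketch of the aging collector described above."""
    def __init__(self):
        self.clock = 0            # logical clock standing in for real time
        self.roots = []
        self.objects = []         # stands in for the age-ordered queue (step 3)
        self.last_scan_start = 0

    def tick(self):
        self.clock += 1
        return self.clock

    def allocate(self):
        obj = Obj(self.tick())    # time-stamp = creation time (step 1)
        self.objects.append(obj)
        return obj

    def write_barrier(self, src, dst):
        # Step 2's write barrier: refresh the target's time-stamp.
        # (A concurrent version would also treat dst as a root until
        # the in-progress scan finishes.)
        src.refs.append(dst)
        dst.stamp = self.tick()

    def scan(self):
        # Step 2: one DFS pass from the roots, refreshing time-stamps.
        start = self.tick()
        seen, stack = set(), list(self.roots)
        while stack:
            obj = stack.pop()
            if id(obj) in seen:
                continue
            seen.add(id(obj))
            obj.stamp = self.tick()
            stack.extend(obj.refs)
        self.last_scan_start = start

    def sweep(self):
        # Step 4: anything stamped before the last full scan began
        # cannot be reachable - the scan would have refreshed it.
        live = [o for o in self.objects if o.stamp >= self.last_scan_start]
        freed = len(self.objects) - len(live)
        self.objects = live
        return freed
```

Note the built-in conservatism: an object stamped after the scan began
survives at least one more cycle, so objects drift into "sweepable" one
at a time rather than being condemned en masse.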

I thought the beauty of this, as opposed to the Baker treadmill or 
other incremental approaches, was that there is no single point in 
time when a large number of condemned objects are considered 
garbage*. Furthermore, the priority of the thread that does the 
updating is easily seen to be inversely proportional to the amount of 
memory left.

Has anyone else seen this before? Is there a better name for this? 
(Perhaps that is why I can't find any papers on it - because I don't 
remember what it is called?)


Thanks,
Andrew Chen
Michigan State University

* Is there some flaw in the logic that prevents this from being so 
(the gradual transformation from condemned to garbage), and if so, is 
that why I can't find that web-page anymore? AC