[gclist] Finalizers & Reference counting.
David Detlefs - Sun Microsystems Labs BOS
David Detlefs <david.detlefs@east.sun.com>
Tue, 27 Aug 2002 10:49:50 -0400 (EDT)
Charles --
I'm not sure about the structure of your argument here. You sometimes
seem to be arguing from current practical concerns, as in:
> Suppose you are GotRocks Inc and can have lots of processors and lots of
> storage but you must finish 90% of transactions in 1 second and the rest in
> 10 seconds. Suppose you are looking at a back end database server.
>
> Now the costs of reference counting can be beaten by having more
> processors. But the costs of collection are a huge lump right where it
> hurts.
Well, depending on your definition of "lots of storage" and "huge
lump", certainly concurrent collection can help a lot. For example,
we see applications with 500 MB of live data, and GC pauses of about
100 ms. (On older hardware!) This seems consistent with results from
other concurrent GC implementations.
I don't know what you mean by:
> Collection can only be subdivided so far because of cross pointers.
In our version of mostly-concurrent collection, collection is done on
a separate thread. The "cross-talk" consists of mutator updates made
"behind" the collector thread, which the collector must later
revisit. Certainly this causes us (and other
mostly-concurrent collectors) to need to stop the world eventually,
but a variety of techniques can decrease the duration of that pause.
And it can be parallelized, as Hans noted.
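To make "behind the collector thread" concrete, here is a rough
sketch, in C, of the card-marking style of write barrier that
mostly-concurrent collectors typically use. (All names are invented
for illustration; this is not our actual code.) The mutator's only
extra cost per pointer store is dirtying one byte; the collector
rescans dirty cards concurrently, and the final stop-the-world pause
only has to process whatever is still dirty:

    #include <stdint.h>

    #define CARD_SHIFT 9                /* 512-byte cards (a typical size) */

    extern uint8_t   card_table[];      /* one dirty byte per heap card */
    extern uintptr_t heap_base;         /* lowest address in the heap   */

    /* Emitted (inline) at every pointer store in the mutator. */
    static inline void write_barrier(void **field, void *new_value)
    {
        *field = new_value;
        /* Record which card was written; the collector rescans it. */
        card_table[((uintptr_t)field - heap_base) >> CARD_SHIFT] = 1;
    }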
At other times you seem to be arguing from asymptotic complexity:
> Actually I was assuming the number of processors increased with memory but
> that they couldn't fully divide collection. That is each processor would do
> part of collection but they would have to cross talk and the cross talk
> would increase without bounds.
This sounds like an argument about theoretical limits to parallel
collection. It's probably a good one, but the problem seems to me a
practical one today, unlikely to run into theoretical limits for
quite a while. In practice, as Hans has noted, GC parallelizes quite
well.
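To illustrate why the cross-talk stays manageable in practice, here
is a simplified sketch of a parallel marking loop. (Invented names;
a real collector would use lock-free work-stealing deques and a
termination protocol, both elided here.) Each GC thread drains its
own mark stack and only touches another thread's stack when its own
runs dry, so inter-processor communication is proportional to load
imbalance, not to heap size:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct obj obj;
    struct obj { int marked; size_t nfields; obj *fields[]; };

    typedef struct {
        pthread_mutex_t lock;
        size_t          top;
        obj            *slots[1 << 16];
    } mark_stack;

    extern mark_stack stacks[];         /* one per GC thread */
    extern int        nthreads;

    static obj *try_pop(mark_stack *s)
    {
        obj *o = NULL;
        pthread_mutex_lock(&s->lock);
        if (s->top > 0)
            o = s->slots[--s->top];
        pthread_mutex_unlock(&s->lock);
        return o;
    }

    static void push(mark_stack *s, obj *o)
    {
        pthread_mutex_lock(&s->lock);
        s->slots[s->top++] = o;
        pthread_mutex_unlock(&s->lock);
    }

    static void mark_loop(int self)
    {
        for (;;) {
            obj *o = try_pop(&stacks[self]);
            /* The only cross-talk: steal when local work is gone. */
            for (int i = 0; o == NULL && i < nthreads; i++)
                if (i != self)
                    o = try_pop(&stacks[i]);
            if (o == NULL)
                break;                  /* termination protocol elided */
            for (size_t i = 0; i < o->nfields; i++) {
                obj *c = o->fields[i];
                if (c != NULL && !c->marked) {
                    c->marked = 1;      /* a racy duplicate mark is benign */
                    push(&stacks[self], c);
                }
            }
        }
    }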
Which is not to say that reference counting isn't interesting! For
many applications with very large heaps, almost all of the heap will
be "static", in the sense of long-lived, with a very slowly changing
pointer graph. Another portion of the heap will be "churning" with
shorter-term objects (which could still have lifetimes of hours...).
If we need to trace the large graph of static data on every
collection just to determine reachability in the active data, that
could be costly compared to a technique like RC, which does work
proportional to the rate of mutation.
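For comparison, here is a bare-bones sketch of what RC pays per
pointer store. (Invented names; cycle collection and deferred or
coalesced counting are deliberately omitted.) The cost is a small
constant per mutation, independent of how much static data sits in
the rest of the heap:

    #include <stdlib.h>

    typedef struct rc_obj {
        long count;
        /* ... payload and outgoing pointer fields would go here ... */
    } rc_obj;

    /* Overwrite a pointer field, keeping reference counts consistent. */
    static void rc_store(rc_obj **field, rc_obj *new_value)
    {
        rc_obj *old = *field;
        if (new_value != NULL)
            new_value->count++;
        *field = new_value;
        if (old != NULL && --old->count == 0)
            free(old);  /* a real RC would also release old's children */
    }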
But again, the constants matter, and it may be a while before this
argument holds practical weight.
Dave