[gclist] Finalizers & Reference counting.
Andrew Shi-hwa Chen
chenandr@cse.msu.edu
Tue, 20 Aug 2002 15:42:56 -0400
> As systems expand the time taken by collection becomes a fatal problem. One
> company had contracts that said 90% of transactions must be done in 1
> second and the rest in 10 seconds. When the system expanded past 1 GB they
> started to fail due to garbage collection time. Hacking got them past the
> problem but at some point such guarantees must fail. At some point
> collection times must exceed any given limit.
I concur. So, we actually have four choices:
1. Don't bound collection times (give up on realtime guarantees).
2. Don't collect (give up on garbage collection).
3. Don't scale past the limit (get a promise that the system won't
expand).
4. Don't have the collection be noticeable (use concurrent
collection).
Of course, no one is happy with any of these, except maybe (4), which
is why it's such a wonderful research area; but ultimately it too has
problems due to memory and/or address-range limitations (the revenge
of (3)), thrashing (the revenge of (1)), or correctness issues (the
revenge of (2)).
> Reference counting has bugs, it can't deal with cyclic data structures, it
> has excessive overhead but it has some striking advantages. There are no
> serious SMP bottlenecks, it never has to stop the world. Some languages
> like the latest Perl use reference counting and have garbage collection as
> a backstop.
People give interpreted languages a bit of a performance leniency -
they say "oh, it's interpreted, we can't expect it to be as fast as a
compiled language" - and so a little more or a little less
book-keeping overhead winds up being not such a big deal.
But we run into problems with languages that try to attain the
mythical "zero-overhead" garbage collection in a "fast as compiled
language" environment. People will all of a sudden start to say
"locking overhead for reference counting is too great" or "we can't
afford the extra word for the reference count" or stuff like that.
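To make that objection concrete, here's a minimal C++ sketch of my own
(RefCounted, incref, and decref are names I just made up): every
object carries one extra word of count, and every handle copy or drop
pays for an atomic read-modify-write on it - which is exactly the
"locking overhead" being complained about on an SMP box.

    #include <atomic>

    struct RefCounted {
        std::atomic<long> refs{1};        // the "extra word" per object
        virtual ~RefCounted() = default;  // (plus a vptr on top of it, here)
    };

    inline void incref(RefCounted* p) {
        p->refs.fetch_add(1, std::memory_order_relaxed);  // atomic add per copy
    }

    inline void decref(RefCounted* p) {
        // the acquire/release ordering is what stands in for a lock on SMP
        if (p->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete p;                     // last reference: reclaim right here
    }

    int main() {
        RefCounted* p = new RefCounted(); // refs == 1
        incref(p);                        // a second handle somewhere
        decref(p);                        // back to 1
        decref(p);                        // 0: freed immediately, no pause
    }

The flip side is right there in decref, though: the object is reclaimed
at the point the last reference dies, with no stop-the-world anywhere
in sight.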
If anything, this kind of mentality is, to me, why I see more and more
non-realtime systems being implemented in interpreted (non-compiled)
languages - because the feature loss associated with "as fast as
possible execution speed" is too costly.
I'd like to think that, just as everyone except a few Fortran
programmers has realized the advantages of a run-time stack (despite
the overhead associated with it), so too will everyone eventually
realize the advantages of things such as always doing run-time method
resolution, garbage collection, and so on. But that's just my opinion.
This is relevant because this is precisely the kind of opposition that
I get when I suggest "hey, let's add another field for that GC case to
the object". They look at me and stare and say "do you realize that a
decorator or a 'point' class or any other class with fewer than X
fields is going to have such a huge overhead as a result?" and I say
"yes, but the overhead is going to be worth it in the long run", and
they shake their heads and say no and go back to doing other things,
like debugging their memory leaks, or rebooting their machine that
crashed due to a memory leak in the OS.
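To put a number on the "point class" complaint, here's the kind of
back-of-the-envelope sketch they wave at me (Point, GcHeader, and
TracedPoint are hypothetical names of mine): one extra pointer-sized
per-object field turns a 16-byte object into a 24-byte one on a
typical 64-bit ABI - 50% more space for every little value class.

    #include <cstdio>

    struct Point {                // the bare class: two doubles
        double x, y;
    };

    struct GcHeader {             // the extra per-object field I'm asking for
        void* gc_link;            // e.g. a refcount, mark word, or queue link
    };

    struct TracedPoint : GcHeader {
        double x, y;
    };

    int main() {
        std::printf("Point:       %zu bytes\n", sizeof(Point));       // typically 16
        std::printf("TracedPoint: %zu bytes\n", sizeof(TracedPoint)); // typically 24
    }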
> To the good features of reference counting let me add finalizers. It is
> generally agreed by garbage collection experts that collectors shouldn't
> run user code. We prefer various methods of sending death notices where
> objects need to notify the program of their demise.
Indeed, reference counting lets us re-unify the (now rather
disparate) notions of finalizers and destructors as a
"zero-reference-count event handler".
> Finalizers must fulfill five requirements. Sure, objects built get
> finalized. Safe, you can't violate type safety via finalizers. General,
> any code may be in finalizers, including exceptions. Prompt, running
> finalizers is not indefinitely delayed. Ordered, finalizers happen in
> some sort of predictable order.
>
> Finalizers run by collectors fail every requirement but Safe, and
> sometimes fail that. Finalizers run by reference count can fulfill all
> the requirements, though they don't fulfill Ordered very well. In order
> to fulfill the requirements they must not be part of cyclic data
> structures.
It's not difficult to satisfy Ordered as well, as long as the
programmer actually has control over the order of the fields in the
object - a slight bit of run-time overhead is enough to ensure easily
predictable ordering. But we do still have a "resurrection"
possibility with these that can cause some problems depending on the
semantics of "finalization/destruction".
> This suggests languages should be able to declare reference counted objects
> with finalizers. They should also have a way of checking that some objects
> are not part of cyclic data structures. If a reference counted object does
> get collected the collector shouldn't run its finalizer but the user can
> provide for a death notice as a backstop. Perhaps there can be a debugging
> notice of the event.
Before anyone says "thread context" instead of death notice, note that
we can have a thread that blocks until there's a death notice to
handle, so it's easy to implement the "run in a thread context" on top
of death notices but not necessarily vice versa (especially if the
language and/or OS doesn't support threads).
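A minimal sketch of that arrangement (DeathNotice, post_death_notice,
and notice_loop are names I'm inventing here, and it assumes the
implementation language has threads at all - which was the caveat):
the zero-count path posts a notice, and one dedicated thread blocks
until there is a notice to handle, so the user code runs in an
ordinary thread context rather than inside the collector or inside
whichever thread happened to drop the last reference.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct DeathNotice { std::string who; };   // whatever the program needs

    static std::queue<DeathNotice> notices;
    static std::mutex m;
    static std::condition_variable cv;
    static bool shutting_down = false;

    // Called from the decref path when an object's count reaches zero.
    void post_death_notice(DeathNotice n) {
        { std::lock_guard<std::mutex> lk(m); notices.push(std::move(n)); }
        cv.notify_one();
    }

    // The "finalizer thread": blocks until a notice arrives, then runs the
    // user's handler outside the collector and the releasing thread.
    void notice_loop() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !notices.empty() || shutting_down; });
            if (notices.empty()) return;       // shutdown, nothing left to do
            DeathNotice n = std::move(notices.front());
            notices.pop();
            lk.unlock();
            std::printf("handling death of %s\n", n.who.c_str());
        }
    }

    int main() {
        std::thread t(notice_loop);
        post_death_notice({"cache entry #7"}); // pretend a count just hit zero
        { std::lock_guard<std::mutex> lk(m); shutting_down = true; }
        cv.notify_one();
        t.join();
    }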
> To the extent that systems can be reference counted they can avoid
> collection and have finalizers. Packaged classes such as collections can
> avoid cyclic structures. To some extent this can push back the limits on
> system size created by garbage collection.
This, too, has its limits - and as a result what I continue to hear is
"oh, since we'll have to GC anyway, why bother reference counting?".
On the opposite side, there's the Obj-C community with the "since we
have to design with no cycles in mind, why bother with GC and why not
just do ref-count instead?"
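For what it's worth, that no-cycles discipline is easy to state as a
coding rule. Here's how it looks with the standard C++ counted
pointers (shared_ptr for the owning edge, weak_ptr for the back edge)
- just an illustration of what "design with no cycles in mind" means
in practice, not anything from the Obj-C runtime itself:

    #include <cstdio>
    #include <memory>

    struct Parent;

    struct Child {
        std::weak_ptr<Parent> parent;  // non-owning back edge: no cycle
        ~Child() { std::puts("child reclaimed"); }
    };

    struct Parent {
        std::shared_ptr<Child> child;  // owning edge
        ~Parent() { std::puts("parent reclaimed"); }
    };

    int main() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;          // a shared_ptr here would leak both
        p.reset();                     // counts reach zero: both reclaimed
    }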
The real question is what application:
1. Is going to have a large system size?
2. Is better off having its large system size in its own address
space and not in a database?
3. Can afford the additional "overhead" of reference counting?
4. Can't afford the performance penalties of GCing such a large
system?
4 implies a realtime system.
3 implies that performance is considered not so important.
1 implies a large system.
2 implies a large system that isn't well structured.
Maybe it's just me, but I see 4 and 3 at odds with each other and 1
and 2 at odds with each other.
The only thing that I can think would possibly meet all of those would
be an object-oriented database, but if it tried to be "high
performance" and wanted good benchmarks, the issue of (3) becomes a
delicate balance with (4), and in most cases they'll want to go do
concurrent collection anyway, I'd suspect.
I'm afraid that GC+RC = Platypus (nice in a zoo, but certainly not as
successful as a raccoon).
> Charles Fiterman
> 403 Burton Ave
> Highland Park, IL 60053
>
> The gods confound the man who first found out how to distinguish hours.
> Confound him too, who in this place set up a sundial,
> to cut and hack my days so wretchedly into small portions!
>
> Titus Maccius Plautus 254-184 B.C.
--
Andrew Chen
chenandr@cse.msu.edu