[gclist] Finalizer flame wars.
Robert A Duff
bobduff@world.std.com
Fri, 10 May 1996 09:42:07 -0400
cef@geode.geodesic.com (Charles Fiterman) says:
> If a finalizer does not affect the program, it can safely be put in
> #if 0 ... #endif. Why have it?
I think it makes sense to separate "logical correctness" from "resource
usage". A program is logically correct if it *would* be correct,
assuming it had an infinite supply of resources. You can define what
things are resources as you find convenient -- certainly time and memory,
but perhaps also slots in a fixed-size table (which is essentially what the
file handle resource is, for example). Both are equally important, but
can be analyzed separately.
It makes sense to say that a finalizer should not affect the logical
properties of the program, but may affect its resource usage (memory
used, speed, etc.).
In fact, this is exactly what a gc does -- it doesn't affect the logical
properties of the program, but (we hope) makes it able to run in a
finite amount of memory, and (perhaps) makes it run faster.
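To make that concrete, here's a small sketch (the names and the two
finalizers are invented for illustration; they aren't from any particular
collector).  The first finalizer only returns a buffer to a pool, so it
changes how much memory the program uses but never what it computes; the
second produces output, so the program's observable behavior now depends
on when (or whether) the collector runs it.

    #include <cstdio>
    #include <vector>

    struct Image {
        char *pixels;                       // big buffer drawn from a shared pool
    };

    static std::vector<char*> pixel_pool;   // the "resource"

    // Resource-only finalizer: a collector would call this at some
    // unspecified time after an Image becomes garbage.  Whether it runs
    // now, later, or never, the program computes the same answers; only
    // the amount of memory tied up differs.
    void finalize_image(void *obj) {
        Image *img = static_cast<Image*>(obj);
        pixel_pool.push_back(img->pixels);
    }

    // By contrast, this finalizer affects the logical properties of the
    // program: what it prints depends on the collector's timing.
    void chatty_finalizer(void *obj) {
        std::printf("image %p discarded\n", obj);
    }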
I must admit, however, that when the resource in question is fairly
limited or fairly expensive (like the file handle case), using a
finalizer that gets run who-knows-when is a questionable practice. We
might have millions or billions of memory cells, but we probably only
have hundreds or thousands of file handles.
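The scarcity is easy to demonstrate (a contrived sketch; it assumes a
Unix-like system where /etc/passwd exists, and the exact numbers will
vary): a program that leaves closing to a finalizer that runs
who-knows-when can exhaust the handle table long before the collector
feels any memory pressure.

    #include <cstdio>

    int main() {
        // Open files and "forget" them, the way a program might if it
        // relied on finalizers to close handles eventually.  The handles
        // are a fixed-size table: long before memory runs low, fopen
        // starts failing.
        int opened = 0;
        for (;;) {
            std::FILE *f = std::fopen("/etc/passwd", "r");
            if (f == 0)
                break;             // out of file handles
            ++opened;              // never closed; "the finalizer will do it"
        }
        std::printf("ran out after %d open files\n", opened);
        return 0;
    }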
> If they are truly non-affecting, #if 0 ... #endif is the most efficient
> and appropriate construct.
Not true. It's quite possible for the "..." above to do something that
is completely invisible, except that it speeds up the program. For
example, rearranging things so that locality of reference is increased.
> > > This is original sin.
> >
> > I'd call it "bad programming practice".
>
> I prefer colorful language where appropriate.
:-)
> The problem with all this stuff [allowing user-defined hooks into the
> gc] is that, if buggy, the problem appears far from the source.
Yes, of course. A bug in the gc itself can cause all kinds of nasty
problems. Clearly, if you allow user-defined hooks, those are
logically part of the gc, and can cause equally nasty problems.
Is the application programmer less trustworthy than the person writing
the gc?
It seems to me a reasonable thing to allow the application programmer to
modify gc behavior via some sort of hooks. The important thing is that
they be designed in such a way that the "dangerous" code can be isolated
from the rest, and made much smaller than the rest.
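One way to get that isolation (a sketch only -- the interface is invented
here, not taken from any real collector): the collector owns the
traversal, and the application supplies one small callback that reports
the roots the collector can't see.  All of the "dangerous" knowledge lives
in those few lines, which can be reviewed on their own.

    #include <vector>

    struct Object;                     // whatever the collector manages

    // Hook interface a collector might offer: the application registers
    // one object that enumerates extra roots during each collection.
    struct GcHook {
        virtual void report_extra_roots(std::vector<Object*> &roots) = 0;
        virtual ~GcHook() {}
    };

    // The application's entire "dangerous" contribution.  If it's wrong,
    // it's wrong here, in half a dozen lines, not scattered through the
    // whole program.
    struct MyCacheHook : GcHook {
        std::vector<Object*> cache;    // objects reachable only via this cache
        void report_extra_roots(std::vector<Object*> &roots) {
            roots.insert(roots.end(), cache.begin(), cache.end());
        }
    };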
The alternative is the paternalistic attitude that says only the
language implementer can play dangerous games. That's fine when it
works, but in *some* cases, it will cause people not to use gc at all.
In any case, almost all programming languages allow *some* way of
affecting gc behavior, possibly causing bugs. You can save a pointer on
the disk, or somewhere else on the internet, thus hiding it from the gc.
We don't know how to garbage collect the internet, and probably never
will.
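For instance (a deliberately silly sketch -- don't do this): write the
pointer's bits to a file and drop the in-memory copy.  No collector can
trace through the file, so the object may be reclaimed, and reading the
value back gives you an address with no promise that anything still lives
there.

    #include <cstdint>
    #include <cstdio>

    int main() {
        int *p = new int(42);          // pretend this came from a collected heap

        // "Save the pointer on the disk" -- the collector can't see this.
        std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
        std::FILE *f = std::fopen("hidden_ptr.bin", "wb");
        std::fwrite(&bits, sizeof bits, 1, f);
        std::fclose(f);
        p = 0;                         // last traceable reference is gone

        // ... a collection may reclaim the object here ...

        // Reading the bits back reconstructs the address, but not the
        // object: dereferencing q would be a wild-pointer bug.
        f = std::fopen("hidden_ptr.bin", "rb");
        std::fread(&bits, sizeof bits, 1, f);
        std::fclose(f);
        int *q = reinterpret_cast<int*>(bits);
        (void)q;
        return 0;
    }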
>...For
> example suppose a user defined mark method failed to mark an object
> that was actually in use. You now have a wild pointer bug. The problem
> with your semantically non-affecting cleanup is that a bug could make
> it semantically affecting.
Right. The application programmer needs to understand that, and take
appropriate steps (treat this as a dangerous thing, to be isolated, and
perhaps inspected more carefully).
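Here's that failure mode in miniature -- a toy mark-and-sweep over a
two-field node, with everything made up for illustration.  The
user-supplied mark method forgets one field, the sweep frees a live
object, and the program is left holding a wild pointer far from the line
that actually caused it.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Node {
        Node *left, *right;
        bool  marked;
    };

    static std::vector<Node*> heap;      // every Node ever allocated

    Node *alloc() {
        Node *n = new Node();
        n->left = n->right = 0;
        n->marked = false;
        heap.push_back(n);
        return n;
    }

    // User-supplied mark method -- with a bug: it forgets ->right.
    void user_mark(Node *n) {
        if (n == 0 || n->marked) return;
        n->marked = true;
        user_mark(n->left);
        // user_mark(n->right);          // oops
    }

    // The collector's sweep: frees everything the mark phase didn't reach.
    void sweep() {
        std::vector<Node*> live;
        for (std::size_t i = 0; i < heap.size(); ++i) {
            if (heap[i]->marked) {
                heap[i]->marked = false;
                live.push_back(heap[i]);
            } else {
                delete heap[i];
            }
        }
        heap.swap(live);
    }

    int main() {
        Node *root = alloc();
        root->right = alloc();           // in use, but never marked
        user_mark(root);
        sweep();
        // root->right now refers to freed memory: a wild pointer.
        std::printf("root->right = %p (dangling)\n", (void*)root->right);
        return 0;
    }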
> So as a language designer I opt for transparent. The user should be
> unaware of the collector. I am considering the option of saying the
> language protects the user from himself unless he says "I know what I'm
> doing, leave me alone." For example, assignments are rigidly type-checked
> unless the user says something like "shove". But the exceptions where
> the user tells the language to bug off should be very rare.
They should be rare, and should be isolated.
>... In C++
> type casts occur on almost every line.
Good example. In Ada, you can do the same sort of dangerous casts as in
C++. However, the "dangerous" casts use a different syntax from the
"safe" ones, so a person reading the code can tell what's going on.
IMHO, this is a better solution than simply forbidding evil.
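C++'s named casts make the same distinction, for what it's worth: the
value-preserving conversions and the bit-reinterpreting ones get different
spellings, so the dangerous sites are as easy to grep for as Ada's
Unchecked_Conversion instantiations.  A small sketch:

    #include <cstdint>

    struct Packet { std::uint32_t header; };

    double average(int sum, int count, char *wire_bytes) {
        // "Safe" cast: an ordinary, value-preserving conversion.
        double avg = static_cast<double>(sum) / count;

        // "Dangerous" cast: reinterpret raw bytes as a Packet.  The
        // distinct spelling makes every such site stand out to a reader
        // (alignment and aliasing caveats apply, of course).
        Packet *p = reinterpret_cast<Packet*>(wire_bytes);
        (void)p->header;

        return avg;
    }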
- Bob