[gclist] ref-counting performance cost

Andrew Shi-hwa Chen chenandr@cse.msu.edu
Fri, 08 Sep 2000 10:45:00 -0400


I tried repeatedly to understand what was going on here and I'm still 
a little fuzzy. I hope I'm not the only one.

I think what I am unclear on (and what is of significant interest to
me) is how those functions that check their arguments to see if they
are garbage actually do so. Am I correct in my assumption that it is
merely a check of whether the reference counts are zero or not?
Likewise, would the only derefs be on named variable assignments or
on function returns (where a scope disappears)?

If indeed that's where the only derefs would be, then we can optimize
this further by using the technique described in
 http://www.acm.org/pubs/citations/proceedings/pldi/349299/p264-cannarozzi/
for handling the stack frame pops (assuming this language doesn't
have continuations or anything else that prevents the stack from
being linear), such that the only time an actual deref would occur
would be when a side effect other than the (initial) binding to an
argument occurred (for example, SET or an explicit RPLACA or RPLACD).
In a "mostly functional" language like LISP this should be rare.
Using that technique has the added advantage that it will
(eventually) reclaim cycles (where "eventually" might be when the
last stack frame pops).
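As a very rough sketch of that deferral (my own reading of the idea,
not the cited paper's actual algorithm; the Obj/Frame names here are
hypothetical): each frame records every binding it makes and releases
them all in one batch when it pops, so bindings dying mid-frame cost
nothing individually.

```python
# Sketch: defer all derefs to the stack-frame pop. Assumes a linear
# stack (no continuations), per the caveat above.

class Obj:
    def __init__(self):
        self.rc = 0              # reference count

class Frame:
    def __init__(self):
        self.bound = []          # objects this scope references

    def bind(self, obj):
        obj.rc += 1              # one increment per binding
        self.bound.append(obj)
        return obj

    def pop(self):
        # the only deref site: batch all decrements at scope exit
        for obj in self.bound:
            obj.rc -= 1
        self.bound.clear()

f = Frame()
x = f.bind(Obj())
y = f.bind(x)                    # rebinding needs no intermediate deref
assert x.rc == 2
f.pop()
assert x.rc == 0                 # reclaimable once the frame pops
```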

Of course, I think that technique has all the same "negatives" as
normal reference counting (without your optimizations) does
(requiring updates on every heap pointer assignment, and so on), so
utilizing it with your technique may yield only limited performance
improvement (although, if unions are infrequent, there may be an
advantage in that cache lines would not be dirtied quite as much).

> >     - A friend who implements tracing collectors asks why bother
> >       reclaiming garbage "all the time" by checking counts at
> >       every pointer store when one can just use a tracing collector
> >       and trigger it occasionally at allocation points only ......
> I never understood why there is a need in traditional RC to adjust the
> refcount on every pointer value change. So I don't, I always assume that a
> memory cell is garbage so the default state is refcount = 0. Whole trees are
> constructed like this until a side effect occurs (e.g. binding to argument).
> Then the ref counts are adjusted across the tree. So in my Lisp, (READ) can
> return entire trees with zero refcounts. This does mean that referencing is
> a recursive operation, however it only recurses on zero count nodes. This
> gives me complete freedom to manipulate pointers without reffing and
> dereffing everywhere.
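
If I follow that scheme correctly, a minimal sketch might look like
the following (the Cell/ref/deref names are mine, not from any real
interpreter): cells default to refcount zero, and reffing recurses
only into zero-count children, so an already-claimed subtree costs a
single increment.

```python
# Sketch of deferred reference counting: trees are built "as garbage"
# (refcount 0) and only claimed when a side effect occurs.

class Cell:
    def __init__(self, car=None, cdr=None):
        self.car = car
        self.cdr = cdr
        self.rc = 0              # fresh cells default to "garbage"

def ref(cell):
    """Claim a tree; recurse only through zero-count nodes."""
    if not isinstance(cell, Cell):
        return
    cell.rc += 1
    if cell.rc == 1:             # first claim: walk into the children
        ref(cell.car)
        ref(cell.cdr)

def deref(cell):
    """Release a tree; recurse into children when our count hits zero."""
    if not isinstance(cell, Cell):
        return
    cell.rc -= 1
    if cell.rc == 0:
        deref(cell.car)
        deref(cell.cdr)

# (READ)-style construction: a whole tree with zero refcounts.
tree = Cell(Cell(1, 2), Cell(3, None))
assert tree.rc == 0 and tree.car.rc == 0

ref(tree)                        # side effect (e.g. binding): claim it
assert tree.rc == 1 and tree.car.rc == 1 and tree.cdr.rc == 1

ref(tree)                        # a second claim does not recurse
assert tree.rc == 2 and tree.car.rc == 1
```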
> 
> As a further 'optimisation', functions which are guaranteed to return a tree
> containing every argument do no reffing of their arguments. So for example
> CONS and LIST do NO reference increment of their arguments on entry nor deref
> on exit.
> 
> There are two other classifications of functions which have slightly
> different 'optimisations'. Functions which return a new value (e.g. +) in a
> new cell or tree just get their arguments checked for garbage. The return
> has a zero refcount. And functions which always return a subset of the cells
> referenced in their arguments (e.g. CAR) get their arguments checked for
> garbage except the return value. {Quite a dubious 'optimisation'!}
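
If I read those three classes right, a toy sketch would be something
like this (all names hypothetical; `freed` stands in for the
allocator's free list):

```python
# Sketch of the three function classes: no-ref constructors,
# fresh-value functions, and subset-returning accessors.

class Cell:
    def __init__(self, car, cdr):
        self.car, self.cdr, self.rc = car, cdr, 0

freed = []   # stand-in for the allocator's free list

def check_garbage(cell):
    # a cell whose count is still zero after the call is unclaimed
    if isinstance(cell, Cell) and cell.rc == 0:
        freed.append(cell)

# Class 1: CONS/LIST return a tree containing every argument,
# so they touch no reference counts at all.
def lisp_cons(x, y):
    return Cell(x, y)            # result itself starts at refcount 0

# Class 2: + returns a fresh value; arguments just get a garbage
# check, and the return also has a zero refcount.
def lisp_add(x, y):
    result = x + y
    check_garbage(x)
    check_garbage(y)
    return result

# Class 3: CAR returns a subset of its argument's cells; the argument
# is checked for garbage but the returned part is not -- the dubious
# bit being that freeing the pair could strand the value just handed
# back.
def lisp_car(cell):
    result = cell.car
    check_garbage(cell)
    return result
```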
> 
> Details aside, the point is that by deferring reference counting away from
> the physical act of assigning a pointer, the application (in this case a
> LISP interpreter) can actually _choose_when to mark cells to keep and
> _choose_ when to look for garbage. In general terms this is like a locally
> scoped mark-sweep. This opens the door to many potential short cuts.
> 
> I believe a compiler could analyse code and deduce that a function like
> (DEFUN FOO (X Y) (CONS X (LIST X Y (CONS X Y)))) requires no reference count
> adjustments at all to its arguments or return.
> 