[gclist] Re: PPC & GC, or GC and threads

Hans Boehm boehm@hoh.mti.sgi.com
Thu, 29 Jan 1998 15:17:10 -0800

On Jan 28,  1:47pm, Jan-Willem Maessen wrote:
> Subject: Re: [gclist] Re: PPC & GC, or GC and threads
> Dave Lloyd <Dave@occl-cam.demon.co.uk> writes:
> > The compiler tracks all references statically and ensures that
> > *somewhere* there is a root pointer for all objects being manipulated.
> > This root pointer may well be in an outer procedure frame - our calling
> > convention includes the rule that references passed to procedures must
> > be recorded in the ref map.
> If I understand this correctly, this means that you can retain dead
> data passed into a long-running procedure.  In a language like
> Fortran I would expect this to be a problem.  I'm assuming you have
> decided it isn't, or have taken steps to avoid it?
That's certainly a consideration.  But at some level that's an issue with
all collectors.  At best you'll be able to ignore those dead variables that
some imperfect compiler analysis identifies as such.  Eliot Moss may want to
comment on how much of a difference this makes.

Ben Zorn's measurements and some of our own suggest that for many malloc/free
C programs, the maximal amount of "reachable" memory seen by a conservative
collector is very similar to the maximum allocated at any one time by the
original program.
I have seen one case in which the reachable memory was forever growing, but the
manually managed version of the program kept memory use more or less bounded.
The dead-variable analysis required to fix that would have been well beyond
the capability of any compiler.  Most real programs seem to be somewhere in
between.
> On another note, more relevant to the discussion of multithreading, I
> note that the collectors under discussion seem (by and large) to
> assume that collection can happen safely at arbitrary times during
> execution.  Will collection need to be atomic, at least long enough to
> establish reachable register state?  If so, how is this atomicity
> established?  If not, why not?  (I fear I may have simply
> misunderstood a fundamental assumption here.  Perhaps my confusion is
> really a question about fundamental threading support, or a
> thinly-veiled complaint about the inadequacy of POSIX threads in the
> face of garbage collection.)
I share your complaint.  Our collector uses various hacks to stop threads
briefly.  I know of no way around that except by having the code itself check
for a pending GC.
> I am also curious when it becomes reasonable to expect some
> cooperation from the allocator itself.  For example, when the
> allocation rate is sufficiently high it is reasonable to suspend
> threads as they allocate, and then to invoke the collector when all
> threads are thus suspended.  Have tradeoffs such as this been well
> studied?  Coming from the functional languages end of the world (I
> wrote the parallel Haskell collector) we take a high allocation rate
> for granted; however, there is some concern that "really good"
> generated code may allocate infrequently enough to cause starvation.
> Solid numbers would certainly put my mind at rest.
I personally would argue that stopping threads only at allocation points (and at
points at which they block voluntarily, I presume?) is insufficient for many
languages.  It means that in a program with two compute-intensive threads, one
of which allocates heavily while the other never allocates, the allocating
thread can be blocked indefinitely: the collector must wait for the
non-allocating thread to reach a safepoint it never reaches.  This scenario is
quite possible for C or Fortran programs.  It's even possible for Java with
carefully written code.


Hans-Juergen Boehm