[gclist] Re: gclist-digest V1 #68
Matthew Fuchs
matt@wdi.disney.com
Fri, 7 Jun 96 10:32:40 PDT
Marc:
>
> || Back-Searching to Collect Distributed Cyclic Garbage.
>
> Sorry to disappoint you, but I've seen this before, lots of times. Can't
> give you a reference, though. You've added a number of nice features.
>
I presented this at IWMM 95 last September, but I had mentioned it to
many people (including you) over the last couple of years. No one
ever mentioned having seen it before, so I may be the source of all
these sightings.
...
> || Distributed cycles are collected by finding zombies, and searching
> || transitively backwards through the objects which reference the zombie
> || looking for an object which is locally reachable. If there is no such
> || object, the zombie is unreachable. It and all its ancestors can be
> || reclaimed.
>
> Why search *backwards*? You get the same amount of information by going
> forwards: if by going forward you get back to where you started, and have
> encountered no locally-reachable objects, then the path you took MIGHT be a
> cycle of garbage.
>
> I say "might", because you must take into account: (i) all possible paths,
> and (ii) concurrent mutator activity.
>
You cannot get much liveness information by going forwards. You might
find out whether you are on a cycle, but not whether you are alive or
dead (unless you trace through live nodes). Tracing backwards also
lets you know where to stop. As you once pointed out to me, forward
tracing algorithms scale with the size of the network. Backwards
tracing scales with the length of the reference chains, which is
certainly much less. It also allows you to judge the quality of the
references to you, which is very important on an open network.
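To make the contrast concrete, here is a minimal sketch of the
back-search in Python. The object layout is an assumption for
illustration only: each object carries a locally_reachable flag and a
referrers list of inverse references, neither of which is prescribed
by the paper.

    class Obj:
        """Hypothetical object layout, invented for this sketch."""
        def __init__(self, locally_reachable=False):
            self.locally_reachable = locally_reachable
            self.referrers = []   # inverse references: objects that point at us

    def back_search(zombie):
        """Trace backwards from a suspected-garbage object (a "zombie").

        Returns True if some transitive referrer is locally reachable
        (the zombie is live), False if no such ancestor exists (the
        zombie and all its ancestors can be reclaimed)."""
        visited = set()
        stack = [zombie]
        while stack:
            obj = stack.pop()
            if obj in visited:
                continue
            visited.add(obj)
            if obj.locally_reachable:
                return True                 # a live ancestor keeps the zombie alive
            stack.extend(obj.referrers)     # keep walking the inverse graph
        return False                        # no live ancestor anywhere

Because the search only ever visits ancestors of the zombie, its cost
tracks the length of the reference chains rather than the size of the
network.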
> Following pointers backwards can be done, but it is orders of magnitude
> more costly than following them forwards.
>
Well, yes and no. Depends how ya do it, and how often (after all,
most acyclic garbage is collected in the "usual" way). For example, if
you can propagate some information about how far a node is from a
persistent root (call this its minimum distance, or MD), the node can
only be garbage if this estimate keeps increasing. The final approach
I presented at IWMM, which is not in the paper because I only put it
all together in the last few days before the conference, combines the
inverse trace with the forward propagation of information in the
Maheshwari and Liskov paper (http://www.pmg.lcs.mit.edu/distance.ps)
(M&L). (I'd played with other ways to estimate this, but theirs is
quite elegant.) In M&L, after a local collection, the collector sends
MD estimates from local stubs to their scions. Any stub reached from
the local roots sends a 1; every other stub sends 1 + the minimum of
the incoming scion estimates. The point is that the MDs of objects on
garbage cycles increase without bound. In M&L, once the MDs exceed
some threshold, the suspect objects send around their network
locations and then all migrate to the network node with the lowest
ID. If they've caught a garbage cycle, it'll go with the next local
collection.
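A rough sketch of that propagation step, under assumed interfaces:
reached_from_roots and reaching_scions stand in for reachability
information the local collector already computes, and the Stub class
is invented for illustration.

    import math

    class Stub:
        """Outgoing remote reference; its MD estimate is sent to the
        remote scion at the other end."""
        def __init__(self):
            self.md = math.inf

    def propagate_md(stubs, reached_from_roots, reaching_scions):
        """One round of M&L-style forward MD propagation after a local GC.

        reached_from_roots(stub) -> True if a local root reaches the stub;
        reaching_scions(stub)    -> the scions that reach the stub locally.
        Each scion is assumed to hold the MD last received from its stub."""
        for stub in stubs:
            if reached_from_roots(stub):
                stub.md = 1
            else:
                incoming = [scion.md for scion in reaching_scions(stub)]
                stub.md = 1 + (min(incoming) if incoming else math.inf)
            # The new estimate would now be sent to the stub's remote scion.

Round after round, the estimates of stubs on a purely remote cycle keep
growing, which is exactly the unbounded growth the trigger below
exploits.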
Unfortunately, we can't assume objects can migrate, for a variety of
security and other reasons. However, we can still use the MDs to
decide when to start an inverse trace, and we can use information
from the forward propagation to build the inverse reference graph at
almost no additional overhead (we just need to keep track of the
originating scions as we trace through to the stubs to give them
their MD estimates). Briefly, an object only starts an inverse trace
when its MD exceeds some trigger value. If it is not collected,
everyone involved in the trace now knows what the trigger MD was and
can raise their barrier values appropriately (say, to 2x the old
value). This can be seen as a kind of "distributed generational
collection", and it bounds the algorithm's impact in terms of the
number of messages sent.
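A sketch of that trigger discipline, with the same caveats:
inverse_trace is a hypothetical stand-in for the back-search sketched
above, adapted to report the set of nodes it visited, and the doubling
factor is the 2x suggested in the text.

    TRIGGER_GROWTH = 2    # assumed growth factor: "2x the old value"

    def maybe_start_trace(obj):
        """Run an inverse trace only once the MD estimate has climbed
        past this object's barrier; if the object proves live, everyone
        the trace touched raises their barrier before the next attempt."""
        if obj.md <= obj.barrier:
            return
        visited = []                      # filled in by the trace
        if inverse_trace(obj, visited):   # True: a live ancestor was found
            for node in visited:
                node.barrier = TRIGGER_GROWTH * obj.barrier
        # otherwise the trace proved a dead cycle, which is reclaimed

The geometric growth of the barrier is what makes this behave like a
generational scheme: objects that repeatedly survive tracing are
suspected less and less often.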
I should also point out that the algorithm I presented in my paper
went beyond a simple depth-first search: it marks each node it
touches as either alive or garbage, whereas a pure DFS only proves
whether the initiating node is alive or dead. That also cuts down the
number of times the algorithm has to run, and it provides a lot of
interesting information that could be used for other heuristics.
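A simplified sketch of that idea, again using the invented
referrers/locally_reachable layout from above; the paper's algorithm
labels every node it touches, while this version only labels the nodes
it can decide from a single back-trace path.

    def back_search_mark(zombie):
        """Back-search that leaves a verdict on the nodes it touches.

        If no locally reachable ancestor exists, every visited node is
        provably garbage.  If one is found, the chain from it back to
        the zombie is provably alive; other visited nodes stay
        undecided in this simplified version."""
        parent = {zombie: None}        # records how the trace reached each node
        stack = [zombie]
        while stack:
            obj = stack.pop()
            if obj.locally_reachable:
                node = obj             # walk the chain obj -> ... -> zombie
                while node is not None:
                    node.mark = "alive"
                    node = parent[node]
                return True
            for ref in obj.referrers:
                if ref not in parent:
                    parent[ref] = obj
                    stack.append(ref)
        for node in parent:            # no live ancestor: the ancestor set is dead
            node.mark = "garbage"
        return False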
Matthew Fuchs
matt@wdi.disney.com
http://galt.cs.nyu.edu/students/fuchs