[gclist] CORBA C++ bindings and garbage collection
Ji-Yong D. Chung
virtualcyber@erols.com
Thu, 8 Mar 2001 00:20:27 -0500
Hi
> > [I asked] Does the IDL to C++ language mapping rule out the use of a garbage
> > collector in designing and implementing an ORB?
>
> [you replied] It's not the language mapping that rules out GC, it's the
> programming model and the wire protocol. IIOP has no facility for tracking the
> number of clients talking to a server. In order for a client to be able
> to talk to an object over the wire, it has to be explicitly exported
> on the server side.
Here, what you are saying is that the collector has no
easy way of knowing when there are no more live remote references
to servants, right?
> j> [I asked] If the mapping does rule it out, then wasn't it a mistake for the
> j> original mapping designers not to consider garbage collection?
>
> [you replied] I don't think so. Distributed garbage collection is a nice idea in
> the abstract, but it comes with far too many problems to be something
> you really want in a commercial setting. Also, at the time CORBA was
> being agglutinated, there were no commercially-noticeable languages
> that supported GC in existence.
I was not thinking of applying DGC -- rather
I was thinking of using GC locally only, and treating (1) references
and (2) servants in special ways.
If an object is a servant, we can just save it from being gc'ed
and use a specialized thread to perform eviction (or whatever).
If an object is a reference to a remote object, then we simply
check whether the host of the reference's target object is in
the list of reachable hosts
(the list is pre-computed in another thread, prior to GCing)
and, if it is not, remove or finalize the reference.
This approach basically treats all local objects other than servants,
including references to remote objects, as vanilla garbage-collectible
objects.
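Very roughly, the bookkeeping I have in mind would look something like
the sketch below (all of the class and member names are made up purely
for illustration -- this is not any real collector's API):

// Rough sketch of the servant/remote-reference bookkeeping described above.
#include <set>
#include <string>
#include <vector>

struct Servant { /* a local object exported to remote clients */ };

struct RemoteRef {                  // local proxy for an object on another host
    std::string target_host;        // host of the object this proxy points at
    bool finalized;
    explicit RemoteRef(const std::string& h) : target_host(h), finalized(false) {}
};

class LocalCollector {
public:
    // Servants are pinned: the local GC never reclaims them; a separate
    // eviction thread decides when they really go away.
    void pin_servant(Servant* s) { pinned_.insert(s); }

    // Snapshot of reachable hosts, pre-computed in another thread
    // before a collection cycle starts.
    void set_reachable_hosts(const std::set<std::string>& hosts) {
        reachable_ = hosts;
    }

    // During a cycle, a remote reference whose target host has become
    // unreachable is finalized even if it is still locally reachable;
    // everything else is treated as ordinary collectible garbage.
    void sweep_remote_refs(std::vector<RemoteRef*>& refs) {
        for (RemoteRef* r : refs)
            if (reachable_.find(r->target_host) == reachable_.end())
                r->finalized = true;
    }

private:
    std::set<Servant*>    pinned_;
    std::set<std::string> reachable_;
};

The point is just that servants become explicit roots, and a remote
reference carries enough information (its target host) for the sweep
to finalize it once that host drops off the reachable list.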
The hardest problem (how do you GC a servant) is obviously not
solved by the preceding method. I was not thinking that GC would be
a way to fix that. The problem of not knowing when clients have
dropped references seems to be an inherent problem in most distributed
systems (I am not sure it is realistic to have a protocol that keeps
track of all client connections on a per-servant basis).
I was just hoping, though, that local GC would be good enough
to simplify the semantics of memory allocation/deallocation
for C++ CORBA systems. For example, take a simple CORBA string. Even
managing this requires one to use string_dup() and string_free().
If you deallocate a string that an ORB has given you, you can easily
get a core dump. If you could locally GC, then you would just receive
a "reference" to that object -- no string_dup, no string_free.
With local GC, I was wondering, all this local bookkeeping that comes
with C++ CORBA might be eliminated. In Michi Henning and Steve Vinoski's
book, the authors devote large chapters to explaining the client- and
server-side C++ mappings for memory management. All this seems just too
complicated to me.
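To make that concrete, here is the kind of bookkeeping the standard
mapping requires today for a string-valued attribute. Example::Thing
is a made-up IDL interface whose name() operation is assumed to return
a CORBA string with ownership passed to the caller; string_dup(),
string_free() and String_var are from the standard C++ mapping:

// Example::Thing is a hypothetical IDL interface; the ORB-generated
// header and the ORB's own headers are assumed to be included here.
#include <iostream>

void print_name(Example::Thing_ptr obj)
{
    char* s = obj->name();         // the caller now owns the string
    std::cout << s << std::endl;
    CORBA::string_free(s);         // forget this and you leak;
                                   // free it the wrong way and you may core dump
}

// String_var hides the explicit free, but the ownership rules remain:
void print_name_var(Example::Thing_ptr obj)
{
    CORBA::String_var s = obj->name();   // takes ownership, frees in its destructor
    std::cout << s.in() << std::endl;    // in() yields a const char* for reading
    // To keep the string beyond this scope you would copy it again:
    // char* copy = CORBA::string_dup(s.in());
}

With local GC, the first version would simply collapse into reading the
string and letting the collector reclaim it; neither string_free() nor
the _var helper would be needed.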
> DGC is still a fruitful source of
> research papers. This should scare you.
I am scared of doing DGC -- you bet.
> The last project I worked on that used DGC was BEA's WebLogic Server,
> a very profitable web application server. When I left BEA, we had
> been talking seriously for several months about turning off DGC
> altogether. The Enterprise JavaBeans programming model didn't require
> DGC, even though it was nominally implemented on top of RMI, and EJB
> has almost entirely displaced RMI as the distributed programming model
> of choice for large Java apps. The popularity of EJB made it very
> tempting to kill off all of our DGC infrastructure and its horrible
> Heisenbugs.
>
> Prior to WLS, I worked on Jini (remember that?), where we effectively
> handwaved away the intractable problems of DGC in large, semi-coherent
> systems by requiring that clients explicitly maintain leases to server
> objects.
What you say above makes a lot of sense to me (unless I am totally
confused).
> There's no doubt that DGC makes programming seem nicer. Right up
> until it breaks irreproducibly in deployment or doesn't scale beyond a
> handful of participants, at which point you can take your app out back
> and shoot it.
Does Java RMI suffer from this problem? While I thought that
Java's RMI looked good, I also heard a little voice in my head saying
"this is too good to be real." Given that many people have spent much
energy over the years to tackle distributed computing problems, I wondered
whether it was realistic to believe that Java RMI simply made these
problems vanish. (I suppose I could have looked at the Java source code
... but that is a huge codebase and I was scared off by its size.)
Ji-Yong D. Chung