[gclist] Fast-allocating non-copying GC's

Geert Bosch geert@sun3.iaf.nl
Wed, 20 Mar 96 00:36:28 GMT


On Tue, 19 Mar 1996 10:58:18 PST Hans Boehm wrote:

Me: "[You can avoid black listing and stack cleaning by creating 
Me:  'pointer-free' storage pools]"

You: I think we're talking about different things here.  

I didn't read carefully enough, I'm sorry. What you were suggesting is
blacklisting the destinations of pointer-like values that are not
(yet) valid pointers. I'd only be able to implement this at
page granularity. Hopefully that will work all right.

What I was thinking of is 'blacklisting' regions of memory that
are known not to contain pointers. When the allocator is
initialized and no memory has been allocated (apart from statically
allocated memory), I'd put all memory behind a write barrier and
mark it as pointer-free. As soon as the memory is written to, the
memory block is moved into a conservatively scanned pool.

>See ftp://parcftp.xerox.com/pub/gc/papers/pldi93.ps.Z for details.

I did download most of that directory today and was busy reading
while this mail arrived. ;-)

>1) It's expensive to release and remap a page.  
Actually, I merely flag it as discardable. I'm currently implementing
the allocator on an OS/2 system, because it has a mature
multi-threading and VM implementation, and the API for controlling
memory management is rather complete and easy to use. Most Unices
are lacking on this point, and I can confirm your negative experiences
with the not-so-heavily-exercised VM mechanisms.

Fortunately, some systems (Solaris, for example) are finally improving
very fast in this area. Backward compatibility with older Unices is
very, very low on my priority list. This is an educational assignment
for me, which is different from implementing a production system.

[About concurrent GC using the 'snapshot' approach]
>That's probably not the easiest algorithm to implement.  

The main reason I liked the approach is its conceptual simplicity.
In fact I have thought of doing a first implementation by forking,
core-dumping, and analyzing the dump with an external program that
interfaces to gdb. ;-) (Actually, I like that idea.)
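Even without the core-dump detour, fork() alone already gives the
snapshot: the child gets a copy-on-write image of the heap as it was
at the fork, and can scan it at leisure while the parent keeps
mutating. A toy illustration (snapshot_sum is my own name; the "scan"
just sums the heap and reports through the exit status):

```c
#include <stddef.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sum a heap snapshot in a forked child; the parent's later writes
   are invisible to the child thanks to copy-on-write. */
int snapshot_sum(unsigned char *heap, size_t len)
{
    pid_t pid = fork();
    if (pid == 0) {                    /* child: sees the snapshot   */
        unsigned sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += heap[i];
        _exit((int)(sum & 0x7f));
    }
    heap[0] = 99;                      /* parent mutates afterwards  */
    int status;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
}
```

The child's sum reflects the heap at fork time, no matter what the
parent does in the meantime; a real collector would mark from the
roots instead of summing.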

>You seem to assume that incremental collection is only 
>important in multithreaded environments.  I view those 
>as almost completely orthogonal.

Yes, you're right. (Of course ;-)