[gclist] Memory cheats and VM (was Counting on one thumb)
Richard A. O'Keefe
ok@cs.rmit.edu.au
Fri, 17 Jan 1997 11:41:57 +1100 (EST)
I mentioned the old SETL trick of keeping a reference count to an
object and making a copy whenever you tried to mutate an object with
reference count > 1.
John Carter wrote:
> I find it hard to believe that this would be more efficient that (sic)
> brute force copying!
This is an interesting fact about you. Why do you find it hard to believe?
Suppose we keep reference counts off by one, so that
-1 => unused
0 => unique reference
>0 => shared
Lazy copy is then
    ld   R2, refcount(R1)          ; fetch the (off-by-one) reference count
    beq  R2, L1 {predict taken}    ; zero means "unique reference": skip the copy
    << code to copy R1 >>          ; shared: make a private copy before mutating
L1:
In a language with latent (= dynamic, Lisp-like) types, this can be folded in
with the type test. The point is that you almost always _don't_ make the
copy. _I_ find it hard to believe that brute force copying would usually
be cheaper than 2 instructions.
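
For concreteness, the same check in C might look roughly like this. This is
my sketch, not the SETL implementation; the names (obj, copy_obj,
prepare_for_write) and the object layout are illustrative only.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical object header using the off-by-one count above:
     * -1 = unused, 0 = uniquely referenced, >0 = shared. */
    typedef struct obj {
        long   refcount;
        size_t size;
        char   data[];            /* payload follows the header */
    } obj;

    /* Stand-in for whatever copying routine the runtime really uses. */
    static obj *copy_obj(const obj *o) {
        obj *c = malloc(sizeof *c + o->size);
        c->refcount = 0;          /* the copy starts out uniquely referenced */
        c->size = o->size;
        memcpy(c->data, o->data, o->size);
        return c;
    }

    /* The lazy-copy check: two cheap operations on the common path.
     * Only a shared object (refcount > 0) is copied before mutation.
     * In a dynamically typed runtime this test can be merged with the
     * tag/type check the mutating primitive has to do anyway. */
    static obj *prepare_for_write(obj **slot) {
        obj *o = *slot;
        if (o->refcount > 0) {    /* shared: give this reference its own copy */
            o->refcount--;        /* one reference fewer to the original */
            o = copy_obj(o);
            *slot = o;
        }
        return o;                 /* now safe to mutate in place */
    }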
> Or is this a relict (sic) of memory short days like ye olde
> Vax BLISS?
I have used BLISS-10 on a DEC-10, and I've seen BLISS-32.
They were even lower level than C. You can use BLISS-32 on a VAX
with hundreds of megabytes of memory. I don't see any connection here.
For what it's worth, I believe SETL was originally implemented on
CDC 6600s or something of the sort.
> Or is my intuition here out?
Your intuition certainly clashes violently with mine.
> Or did this just apply to
> "big" objects and little ones like int&double were copied immediately?
Come off it!
There is *never* any need to copy *immutable* 'objects' of any kind.
Integers and floating point numbers are intrinsically immutable.
Copying (and copy optimisation) only apply to *mutable* objects,
where there is some operation that can change some *part* of an object.
"Scalar" values like Booleans, numbers, characters, &c never need copying
because they are not mutated, and this applies no matter _what_ their
size. It doesn't matter whether these values are unboxed or boxed.
Lisp bignums and ratios don't need to be copied either.
> > I believe that the current
> > language SETL2 still gets the same effect; by what means I have no idea.
> I was tossing ideas about of looking at the dataflow graph and only
> actually letting the compiler put in the copy if possibly needed. But
> I regarded this as a "finishing touch" rather than a core
> implementation issue. One could also use COW. In fact that was an idea
> I have been contemplating but I don't know if it is practical.
But copy-on-write says "copy this object whenever it is modified, even if
there is only one reference to it". This obviously does _more_ work than
the reference counted method.
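
To make that contrast concrete, here is a toy illustration (mine, not
anything from SETL or from the post); the names toy, cow_write and rc_write
are made up. The COW flag gets set whenever a new reference might have been
handed out, and the next write then copies unconditionally, even if only one
live reference remains; the counted version copies only when the count
proves the object really is shared.

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct toy {
        bool cow_flag;            /* COW: "copy me before the next write"  */
        long refcount;            /* counted scheme: off-by-one as above   */
        int  payload;
    } toy;

    static toy *clone(const toy *o) {
        toy *c = malloc(sizeof *c);
        *c = *o;
        c->cow_flag = false;
        c->refcount = 0;
        return c;
    }

    void cow_write(toy **slot, int v) {
        if ((*slot)->cow_flag)    /* may copy a uniquely referenced object */
            *slot = clone(*slot);
        (*slot)->payload = v;
    }

    void rc_write(toy **slot, int v) {
        toy *o = *slot;
        if (o->refcount > 0) {    /* copies only when genuinely shared */
            o->refcount--;
            *slot = clone(o);
        }
        (*slot)->payload = v;
    }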
> How expensive is it to play with VM?
It depends on the operating system, and it depends whether you are
measuring _performance_ costs or _development_ costs.
> Could one implement every aggregate object as a separate VM
> segment on a i*86? Anyone with any experience on that?
The trouble is that an 80*86 hasn't got very _many_ segments.
Even on the high end machines, segment registers are 16 bits,
and typically user software can use half of the segment numbers.
Since you'll probably need a code, stack, and data segment to get off
the ground, and it's handy to have a spare, this means programming
with a limit of 32764 objects. That seems rather small.
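
The arithmetic, spelled out as a throwaway check (using the numbers as given
above, not verified against any particular 80x86 manual):

    #include <stdio.h>

    int main(void) {
        unsigned selectors  = 1u << 16;       /* 16-bit segment registers        */
        unsigned user_share = selectors / 2;  /* "half of the segment numbers"   */
        unsigned reserved   = 4;              /* code, stack, data, plus a spare */
        printf("%u objects\n", user_share - reserved);   /* prints 32764 */
        return 0;
    }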
> (Yip, I know its stretching the design paradigm, but it would be
> very useful. All that bounds checking, indirection, etc etc all
> done in hardware and all being done every time in any case.
> Just think, no need to fiddle your users pointers after GC, just
> muck with the segment descriptors!)
Just think about being stuck on one machine, not being able to use an
Alpha or a PowerPC or an R10000 or ...
Henry Baker [mentioned hash consing].
I don't quite see how hash consing helps with mutable objects.
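
For anyone who hasn't met the term, here is a minimal hash-consing sketch
(my own, for illustration; nothing here is Baker's code): cons interns
structurally equal pairs in one table, so "equal" implies "identical" and
duplicates are never built. It also shows why the technique sits badly with
mutation: writing to a hash-consed cell would silently change every
expression that happens to share it.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct pair {
        struct pair *car, *cdr;
        struct pair *next;        /* chain within one hash bucket */
    } pair;

    #define NBUCKETS 1024
    static pair *table[NBUCKETS];

    static size_t hash(const pair *car, const pair *cdr) {
        return ((uintptr_t)car * 31u + (uintptr_t)cdr) % NBUCKETS;
    }

    /* Return the unique pair with these fields, creating it if necessary. */
    pair *cons(pair *car, pair *cdr) {
        size_t h = hash(car, cdr);
        for (pair *p = table[h]; p != NULL; p = p->next)
            if (p->car == car && p->cdr == cdr)
                return p;         /* already interned: share it */
        pair *p = malloc(sizeof *p);
        p->car = car;
        p->cdr = cdr;
        p->next = table[h];
        table[h] = p;
        return p;
    }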