Could we use multiple stacks?
Eric W. Biederman
Thu, 16 May 1996 06:54:26 -0500
> As part of that I have implemented a rough approximation of my ideas.
> I've written enough code now that it mostly works. More than has gone
> before, but I still think not quite enough to use as the lll.
Well, please do publish whatever you're doing:
it may be a good starting point for us all.
About Stacks Building up from the Forth Model
(Fare seems to have been confused about my ideas,
and I know I have changed my position since I
first posted. )
ANS forth has 4 stacks
The Return Stack
The Data Stack
The Floating Point Stack
The Dictionary Data Stack ( you use allot to allocate or deallocate from it )
I suggest we add a
Root Pointer Stack
The argument for having a floating point stack is that there is a
whole class of programs that do intensive floating point operations.
Windowing systems and GUI's seem to be among those. Additionally,
floating point registers are a very real part of practically all
machine architectures, and it is simpler to generate good floating
point code if you don't commonly put your numbers on the data stack,
as that loses a lot of type information.
With regards to Forth's Dictionary Data Stack, we will probably replace
that with a garbage collected heap, but how it could be implemented in
an ANS compatibility layer is an interesting question. It is my
intention for the moment to see how close to ANS Forth we can be and
still have a garbage collector.
The separation of the return and data stacks actually is a plus in
several ways; the big one is that you aren't always tripping over your
return addresses when you want to use the stack. It also gives a
great deal more symmetry between word arguments and return values.
With regards to my suggested root pointer stack, here is how I think
it will be useful. First, a garbage collector has to have a set of
root pointers to start its collection with. Having a stack reserved
for this purpose seems to be an almost ideal way to manipulate and
implement that aspect of the garbage collector. In addition, some
applications heavily manipulate anonymous objects, with pointers
sprawled all over the place. In such pointer-intensive applications,
any single pointer may be the only pointer to an object, so passing
those pointers on the root pointer stack would be ideal.
To conclude, I don't think it's really useful to multiply stacks.
I think that one or two stacks,
plus a fast and efficient general-purpose heap,
are more than enough.
The space at the end of the heap could be used as a scratch stack, too,
which may or may not count as one of the above-mentioned stacks.
Surely, the heap could be multiplexed according to the size/type
of the reserved data, but the user shouldn't rely on it.
Well, in considering how to implement continuations on top of a Forth
system with multiple stacks (data & return), without having to copy the
stack and then restore it, I figured out how to implement frames and
spaghetti stacks (more or less) for a Forth system. The idea follows.
First, abandon the idea of the function frame as the basic unit used
for continuations. Instead, introduce the idea of stack continuations.
A stack continuation is a saved place upon a stack, subject to all of
the usual restrictions of continuations, but implemented just for
that stack. A normal Scheme continuation would then be the collection of
the continuations of all the stacks a function uses. A stack frame is
generated whenever a continuation is generated.
All pushes are at the top of the stack, but if you happen to be in a
frame that isn't at the top of the stack, your pops work fine; when
you next push, you generate a new frame. Carried to an extreme, all
stacks could simply be various continuations at different states of
the same original stack. A particular advantage of this kind of
arrangement is that garbage collecting stack frames is relatively easy.
But I won't sanction it for now, until I get a decent compiler
built instead of the thing I've got now, which is correct but would
make a pessimiser jealous of the quality of the code it generates.
The other thing I am considering, which will help to some extent, is a
calling convention that specifically buffers stacks and global
variables in registers. When compiling, the compiler simply looks at
the description of the function to see which variables need to be in
which registers before the function is called. This has the potential
to make very good use of the registers, especially if someone ever
writes a global optimizer. Currently I support it, but my compiler
can't generate code that expects things in registers.
Also, the return stack, if separated from the heap scratchpad,
and unless specifically allocated for the current thread,
could be limited to a fixed number of pages:
any kind of deep recursion should be done across real-time checkpoints,
and hence must regularly allow GC.
Copying the stack (the usual method for having light-weight continuations)
is too expensive for such deep recursion,
and heap-based recursion will be much cheaper
(remember: using the GC'ed heap is cheap),
while specifically reserving non-shared space for the stack
would be a good idea if the stack is going to be very deep.
Checkpoints are a good idea.
To me stacks are useful for two things:
visible user interaction,
and caching of temporary low-level data.
In between, high-level programming is better done with
an expressive statically strongly typed language,
where the low-level stacks are invisible,
and high-level stacks are not particularly efficient.
Of course, let's not be stopped by our possible disagreements:
let's try out everyone's ideas until some of them
(not necessarily all from the same person) prove better than the others...
> Besides, it really doesn't help with typing things in memory. If we can
> get this to be done the same way as on the stack, we gain an enormous
> Right the object system ( What provides the types ), and how it is
> implemented is what is important. What we really need to provide is a
> reflexive (how objects are implemented should be totally up to the
> user) interface to the garbage collector.
Surely, the HLL needs to keep track of object types;
but that the LLL should remember more than is strictly necessary for GC
is doubtful to me: compiled code should be efficient,
and whenever dynamic reflection is used,
interpretation and/or decompilation (for heavily used routines
-- imposes restrictions on compilation of decompilable routines)
is generally quite an acceptable overhead.
I was thinking that there should be some kind of interface like:
GC says to an object: Give me your pointers
GC says to an object: These are your new pointer values
GC says move to place x: Object moves
GC asks what kind of things do these pointers point to:
> I strongly suspect that garbage collection is not the right solution
> for every memory management task.
Indeed, GC is NOT the right solution for EVERY mm task,
and there should be ways for the LLL to work without it.
*Static* memory management (again, done by the compiler
with more or less human help) is often a good/necessary solution
to many problems, *when possible*
(particularly when there's a hard real-time/real-space requirement).
Actually, real-time requirements can be met better with a garbage
collector than with user memory management (unless it has a trivially
simple usage pattern). The reason is that a real-time collector is
completely bounded in the time it takes, while normal memory
management along with generic reference counting is susceptible to
unbounded pauses.
However, a GC is just needed anyway for intrinsically *dynamic* problems,
so we must implement it, and offer support for it inside the (extensible) LLL.
A small note:
Forth is technically more extensible than LISP.
There is a reason you never heard of a Forth macro.
If I push Forth too much it's only that in doing implementation work
and experiments I'm quick on my way to memorizing the standard and
every advantage and disadvantage in the whole setup.
(The world is going fuzzy, someone needs to GC all of those fuzzy
thoughts. )