LLL stack questions
Chris Harris
chharris@u.washington.edu
Tue, 20 Dec 1994 10:02:41 -0800 (PST)
On Tue, 20 Dec 1994, Mike Prince wrote:
[Mike's new Cell scheme deleted]
> On Mon, 19 Dec 1994, Chris Harris wrote:
>
> > Do we support a fixed number of stacks (as in Forth), or as many
> > as the programmer wishes?
> > I'd tend to go with as many stacks as are necessary. This raises the
> > question of how they are identified (number and/or name?), but this isn't
> > too difficult a question.
>
> I say only have one active "data" stack, but also have access to many
> data blocks. At any time you can make any block your stack.
I'm assuming that you're doing this for efficiency reasons, am I right?
It doesn't seem that much faster, though, and it could reduce the power
of the system by a lot. For a simple example, how could you pop an item
off one data stack and push it onto another? Sounds like in your
system, you'd have to use the agent's stack as a go-between, which would
slow it down.
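Here's roughly the transfer I mean, sketched in Python just for illustration
(all names made up):

```python
# Two data stacks; a direct transfer is one pop and one push.
a = [1, 2, 3]
b = []

b.append(a.pop())  # direct: top of a moves straight to top of b

# With only one active stack at a time, the same move needs a go-between:
active = a
temp = active.pop()   # pop while a is the active stack
active = b            # switch the active stack (the extra step)
active.append(temp)   # now push onto b
```

The direct form is one instruction pair; the single-active-stack form pays
for a stack switch in the middle every time.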
> I'm for grouping code and data into units called Cells. Each cell has an
> index (see above). To access a data block you simply provide its offset
> into the index (an element number).
So then blocks are accessed by default as linear chunks of memory. Could
work, but if you're going to bother, why not allow operations (ADD, etc.)
to work anywhere within a block?
> To make it more user friendly just have your favorite compiler alias the
> number with a name (they all boil down to numbers anyway, right?) Or
> maybe your compiler will manage them enough that you never see them as
> parts of an index.
Sounds good....
> > Are all elements in a stack the same size, or do the instructions
> > specify what size their operands are?
> > The latter would be harder to implement and possibly to program for,
> > but it might be more powerful. (keyword: might)
>
> Elements on the stack only occupy as much space as needed (chars get 8
> bits, floats 64, etc) I think it will be marginally harder to
> implement. Harder to program for at the low-level, but most of the time
> we'll be buffered by a medium or high-level language, and to them it
> doesn't matter. I do think it would be more powerful.
And are you sure this won't have any performance impact? You're our main
performance guy, right? =)
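Mike's self-sizing elements could look something like this sketch (Python,
hypothetical encoding: each element carries its own width, chars one byte,
floats eight):

```python
import struct

stack = bytearray()   # raw element bytes, packed end to end
sizes = []            # parallel record of each element's width

def push(fmt, value):
    # fmt is a struct format code: 'b' = 1-byte char, 'd' = 8-byte float
    sizes.append(struct.calcsize(fmt))
    stack.extend(struct.pack(fmt, value))

def pop(fmt):
    n = sizes.pop()
    assert n == struct.calcsize(fmt)  # the element knows its own size
    raw = stack[-n:]
    del stack[-n:]
    return struct.unpack(fmt, raw)[0]

push('b', 65)     # a char occupies 1 byte
push('d', 3.14)   # a float occupies 8
```

The bookkeeping (the sizes list here) is the marginal implementation cost
Mike mentions; a fixed-width stack wouldn't need it.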
> > Do we allow access to a stack only by pushing and popping, or do we
> > permit accessing them at any point needed?
>
> Absolutely for both ways.
I was just thinking about how much of a pain it is to represent certain
structures (like queues) as stacks. Might we then want to extend our
definition of a stack to a hybrid that allows pushing and popping from
both ends? It couldn't be that much harder to support, and it wouldn't
be any slower.... (Guess it could make for some strange syntax, though.)
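The hybrid I have in mind is basically a double-ended queue; Python's
standard library happens to have one, which shows the idea:

```python
from collections import deque

# A "hybrid stack": push and pop at either end, so the same
# structure serves as a plain stack or as a queue.
d = deque()
d.append(1)       # push on the top end
d.append(2)
d.appendleft(0)   # push on the bottom end

top = d.pop()        # stack behaviour: comes off the top
bottom = d.popleft() # queue behaviour: comes off the bottom
```

Both ends are O(1), which is why I'd claim it needn't be any slower than a
one-ended stack.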
> > Do the primitive operations (add, subtract, swap, etc.) require their
> > operands to be on or near the top of a stack (like Forth), or can they
> > access anywhere inside the stack?
>
> I'd say they have to be at the top of the stack, addressing is implicit.
Implicit addressing = you know where the operands are without any silly
adding to a base, right? Sounds decent, but it might require a lot of
swaps to get there. I guess I'm not thinking in a very stack mind-set,
though. Have to download Forth when my new computer gets here....
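The swaps I'm worried about look like this (a toy Forth-style sketch in
Python; SWAP and ADD modelled on their usual stack-machine meanings):

```python
# Implicit addressing: ADD always takes the top two elements, so an
# operand buried one deep costs a SWAP first.
stack = [10, 3, 5]   # 5 is on top; suppose we want 3 + 5... no, 10's safe:
                     # we want to add the 3 and the 5, but 3 is buried.

def swap(s):
    s[-1], s[-2] = s[-2], s[-1]   # exchange the top two elements

def add(s):
    s.append(s.pop() + s.pop())   # consume top two, push the sum

swap(stack)  # bring the 3 next to the top: [10, 5, 3]
add(stack)   # [10, 8]
```

One SWAP here; a deeply buried operand needs a chain of them, which is the
cost of never writing an explicit address.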
> > Should we allow recursive stacks? (Should we allow stacks to enclose
> > stacks to enclose stacks to enclose stacks, etc.?)
>
> I'm against this because I don't see an efficient way of managing our
> resources while allowing this sort of complexity (please don't flame me
> for this, a little thing like recursion can create a big set of
> headaches, just tell me how you'd solve it =)
>
> NOTE: I'm not against a HLL compiler building those out of our
> primitives, just that we wouldn't support them in the LLL.
What's the difference between implementing it in the HLL or the LLL?
It'll be equally slow either way, will it not? It's not really that
complex, either. You just write more generalized stack-management code,
so that it can be applied at any level. The main problem I can see is
how to make an intelligent syntax for it.
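By "generalized stack-management code" I mean something like this sketch
(Python; the point is that the same push/pop works at every level):

```python
# A stack element may itself be a stack, and the one pair of
# primitives manages both levels identically.
def push(s, x):
    s.append(x)

def pop(s):
    return s.pop()

outer = []
inner = []
push(inner, 'a')
push(inner, 'b')
push(outer, inner)   # the enclosing stack sees the sub-stack as one unit
push(outer, 42)

top = pop(outer)     # 42, an ordinary element
sub = pop(outer)     # the whole sub-stack comes off as a single element
```

No special-case code anywhere; the recursion falls out of treating a stack
as just another element.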
BTW, does the LLL get compiled into a binary format, or is it left as
text? I would assume the former, but I'm just curious.
> > While this might be more difficult to support than single-level
> > stacks, it might make things easier and/or more efficient in the long
> > run. Recursive stacks would allow us to break a problem down in a
> > fairly logical way, without having quite as much overhead on the next
> > level up. (If I have a data structure that is represented as an enclosed
> > stack, the enclosing stack can look at it as one unit, rather than a
> > number of separate elements.)
>
> I'm scared of having to parse through a stack hierarchy to resolve simple
> memory references.
Good concern, although this doesn't necessarily mean slow, if we did it
right. For example, in your cell idea, you could load a sub-stack as
your active stack, and you'd then be able to access it with the same
blazing speed as a first-level stack.
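What I mean by "load a sub-stack as your active stack" is just re-aiming a
reference, which costs nothing (hypothetical cell layout, Python sketch):

```python
# A cell's data blocks indexed by element number, any of which
# can be made the active stack.
cell = {0: [1, 2, 3], 1: ['x', 'y']}   # index -> data block

active = cell[0]      # block 0 is the active stack
active = cell[1]      # switch: just a reference, no copying, no parsing
active.append('z')    # pushes straight into block 1, full speed
```

The hierarchy only gets parsed once, at switch time, not on every memory
reference.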
> > Should our primitive language provide VM for stacks? (For example,
> > if a stack is larger than physical memory, the items near the bottom
> > could be paged to disk until required.)
>
> I'm against this one too. I have some blasphemous thoughts like dumping
> VM. If our cells are fine grained enough we could swap them out, instead
> of adding another layer to our memory management. We'd also be using
> just one algorithm for system optimization (least used stuff out!) and
> eliminating the necessity for file systems (view the hard disk as a slow big
> computer) or use a neighboring computer as a backing store.
Okay, but what if the cells are not fine-grained enough? Some problems
can only be broken into fairly large pieces, in which case VM of some sort
may be required. Having to call another cell just because it won't fit in
memory as one piece is kinda silly. (Especially if there's a function
taking up 90% of the space that gets called only every few hours.)
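For reference, Mike's single "least used stuff out" policy over whole
cells would be an LRU scheme, something like this sketch (Python; all
names hypothetical):

```python
from collections import OrderedDict

class CellStore:
    """One policy for everything: the least-used cell gets swapped out."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # cell name -> data, in use order

    def touch(self, name, data=None):
        if name in self.resident:
            self.resident.move_to_end(name)       # mark recently used
            return
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)     # swap out least-used cell
        self.resident[name] = data

store = CellStore(capacity=2)
store.touch('a', 1)
store.touch('b', 2)
store.touch('a')        # 'a' is now the most recently used
store.touch('c', 3)     # evicts 'b', the least-used cell
```

My objection stands, though: this only works if every cell individually
fits in memory, which is exactly the fine-grained assumption.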
How are we going to manage really fine-grained cells anyway? Let's say
I've broken my problem up into 1,000 nice, well-defined pieces. It's not
going to be efficient for the OS to manage them all, if 90% only interact
with themselves, so there needs to be some sort of sub-cell/meta-cell
idea. You can then define the public interface as a single cell, which
encloses all the pieces. Let me guess; you disagree? =)
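The sub-cell/meta-cell idea I'm picturing, as a quick sketch (Python;
entirely hypothetical structure):

```python
# Many fine-grained pieces wrapped in one enclosing cell.  The OS
# schedules only the enclosing cell; the pieces interact among
# themselves without its involvement.
class MetaCell:
    def __init__(self):
        self.pieces = {}                  # private sub-cells

    def define(self, name, fn):
        self.pieces[name] = fn

    def call(self, name, *args):          # the single public interface
        return self.pieces[name](*args)

app = MetaCell()
for i in range(1000):
    app.define(f'piece{i}', lambda x, i=i: x + i)
```

From the outside, `app` is one cell with one interface; the thousand pieces
never show up in the OS's bookkeeping.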
> > Is code just a special type of stack, and thus agents can play with
> > the code they're able to execute?
>
> Absolutely not. Though they can build up some LLL code, and ask the
> kernel/linker to attempt to integrate it.
Any particular reason? Are there not enough security features at the
low level to prevent any damage this could cause? If not, then we should
reconsider. Security by simply making things harder to get at doesn't work.
> > I'm also not clear as to how an agent/toolbox/program can communicate
> > with the outside world. Is there a language primitive to communicate
> > with external objects, or is this accomplished only through shared stacks?
>
> If you mean outside world as another cell on another computer, all you do
> is put some data in either the agent stack or one of its data blocks and
> "call" the distant cell by name. Your agent is transported to that cell
> and begins executing its method. That method now has the agent's stack
> and data blocks within its scope. It might pull some data from the
> stack, steal some memory blocks from the index, and push some results,
> then issue a return. Now the agent is back where it started, in your
> local cell.
Okay, that makes sense. Doesn't sound too secure, but....
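Just to check I've got the protocol right, here's how I read it (Python
sketch; cell names and everything else made up):

```python
# Mike's agent call: push arguments on the agent's stack, "call" a
# distant cell by name, its method works on that stack, then returns.
cells = {}   # name -> method, the distant side's dispatch table

def register(name, method):
    cells[name] = method

def agent_call(agent_stack, cell_name):
    # the distant method gets the agent's stack within its scope
    cells[cell_name](agent_stack)
    return agent_stack          # agent comes home with results on top

def adder(stack):
    stack.append(stack.pop() + stack.pop())   # pull data, push a result

register('math.add', adder)
result = agent_call([2, 3], 'math.add')
```

Which is exactly where my security worry comes from: the distant method
gets the whole stack, not just the arguments you meant to send.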
>
> >
> > Any thoughts are welcome and appreciated....
>
> Likewise.
>
> I hope I don't sound too stuck in my ways. I look forward to hearing
> other opinions on the above.
>
> Mike
Nah... You just sound like you want to create the fastest OS ever in
existence. =)
-Chris