LLL stack questions
Mike Prince
mprince@crl.com
Tue, 20 Dec 1994 17:28:50 -0800 (PST)
On Tue, 20 Dec 1994, Chris Harris wrote:
> On Tue, 20 Dec 1994, Mike Prince wrote:
>
> > I say only have one active "data" stack, but also have access to many
> > data blocks. At any time you can make any block your stack.
>
> I'm assuming that you're doing this for efficiency reasons, am I right?
On the nose. I played with programming it with every block being a
stack, but found the overhead was too high.
> It doesn't seem that much faster, though, and it could reduce the power
> of the system by a lot. For a simple example, how could you pop an item
> off one data stack and push it onto the other?
Change your active stack to the latter, and peek from the first (now a
data block). But that's not a good solution either. What we really need
to ask is what kind of operations we'll actually be doing. I am not a
code wizard/wheel/guru with an answer to that question. We need to
either locate some research (I know it's out there) based on code
instruction profiling, or experiment a little to come up with the best mix.
The capabilities you provide affect how they're used. If we made every data
structure usable as a stack, then they probably would be used that way a lot.
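To make that concrete, here's a rough C sketch of the one-active-stack
idea; every name in it (vm_t, set_active, peek, the sizes) is invented
for illustration, not a proposal for the real LLL:

#include <stdio.h>

#define NBLOCKS 8      /* hypothetical: data blocks per agent */
#define BLOCKSZ 256    /* hypothetical: cells per block       */

typedef struct {
    long cells[BLOCKSZ];
    int  top;                  /* only honoured while this block is active */
} block_t;

typedef struct {
    block_t  blocks[NBLOCKS];
    block_t *active;           /* exactly one block is "the" data stack */
} vm_t;

/* Make any block the active data stack. */
static void set_active(vm_t *vm, int i) { vm->active = &vm->blocks[i]; }

static void push(vm_t *vm, long v) { vm->active->cells[vm->active->top++] = v; }
static long pop(vm_t *vm)          { return vm->active->cells[--vm->active->top]; }

/* Peek into a block that is *not* the active stack (plain indexed access). */
static long peek(vm_t *vm, int blk, int i) { return vm->blocks[blk].cells[i]; }

int main(void)
{
    vm_t vm = {0};

    set_active(&vm, 0);
    push(&vm, 42);              /* block 0 is the stack                     */

    set_active(&vm, 1);         /* now block 1 is the stack...              */
    push(&vm, peek(&vm, 0, 0)); /* ...and block 0 is just memory we can read */

    printf("%ld\n", pop(&vm));  /* prints 42 */
    return 0;
}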
> Sounds like in your
> system, you'd have to use the agent's stacks as a go-between, which would
> slow it down.
It's true we couldn't make some optimizations like using registers to
pass arguments, but the only other widely accepted, fast, dynamic way
(here come the flames, I can see em now!) is to use the stack. C does
this and is reasonably fast. Again, please correct me if there is a
better way which provides the versatility of a stack.
> So then blocks are accessed by default as linear chunks of memory. Could
> work, but if you're going to bother, why not allow operations (ADD, etc.)
> to work anywhere within a block?
It would make our instruction set much larger and more complex. See RISC
for an example, substituting stacks for registers.
> > > Are all elements in a stack the same size, or do the instructions
> > > specify what size their operands are?
> > > The latter would be harder to implement and possibly to program for,
> > > but it might be more powerful. (keyword: might)
> >
> > Elements on the stack only occupy as much space as needed (chars get 8
> > bits, floats 64, etc.). I think it will be marginally harder to
> > implement. Harder to program for at the low-level, but most of the time
> > we'll be buffered by a medium or high-level language, and to them it
> > doesn't matter. I do think it would be more powerful.
>
> And are you sure this won't have any performance impact? You're our main
> performance guy, right? =)
I'm for performance, but I'm not saying I have all the answers. Anyway,
the design can be done just as easily with a fixed-size stack if we find
that multi-sized types exact too much of a performance penalty. :(
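For what it's worth, here's a little illustrative C sketch of what
variable-width elements could look like (the instruction names the
operand size, the stack just holds bytes); all names and sizes are made
up:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint8_t stack[1024];
static size_t  sp = 0;            /* byte-granular stack pointer */

/* The instruction says how wide its operand is; the stack just stores bytes. */
static void push_bytes(const void *src, size_t n) { memcpy(stack + sp, src, n); sp += n; }
static void pop_bytes(void *dst, size_t n)        { sp -= n; memcpy(dst, stack + sp, n); }

int main(void)
{
    int8_t c = 'A';               /* occupies 1 byte on the stack  */
    double f = 3.14;              /* occupies 8 bytes on the stack */

    push_bytes(&c, sizeof c);
    push_bytes(&f, sizeof f);

    double f2; int8_t c2;
    pop_bytes(&f2, sizeof f2);
    pop_bytes(&c2, sizeof c2);

    printf("%c %g\n", c2, f2);    /* prints "A 3.14" */
    return 0;
}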
> > > Do we allow access to a stack only by pushing and popping, or do we
> > > permit accessing them at any point needed?
> >
> > Absolutely for both ways.
>
> I was just thinking about how much of a pain it is to represent certain
> structures (like queues) as stacks. Might we then want to extend our
> definition of stack to be a hybrid that allows pushing and popping from
> both ends? It couldn't be that much harder to support, and it wouldn't
> be any slower.... (Guess it could make some strange syntax, though.)
It would add another pointer, which is relatively minor. The big thing
we're trying to avoid is having a large base instruction set.
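Something like this hypothetical sketch (in C, with invented names) is
all I picture: one extra index and four entry points, so the same block
works as a stack or a queue:

#include <stdio.h>

#define CAP 256

/* A stack that can be pushed/popped at both ends -- a deque over a ring buffer. */
typedef struct {
    long cells[CAP];
    int  head;       /* index of first element            */
    int  count;      /* number of elements currently held */
} destack_t;

static void push_top(destack_t *s, long v) {
    s->cells[(s->head + s->count++) % CAP] = v;
}
static long pop_top(destack_t *s) {
    return s->cells[(s->head + --s->count) % CAP];
}
static void push_bottom(destack_t *s, long v) {
    s->head = (s->head + CAP - 1) % CAP;
    s->cells[s->head] = v;
    s->count++;
}
static long pop_bottom(destack_t *s) {
    long v = s->cells[s->head];
    s->head = (s->head + 1) % CAP;
    s->count--;
    return v;
}

int main(void)
{
    destack_t s = {0};

    /* Used as a plain stack... */
    push_top(&s, 1); push_top(&s, 2);
    printf("%ld\n", pop_top(&s));                        /* 2   */

    /* ...or as a queue: push on top, pop off the bottom. */
    push_top(&s, 3);
    printf("%ld %ld\n", pop_bottom(&s), pop_bottom(&s)); /* 1 3 */
    return 0;
}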
> > > Should we allow recursive stacks? (Should we allow stacks to enclose
> > I'm against this because I don't see an efficient way of managing our
> What's the difference between implementing it in the HLL or the LLL?
> It'll be equally slow either way, will it not? It's not really that
> complex, either. You just write more generalized stack-management code,
> so that it can be applied at any level. The main problem I can see is
> how to make an intelligent syntax for it.
If it's in the HLL, the kernel and LLL don't inherit the "baggage" of
supporting this if it goes out of fashion. The problem I see is quickly
resolving memory references. If you have to walk a tree, that will not only
slow you down but may cause the CPU to snoop all over memory, crushing
your data cache. In the back of my mind I'm trying to keep as much data
together as possible (it's "be kind to caches" month!).
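Here's a toy C comparison of what worries me; the names and layout are
invented, but it shows the difference between a flat block-table lookup
and a per-level pointer chase through nested stacks:

#include <stddef.h>
#include <stdio.h>

#define NBLOCKS 64

typedef struct block { long *cells; } block_t;

/* Flat scheme: block number + offset, resolved with one indexed load. */
static long load_flat(block_t table[NBLOCKS], int blk, size_t off)
{
    return table[blk].cells[off];
}

/* Nested scheme: each level of the path selects a sub-stack to descend into. */
typedef struct nstack {
    struct nstack **children;   /* enclosed stacks   */
    long           *cells;      /* leaf data, if any */
} nstack_t;

static long load_nested(nstack_t *root, const int *path, int depth, size_t off)
{
    nstack_t *s = root;
    for (int i = 0; i < depth; i++)   /* one dependent load per level */
        s = s->children[path[i]];
    return s->cells[off];
}

int main(void)
{
    long data[4] = { 10, 20, 30, 40 };

    /* Flat: one table entry, one lookup. */
    block_t table[NBLOCKS] = { [3] = { data } };
    printf("%ld\n", load_flat(table, 3, 2));         /* 30 */

    /* Nested: the same datum, reached through a descent. */
    nstack_t leaf     = { NULL, data };
    nstack_t *kids[1] = { &leaf };
    nstack_t root     = { kids, NULL };
    int path[1]       = { 0 };
    printf("%ld\n", load_nested(&root, path, 1, 2)); /* 30 */
    return 0;
}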
> BTW, does the LLL get compiled into a binary format, or is it left as
> text? I would assume the former, but I'm just curious.
Absobinarilyso!
> > > While this might be more difficult to support than single-level
> > > stacks, it might make things easier and/or more efficient in the long
> > > run. Recursive stacks would allow us to break a problem down in a
> > > fairly logical way, without having quite as much overhead on the next
> > > level up. (If I have a data structure that is represented as an enclosed
> > > stack, the enclosing stack can look at it as one unit, rather than a
> > > number of separate elements.)
> >
> > I'm scared of having to parse through a stack hierarchy to resolve simple
> > memory references.
>
> Good concern, although this doesn't necessarily mean slow, if we did it
> right. For example, in your cell idea, you could load a sub-stack as
> your active stack, and you'd then be able to access it with the same
> blazing speed you could access a first-level stack.
Whoops! I didn't read far enough ahead (sorry). Yes, for changing your
active stack, walking a tree is no biggie. But if our code is jumping
around in data memory, then we're in trouble. Please don't read me as
being dead against this idea. Actually, I'm trying to write my code to
support it (just in case). I, like others, am fishing for the best
ideas :)
> Okay, but what if the cells are not fine-grained enough? Some problems
> can only be broken into fairly large pieces, in which case VM of some sort
> may be required. Having to call another cell just because it won't fit in
> memory as one is kinda' silly. (Especially if there's a function taking
> up 90% of the space that gets called only every few hours.)
I'm hoping cell code will typically be under a few K. Data might
balloon, but by factoring we could control it. The nice thing is
evolution. Programming styles that work would get used (i.e. SoftCo
writes well-factored code and breaks data into reasonable sizes managed
by cells, so their software runs fast; MicroPros writes huge apps that are
memory hungry, so their software crawls along. Who'll stay in business
longer?) I hope I'm not missing the point on some problems which must
have large pieces and CANNOT be factored, am I?
> How are we going to manage really fine-grained cells anyway? Let's say
> I've broken my problem up into 1,000 nice, well-defined pieces. It's not
> going to be efficient for the OS to manage them all, if 90% only interact
> with themselves, so there needs to be some sort of sub-cell/meta-cell
> idea. You can then define the public interface as a single cell, which
> encloses all the pieces. Let me guess; you disagree? =)
You know me too well. But you are right. In my version each cell has a
list of links it regularly uses. By using this kernel-level information
the OS can make informed decisions. One cell may have the responsibility
of representing a "group", but the only thing that makes it special from
the OS level is that it's assigned the "name" associated with that group.
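Roughly this kind of per-cell bookkeeping is what I mean (a minimal C
sketch, all names invented for illustration):

#include <stdio.h>

#define MAX_LINKS 16

typedef struct cell {
    const char  *name;              /* only the group's "front" cell gets the name */
    struct cell *links[MAX_LINKS];  /* cells this one regularly calls              */
    int          nlinks;
    /* code, stacks, data blocks ... */
} cell_t;

static void add_link(cell_t *from, cell_t *to) { from->links[from->nlinks++] = to; }

int main(void)
{
    cell_t front = { "mailer", {0}, 0 };   /* represents the group to the outside */
    cell_t parse = { NULL,     {0}, 0 };   /* internal pieces: no public name     */
    cell_t store = { NULL,     {0}, 0 };

    add_link(&front, &parse);
    add_link(&parse, &store);

    /* The OS can use these link lists to keep chatty cells together. */
    printf("front has %d regular link(s)\n", front.nlinks);
    return 0;
}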
> > > Is code just a special type of stack, and thus agents can play with
> > > the code they're able to execute?
> >
> > Absolutely not. Though they can build up some LLL code, and ask the
> > kernel/linker to attempt to integrate it.
>
> Any particular reason? Are there not enough security features at the
> low level to prevent any damage this could cause? If not, then we should
> reconsider. Security by simply making it harder to get at doesn't work.
There are a couple of reasons. First of all, any code in any system
maintains links/pointers/etc. to other pieces. If you let someone tamper
with them, then you risk invalidating them.
We can ease the security overhead by eliminating the differentiation
between user and kernel code. Make it all kernel. If we wrote the code
ourselves (read: final-compile) then we know it's safe. We can also
inline hints in the code about how to link, or return information that
will optimize our execution times.
Traditional security creates fences around your code and data to prevent
you from tampering where you don't belong. Our new security (I think Fare's
thinking along the same lines) is to have safe code. You can then run it
on a CPU that does not have the hardware traditional security needs, or a
CPU maker could remove that hardware to make the chip cheaper, smaller,
faster, etc.
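Purely as a sketch of the idea, and with every name and opcode rule
invented on the spot, the flow could look like this: code only enters
the system through a kernel check, after which it runs unfenced:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned char lll_insn;

/* Stand-in verifier: accept only opcodes we have decided are safe. */
static bool lll_verify(const lll_insn *code, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (code[i] > 0x3f)          /* hypothetical: opcodes above 0x3f are privileged */
            return false;
    return true;
}

/* The only way code enters the system: checked at "final compile" time,
 * after which it runs as trusted kernel code with no per-access checks. */
static int kernel_integrate(const lll_insn *code, size_t n)
{
    if (!lll_verify(code, n))
        return -1;                   /* rejected: never becomes executable */
    /* ... link into the cell's method table here ... */
    return 0;
}

int main(void)
{
    lll_insn good[] = { 0x01, 0x02, 0x03 };
    lll_insn bad[]  = { 0x01, 0xff };

    printf("good: %d\n", kernel_integrate(good, sizeof good));  /*  0 */
    printf("bad:  %d\n", kernel_integrate(bad,  sizeof bad));   /* -1 */
    return 0;
}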
> > > I'm also not clear as to how an agent/toolbox/program can communicate
> > > with the outside world. Is there a language primitive to communicate
> > > with external objects, or is this accomplished only through shared stacks?
> >
> > If you mean outside world as another cell on another computer, all you do
> > is put some data in either the agent stack or one of its data blocks and
> > "call" the distant cell by name. Your agent is transported to that cell
> > and begins executing its method. That method now has the agent's stack
> > and data blocks within its scope. It might pull some data from the
> > stack, steal some memory blocks from the index, and push some results,
> > then issue a return. Now the agent is back where it started, in your
> > local cell.
>
> Okay, that makes sense. Doesn't sound too secure, but....
The reason it's secure is that the receiver unpacks the data; the agent
can't do anything on its own. The receiver can invoke any level of
security it wants (simple key, complete decryption of the agent's data, etc.).
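A toy C sketch of that hand-off, with everything (agent_t, the method,
the validation rule) made up for illustration; the point is just that
the receiving cell is the one that touches the agent's stack:

#include <stdio.h>

typedef struct {
    long stack[64];
    int  sp;
} agent_t;

static void a_push(agent_t *a, long v) { a->stack[a->sp++] = v; }
static long a_pop(agent_t *a)          { return a->stack[--a->sp]; }

/* The remote cell's method: it unpacks the agent's stack itself, applying
 * whatever checks it cares to, then pushes its results and "returns". */
static void remote_cell_method(agent_t *a)
{
    long arg = a_pop(a);
    if (arg < 0) { a_push(a, -1); return; }   /* receiver-side validation */
    a_push(a, arg * 2);
}

int main(void)
{
    agent_t a = {0};

    a_push(&a, 21);             /* caller loads the agent's stack           */
    remote_cell_method(&a);     /* stands in for transport + remote execute */
    printf("%ld\n", a_pop(&a)); /* agent is "back", result on its stack: 42 */
    return 0;
}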
Mike