Languages

Mike Prince mprince@crl.com
Wed, 14 Dec 1994 15:09:05 -0800 (PST)


On Tue, 13 Dec 1994, Chris Harris wrote:
> On Wed, 14 Dec 1994, Francois-Rene Rideau wrote:
> 
> > > The primitives are 
> > > implemented by the kernel, and should be with us forever?(!).  Everything 
> > > else builds on these. 
> <snip>
> > portable high-level system. Nothing should prevent us to eventually redesign
> > the LLL (i.e. the set of primitives) and still use "our system" so that a
> > simple recompile will have the same applications running !
> 
>     I'll agree in theory, although I'm not sure what you have left in 
> "our system" if we remove the primitives....

Bingo, nothing!  You must have primitives for a migratable design.  I 
hope people aren't confusing portability with run-time migratability.

OSF and Taligent are among many who want to design portable software and 
standards.  Take your code, COMPILE it without modification to run on a 
number of different systems.  During compilation the low-level stuff is 
instantiated/created to support the higher level definition (code) of the 
application.  But you still have multiple versions of the code for 
multiple platforms.

That is not what I want to do.  I want all code/objects/whatever to be 
translated into a low-level/medium level/whathaveyou format that is 
intelligible to every system.  If we say the LLL can be upgraded, modified, 
etc., then we are falling into the trap I'm trying to get out of: backwards 
and forwards compatibility of software.

The LLL should be fixed, period.

If you don't like the speed, do an exhaustive optimization just before 
run time of the LLL code on the machine you're executing on.  You'll get 
the native binary you're looking for.  And you only need one compiler: 
LLL -> native.
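To make this concrete, here is a toy sketch in C of what a frozen LLL could 
look like: a fixed stack-machine opcode set.  Every name and opcode number 
here is made up for illustration only; the point is that once the set is 
frozen, a port needs exactly one translator, LLL -> native (a little 
interpreter stands in for it here):

```c
/* A toy sketch of a *frozen* LLL: a tiny stack-machine bytecode.
   Opcode numbers are fixed forever; all names here are hypothetical. */

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };  /* the frozen opcode set */

/* Interpret a fixed-format LLL instruction stream.  A real port would
   instead translate this stream once, at load time, into native code. */
static int lll_run(const int *code)
{
    int stack[64], sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH: stack[sp++] = *code++; break;          /* literal */
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];                  /* result  */
        }
    }
}
```

Because the opcodes never change, the same instruction stream runs on every 
machine that has this one translator, with no version negotiation at all.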

But if we start having many versions of our LLL language, we have done 
little to advance a unified coding/distribution standard.

I don't want my computer saying "Sorry, I don't know how to use X".

>     Where do objects fit in with our grand design, anyway?  Toolboxes are 
> neat, but they aren't objects.  Where does the high-level OOP stuff come 
> in, or is it just an illusion maintained by the compilers?

There is no such thing as an OO computer.  Every computer runs machine 
language (please don't flame me for ignoring neural nets and such!).  Our 
LLL should offer everything it can to facilitate the easy implementation 
of code generated by OO compilers.  But it is going to be a balancing act 
of many different goals.  It will probably be somewhere in between 
machine/forth/objects.  It will not be pure OO, but it will be small, 
fast, and secure, and provide all the primitives we need to implement OO 
as well as other language trends.
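As a sketch of that balancing act: a single hypothetical "send" primitive 
would be enough for an OO compiler to target, while the LLL itself stays a 
plain non-OO machine.  Every name below is invented for illustration:

```c
/* Sketch: one hypothetical LLL primitive, "send", lets OO compilers
   generate code for the LLL without the LLL itself being OO. */
#include <string.h>

struct object;
typedef int (*method_fn)(struct object *self);

struct method { const char *selector; method_fn fn; };
struct class_ { const struct method *methods; int nmethods; };
struct object { const struct class_ *cls; int data; };

/* The hypothetical SEND primitive: dynamic lookup, then a plain call. */
static int lll_send(struct object *o, const char *selector)
{
    for (int i = 0; i < o->cls->nmethods; i++)
        if (strcmp(o->cls->methods[i].selector, selector) == 0)
            return o->cls->methods[i].fn(o);
    return -1;  /* message not understood */
}
```

The dispatch table is ordinary data and the call is an ordinary call, so the 
primitive stays small and fast while still supporting OO front ends.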

> >    Why shouldn't I be able to assign "click_event_handler := my_click_handler",
> > or pass a function as a parameter to a generic sorting algorithm ? Have you
> > never seen Scheme, Lisp, FORTH, ML, Miranda, BETA, SELF ? Have you never heard
> > about functional programming ? I'll let jch (some friend of mine who's just
> > joined the list) expand on functional programming; but SELF or BETA
> > object-oriented programming seem highly isomorphic to functional programming
> > to me...
> 
>     Are you sure you want to do this in the LLL?  (That's what this 
> discussion's about, right folks?)  It sounds like you perhaps would like 
> to abolish the LLL altogether, and move right into the high-level stuff.  
> I will agree that SELF and BETA are great languages for high-level stuff, 
> but I can't figure out how you're going to implement all this.

Exactly.  We need to separate the low-level goals from the high-level 
goals.  I'm still getting the feeling the OO thing is trying too hard to 
permeate all levels.  A good design is one that works the best.  OO can 
create a clean interface between all the levels, but it may not yield the 
best overall design.

First we need to come up with a list of goals, then specifications, then 
an implementation strategy, and so on.  We haven't managed to do that.

So instead let's suggest ways of actually doing something.  Both Johan 
and I have described techniques for actually getting something running.

Everyone, please refrain from saying anything sucks until you have a better 
version; otherwise all our time is wasted.

The bottom line of every design is the CPU snooping around in memory 
executing machine instructions.  Please give me examples of our 
distribution code that could accomplish all our stated goals.  Saying 
that OO is cool and that the low-level stuff is implementation dependent 
or can be modified is skirting the issue.  Give me real world code and 
data structures!  Otherwise we are building a foundation in sand!
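In that spirit, here is one possible data structure: a hypothetical fixed 
header for distributed LLL modules, packed big-endian so every CPU reads 
the same bytes.  The field names and sizes are my own invention, not a 
format anyone has agreed on:

```c
/* Sketch of a fixed, machine-independent distribution header for LLL
   modules.  All field names and sizes are hypothetical. */
#include <stdint.h>
#include <string.h>

struct lll_module {
    uint8_t  magic[4];   /* always "LLL0": identifies the format  */
    uint32_t code_len;   /* bytes of LLL code that follow         */
    uint32_t entry;      /* offset of the first instruction       */
    uint32_t checksum;   /* simple integrity check over the code  */
};

/* Pack a 32-bit value big-endian so every CPU reads the same bytes. */
static void pack_u32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

static void pack_header(uint8_t out[16], const struct lll_module *m)
{
    memcpy(out, m->magic, 4);
    pack_u32(out + 4,  m->code_len);
    pack_u32(out + 8,  m->entry);
    pack_u32(out + 12, m->checksum);
}
```

Sixteen bytes, no versions, no options: anything that can parse this once 
can parse it forever, on any platform.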

> > > Again, you're mixing up the library part with the kernel part.  One
> > > part of the kernel says to another, I need some mem, give me a block.
> > > That's pretty fundamental.
> >    You're confusing kernel and device drivers. Actual memory move is
> > let to some hardware interface -- a device driver. There should be no
> > such thing as a monolithic program that everyone calls. Just have
> > everyone do things directly, call directly the good other object it
> > needs, etc. Some object uses a (persistent) memory object (which in
> > turns uses a raw memory object); it also uses a sound object, and a
> > graphic device object (which calls the hardware memory object to access
> > video memory). All the calls are *inlined* by the compiler (which is
> > automatically activated at run-time if needed). No need to inline the
> > calls by hand in a particular system to obtain some static, monolithic,
> > unmaintainable, badly interfaced, program.
> 
>     Okay, I think this has been your best display of your idea of a 
> kernel-less system.  Some of these concepts seem a tad bit strange, 
> however.  You want a memory driver, yet you need to work with memory in 
> order to load/use it.  You want a CPU driver, yet it would need to use 
> the CPU to run it.  If you can explain how this would work, and the 
> overhead wouldn't be too high, this sounds like a good idea.  (BTW, what 
> kind of messages could you send to a CPU object?  Surely there would be 
> some primitives defined there, no?)

Please enlighten me as well...

>     Okay, so everything has to be modular.  But does this really mean we 
> can't have primitives?  Not constant primitives, but ones you can use?  
> (Gotta have something to compile the HLL into....)
> 
> > >> I completely agree there is heterogeneity; what I want is smooth interaction
> > >> and communication between different objects, and eventuality of a direct
> > >> access between any indirectly linked universes.
> > >
> > > That may not be possible.  Remember everything goes to machine code and 
> > > runs with the kernel.

> >    Everything will *dynamically* go to machine code, and the dynamics is
> > controlled by the HLL.
> 
>     So the tunes "world" revolves then around the HLL?  What if we wanted 
> a secondary HLL, that would be treated with equal respect as the "normal" 
> HLL?  Isn't it too non-modular to give a high-level construct control of 
> the whole system?

Yes.

>     Possible, although this would be dangerous if people started making 
> too many calling conventions.  I guess you really need to explain what 
> your concept of an object is here, since in my mind right now, I can only 
> see a bunch of question marks shooting s'more question marks back and forth.

Ditto.

> > >>>[about blocks]
> > > Separate the users from the programmers (see previous thread).  A block 
> > > will be a system primitive, users won't see it!  Maybe even HLL 
> > > programmers won't see it.  You can't use those arguments against a 
> > > low-level primitive.
> >    Yes I can. Blocks should be provided in some abstracted LLL library, not
> > as a standard primitive of the LLL kernel. The LLL kernel should be minimal;
> > you told it yourself. And the Scheme vs. Lisp or RISC vs CISC (and MISC vs
> > RISC ?) experience show that you should never multiply language axioms.
> > KISS. Blocks will be included in some system library (again, never worry
> > about efficiency -- human/computer optimizers can do it well for you later).
> > The library will present blocks as some quick and simple implementation of
> > arrays -- a point-modifiable version of functions from an index set to
> > another set.

A big chunk of memory belongs to my CPU; who manages it?  Let's say your 
HLL has an object which somehow has thwarted all security and taken 
control of it all.  My computer is humming along when my neighbor with the new
version 5.7 of Tunes HLL wants to use my resources (i.e. memory) and has 
a newer better version of the HLL memory manager that his code relies on 
(the new version has enhanced functionality which is necessary).  What 
happens?  How does his memory manager interact with mine?  Does it take 
over?  Then EXACTLY how?  I need to hear the mechanisms.  You can't 
continue passing the buck onto a library, without explaining at least how 
objects interact.
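One possible answer, and only a sketch, is to freeze the allocator 
*interface* rather than the manager itself; then my manager and my 
neighbor's version 5.7 manager are interchangeable behind the same calls. 
The names below are hypothetical:

```c
/* Sketch: a frozen allocator interface.  Any memory manager, old or
   new, plugs in behind the same two calls.  Names are hypothetical. */
#include <stddef.h>

struct allocator {
    void *(*alloc)(struct allocator *self, size_t n);
    void  (*release)(struct allocator *self, void *p);
};

/* Client code is written against the frozen interface only, so it
   never cares which manager implementation is behind it. */
static void *get_block(struct allocator *a, size_t n)
{
    return a->alloc(a, n);
}
```

The mechanism question then reduces to: who holds the `struct allocator` 
pointer for a given chunk of memory, which is a much smaller problem than 
two whole managers fighting over the same RAM.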

My friend in Katmandu is using a modified version of 
Tunes (remember how wonderful OO is) and another guy "Down-Under"
is working with yet another modified version (of course not modified the 
same way).  They both link up on the Internet.  How do they communicate?
Again, please give me exact data formats.  Because both versions of Tunes 
could have mutated, they both should at least be able to share a common 
communications format.  Is that not a standard?

And if you say each must support a wide range of formats, just in case, 
then you are again skirting the interoperability and migration issue.  My 
LLL should run on any computer, anywhere, anytime; no questions asked.
It's not acceptable to have the computer say "I can't understand Fred 
because I don't have the proper communications drivers."
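For what such a common format might look like, here is a minimal 
hypothetical wire frame: one type byte, a big-endian length, then the 
payload.  The layout is invented, but anything this simple could be parsed 
by any mutated Tunes:

```c
/* Sketch of one fixed wire frame both mutated systems must speak.
   Hypothetical layout: 1-byte type, 2-byte big-endian length, payload. */
#include <stdint.h>

struct frame {
    uint8_t        type;     /* what kind of message this is   */
    uint16_t       len;      /* payload length in bytes        */
    const uint8_t *payload;  /* points into the received bytes */
};

/* Returns 0 on success, -1 if the buffer is malformed or too short. */
static int frame_decode(const uint8_t *buf, uint16_t buflen, struct frame *f)
{
    if (buflen < 3) return -1;
    f->type = buf[0];
    f->len  = (uint16_t)((buf[1] << 8) | buf[2]);
    if ((uint16_t)(buflen - 3) < f->len) return -1;
    f->payload = buf + 3;
    return 0;
}
```

A receiver that meets an unknown `type` can still skip the frame cleanly, 
because the framing itself never changes, which is exactly the property a 
frozen standard buys you.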

>     These libraries (implemented as objects?) could then provide a sort 
> of compatibility across LLLs, couldn't they....  I'd once again like to 
> agree, but I'd again need more details about how the libraries are 
> implemented and how you implement a LLL without such primitives....
> 
> > > Nah.  The LLL is what glues everything together.  I'm in Bangladesh (Year 
> > > 2005) using some leftover Pentium system running tunes v1.0.  I've got 
> > > all my little objects humming along together (remember everything boils 
> > > down to LLL) when I get an internet++ hookup.  All my little objects jump 
> > > for Joy as they realize their ability to migrate onto a nearby SparcStation
> > > 101010.  Alas, because we did not standardize on one LLL instruction set, 
> > > they are incompatible.
> > >
> > > Don't let this happen to you!
> >    We will standardize the LLL; only we say that, yes, this standard is
> > arbitrary,

Standards are not arbitrary; if they are, then you don't have a standard.

> > and, yes, we may upgrade anytime to a new, better standards
> > that we'll design before 2005; but programs using the older LLL may
> > still work after proper translation/interpretation,

You've just gotten yourself into the big forwards/backwards compatibility 
problem.

> > and we'll provide the
> > necessary modules for that (or if you're using some old processor which
> > had a native module for the old LLL, you can use it directly).

You must think smaller.  I have the feeling you're assuming 50 megs of 
RAM on every system.  Try designing for 64K; would your system still run?
I want to run across a wide range of platforms (from Crays to embedded).

> 
>     I like it!  =)  I guess under this scheme, we no longer have definite 
> HLLs and LLLs.  It all becomes rather relative.  Would we allow for a 
> SHLL (Super-high level language) to be compiled into a HLL, which is then 
> compiled into a LLL, and finally converted to native code?  It could get 
> inefficient, but it should be supported....
> 
> >    As we already noticed, the vocabulary frontier is having us argue when
> > we agree. Who's maintaining the glossary project, already ? So our arguments
> > are settled once and for all...

I still have the text; make some suggestions and they'll be chiseled in 
stone!

> > Let the system be *open*.
> 
>     Applause....  =)

I'd love for it to be.  But please explain how to do that and still make 
processes migrate to a variety of platforms efficiently.  Also explain 
how to do that quickly onto an embedded system, or a massively parallel 
machine, etc.  I think this open thing is thwarting our attempts to make 
this thing work fast.  A system that crawls will never be used.  A system 
that needs tens of megs and fast CPUs will have a more limited market.

> > > Fat binaries encourage obesity.  I'm for a pure LLL distribution.
> >    And how will you distribute the latest hyper-fast bitblt routines ?
> > Or 3D game engine ? In LLL ? Let me laugh. Highly optimized code is needed

BitBlt is usually done by an engine, easily programmed by the LLL.  A 
3-D game engine is, you said it, an engine, being passed relatively 
high-level commands by the LLL.  The engine would be bundled with the 
kernel; you turn on the system and it's already there, appearing just 
like any other object.  The difference is that the engine and drivers 
are in machine code and are hidden.  I've heard this approach critiqued 
before, but again, give me a better solution.  Fat binaries with PGP 
signatures are not an option; if you let machine code float around 
between systems you are asking for big trouble.  That's when your good 
security (PGoodP) is not good enough!

Enough for today,

Mike (programmer by night)