Priorities

Tril <dem@tunes.org>
Mon, 4 Jan 1999 15:55:55 -0800 (PST)


On Sun, 3 Jan 1999, Anders Petersson wrote:

> >On Sat, 26 Dec 1998, Anders Petersson wrote (in unios@onelist.com):
> >
> >> I was also thinking of "user" as the *final users* of the system, not
> >> developers. I think these two groups are so different in their needs that
> >> they should be separated from each other. The users could retain their
> >> place, while developers get a higher rank.
> >
> >Here is my philosophy.  Final users are the most important.  They are the
> >people who the system should be most convenient for.  The needs of users
> >are so diverse that pre-packaged applications (written by someone else)
> >can't always work.  The best solution in every case is to get a program
> >written specifically for that user, for the problem they have right then.
> >It only makes sense, then, to have the system assist users in creating
> >programs to solve their problems.
> 
> It turns out that users are developers too.
> I have a somewhat differing view of how this will be made possible. Instead
> of compact application packages, programs are divided into smaller
> components, each one independent. The application has one way it uses the
> components by default, but anyone can use the components as building blocks
> or tools to accomplish whatever he wants. This can be seen as some sort of
> very high level programming, like all UNIX commands, but with a finer level
> of control and more possibilities than in UNIX.

Yeah, I agree.  The user builds a program (tool) specifically for the task
at hand by constructing it out of small tools.  Similar to UNIX, except
in our system this philosophy will be consistent system-wide, not just at
the shell.  A big difference is that the tool construction system IS the
programming language and the user interface, and the low-level system is
built with it, too.
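
To make that concrete, here is a minimal sketch in Haskell (my choice
purely for illustration; none of the snippets in this message are actual
TUNES code).  A "tool" is just a function, and the task-specific program
is nothing but a composition of tools:

  import Data.Char (toUpper)
  import Data.List (sort)

  -- Three small, independent tools.
  splitWords :: String -> [String]
  splitWords = words

  sortWords :: [String] -> [String]
  sortWords = sort

  shout :: [String] -> [String]
  shout = map (map toUpper)

  -- The "program" for the task at hand is just the tools plugged
  -- together, like a UNIX pipeline, but inside the language itself.
  myTool :: String -> [String]
  myTool = shout . sortWords . splitWords

  main :: IO ()
  main = print (myTool "tunes unifies the language and the shell")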

> You're speaking about _language designers_. I'm not that. mOS limits itself
> to the "public" design of the system - the language used is not dealt with.

That's a major difference between UniOS and TUNES, since in tunes the system
IS the language.  The object system is used to implement any language
features or specific languages you need.  It seems like in UniOS, the
objects are written in a language, but the language itself is not composed
of objects.  Am I right?

> >* Try to figure out what to change in the source code to correspond with
> >the binary change you made. (you and the system would work together on
> >this, i.e. the system would try to figure it out but you would help if it
> >failed)  It may not be possible to express the change in the language used
> >in the source, if so, then one of the below options would have to be used. 
> 
> This is just too *unrealistic* to be commented on.

Not if the source code was the "parent" (in mOS terms) of the binary, and
was "notified" of changes in the binary...

> >* Add a low-level "note" that the change you made should be used
> >instead of the regular binary code, next time that same code is generated
> >from the source.
> 
> The problem is that you can rely on the fact that no sane person would
> edit binary code with an honest purpose.

To tweak it?  I think there are a lot of people who like messing with code
at an ASM level.  As Fare always says, "if we don't support it, someone
else will add it in the system, and make our system useless."  It's the
user who decides whether or not they need to mess with binary code.  If
they don't, the feature can be removed from their system.

> >* The change causes a "copy on write", or a duplicate of the binary that
> >no longer depends directly on the source. (it would still have a history
> >that it originally came from that source, but it would not be dependent in
> >the sense that it could be automatically invalidated and changed when the
> >source changes.  That is to preserve your change.)
> 
> Same here.
> 
> >* Or something else you can think of can happen.
> 
> My version is: shit happens. No system (at least not ours) can assure to
> 100% that code is valid. It could be hardware failures or hackers. If
> someone alters the code so the program doesn't care for security, well,
> then you've got no security any more.
> I don't believe in compile-time security.

In tunes, since the source is around all the time, any time is compile
time.  Compile-time security is all the security you need.  (See below)

> >> >> In TUNES we say stability and reliability are just by-products of
> >> >> security (strong typechecking).  So when we say security we mean all
> >> >> three.
> >> 
> >> They are very much interrelated. But this strong typechecking idea is not
> >> clear for me.
> >
> >Everything you do involves a typecheck.  Moving the mouse, typing text,
> >running a program, deleting an object, etc.  Nothing will happen unless
> >the operation to be done and the object it is operating on match their
> >types.  If they don't match, that is a type error.  All errors are type
> >errors.  There is a flexible system to describe what to do on type errors
> >(each type error can have its own behavior, the user can customize, there
> >can be defaults, etc).  Everything in the system has a type.  All types
> >are explicitly defined...  much of what you do in programming languages
> >today is just creating new types in my model.
> 
> There must certainly be other errors than type errors. How about
> communication failure, division by zero or missing hardware?

Division by zero: well, for sure this is a type error; it's been a type
error in current systems.  It's a simple domain error: the domain of the
denominator of division excludes zero -- it is a type which does not have
zero in it.  Therefore attempting to divide by zero is an expression which
is incorrect.  It will never be evaluated in a strong type checking
environment.
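
Here is a minimal sketch of the domain idea in a present-day strongly
typed language (Haskell, purely as illustration; NonZero and divide are
invented names):

  -- Values of this type can never be zero.
  newtype NonZero = NonZero Double

  -- The only way to obtain a NonZero is through this check.
  nonZero :: Double -> Maybe NonZero
  nonZero 0 = Nothing
  nonZero x = Just (NonZero x)

  -- Division demands a NonZero denominator, so "divide by zero" is not
  -- even a well-typed expression; it is ruled out before evaluation.
  divide :: Double -> NonZero -> Double
  divide n (NonZero d) = n / d

  main :: IO ()
  main = case nonZero 0 of
    Nothing -> putStrLn "domain/type error caught; no division ever ran"
    Just d  -> print (divide 10 d)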

I assume you are referring to missing hardware at boot time.  The boot
sequence (hypothetically) includes an operation to "add driver for device
to system."  This operation's type is only valid for working drivers.  So
the driver is evaluated first, and if it decides the hardware is not
working, it won't become a driver of the right type, and the "load device
driver" operation will fail due to a type error.  This may seem contrived,
but all errors can be made into type errors in this way -- by having all
assumptions explicit in the argument types of operations.  To me, this
seems to make the system a whole lot simpler: If type checks work, you can
be sure the system is working properly!
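
A hypothetical sketch of the driver case (WorkingDriver, probe, and
loadDriver are invented names; a real boot sequence would of course be
more involved):

  -- The only way to obtain a WorkingDriver is a successful probe.
  newtype WorkingDriver = WorkingDriver String

  probe :: String -> Bool -> Maybe WorkingDriver
  probe name hardwarePresent
    | hardwarePresent = Just (WorkingDriver name)
    | otherwise       = Nothing

  -- "Add driver to system" only accepts the WorkingDriver type, so the
  -- assumption "the hardware works" is explicit in the argument type.
  loadDriver :: WorkingDriver -> IO ()
  loadDriver (WorkingDriver name) = putStrLn ("loaded driver: " ++ name)

  main :: IO ()
  main = case probe "ne2000" False of
    Just d  -> loadDriver d
    Nothing -> putStrLn "type error: no WorkingDriver, load never applied"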

Communication failure: some operation expects an argument (of some type,
doesn't matter what) from the networking system.  At a certain time, the
operation is going to be evaluated (it is scheduled to do so because in
order for there to be a communication failure, communication must be
expected at regular intervals), meaning it processes the network input.
Well, if the interval comes up, and the argument is not ready, then the
expression (the operation and its argument) is not complete, and therefore
a grammar type error occurs.
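
A rough sketch of that framing (Arrived, Missing, and process are made
up for illustration):

  -- What the network delivered by the scheduled interval, if anything.
  data Arrived a = Arrived a | Missing

  -- The operation itself only accepts an actual value...
  process :: Int -> String
  process n = "processed packet " ++ show n

  -- ...so the operation applied to a Missing argument is an incomplete
  -- expression and is rejected rather than evaluated.
  evaluate :: Arrived Int -> Either String String
  evaluate (Arrived n) = Right (process n)
  evaluate Missing     = Left "type error: incomplete expression"

  main :: IO ()
  main = do
    print (evaluate (Arrived 42))
    print (evaluate Missing)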

> >> >Stability - The need for the components that the system is composed of,
> >> >to not (be able) scrutinize the flow of run time code.  This does allow
> >> >for incorrect object usage (if binary is corrupted), as that is a logic
> >> >bug, which should not take down the system.  As you say, this will
> >> >probably involve a typecheck system.
> >> 
> >> _Compile-time_ typecheck?
> >
> >Good question.  Typechecks are done as soon as possible, but sometimes it
> >can't be until runtime.  However there is a major difference in my system
> >regarding the relationship between compiled and interpreted.  Everything
> >is partially evaluated, and "compiled" is just one form an expression can
> >be in.  "interpreted" means compiling each sub-expression, then running
> >its code, before compiling the next sub-expression.  If some
> >sub-expressions have been compiled before, the "compile" function can have
> >its results (for that sub-expression) memoized (cached), so it looks like
> >the expression was compiled again but it really just read the saved result
> >of the last time it was compiled.  (done when storage is cheaper than
> >the computation of recompiling that expression)
> >Note that everything is an expression and the compiler is just a function.
> 
> Umm... I think you will have to elaborate if you want me to understand. Or
> maybe this is as clear as it can get. I hope not, for all of us.
>
> >Back to type checking, it can occur anytime before an expression is
> >evaluated, but it MUST occur or the expression can't be evaluated.  Type
> >checking ensures that the meaning of the expression doesn't break the
> >consistency of the system.
> 
> Is this something like Java? Interpreted programs? Sounds even slower than
> my old 386...

What I was trying to say above is you provide the source for programs, and
the system decides whether to compile or interpret the source.  That is,
all code is run by the same program ("evaluator"), which is both a
compiler and interpreter.  It can choose itself which to use, or you can
tell it as an option.  In addition, each subprogram (function in the
program) can be compiled or interpreted.  So the extent to which a program
is compiled/interpreted can vary.

Compiling means generating binary code for a whole program, then running
it.  Partial interpreting means generating binary code for each subprogram
of the program, then stepping through the instructions in the main program
and using them as selectors to call the just-compiled subprogram.  Full
interpreting means never generating any binary code (just using code that
is already on hand).

It's not so simple as that, though.  Compiling can become interpreting. 
If the program was compiled before and there was memory around to store
the binary code, it won't have to be recompiled unless the source changes.
So the program is "interpreted" by just running the binary code you've
already got.  I consider this the definition of interpreting: running
precompiled binary code.  In current systems a "language interpreter" runs
precompiled code resident in the language interpreter.  In Tunes, there is
no distinction between what is the language and what is not, so the
boundary of the "language interpreter" depends more on what code was
compiled before, than on what some language designer decided would and
would not be in the language.
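
In miniature, and with invented names (Cached, compileSrc, run -- none
of this is real TUNES code), the caching point looks like this:

  import Data.IORef

  -- The compiled form is kept together with the source it came from.
  data Cached = Cached { cachedSrc :: String, cachedFn :: Int -> Int }

  -- Stand-in for real compilation: the "source" just names a function.
  compileSrc :: String -> (Int -> Int)
  compileSrc "double" = (* 2)
  compileSrc _        = id

  -- Reuse the stored binary if the source is unchanged; otherwise
  -- recompile and remember the result.
  run :: IORef (Maybe Cached) -> String -> Int -> IO Int
  run cache src x = do
    hit <- readIORef cache
    case hit of
      Just c | cachedSrc c == src -> return (cachedFn c x)
      _ -> do
        let fn = compileSrc src
        writeIORef cache (Just (Cached src fn))
        return (fn x)

  main :: IO ()
  main = do
    cache <- newIORef Nothing
    run cache "double" 21 >>= print   -- compiles, prints 42
    run cache "double" 10 >>= print   -- cache hit: reuse the old binary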

The point is to unify compiled and interpreted languages.  This is good
for the language programmer because if you write a language, the ability
to compile and interpret it are available at the same time, instead of
having to write both.  It is good for the user because the system can
switch between compiling and interpreting a program, depending on whether
the user is changing the source a lot:  Interpreting is better for
debugging and development; compiling is better for speed when the code is
stable.

We can say that the "evaluator" abstracts the mechanism of running a
program.
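
As a toy sketch of that abstraction (the Expr language, Mode, and
evaluate below are invented for illustration and are not the TUNES
evaluator):

  data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  data Mode = Compile | Interpret

  -- "Compile": turn the whole expression into one host-language
  -- function up front, then run it.
  compile :: Expr -> (() -> Int)
  compile (Lit n)   = \_ -> n
  compile (Add a b) =
    let fa = compile a; fb = compile b in \_ -> fa () + fb ()
  compile (Mul a b) =
    let fa = compile a; fb = compile b in \_ -> fa () * fb ()

  -- "Interpret": handle each sub-expression directly as it is reached.
  interpret :: Expr -> Int
  interpret (Lit n)   = n
  interpret (Add a b) = interpret a + interpret b
  interpret (Mul a b) = interpret a * interpret b

  -- One evaluator; it can be told which mechanism to use, and the
  -- caller never needs to know the difference.
  evaluate :: Mode -> Expr -> Int
  evaluate Compile   e = compile e ()
  evaluate Interpret e = interpret e

  main :: IO ()
  main = do
    let prog = Add (Lit 2) (Mul (Lit 3) (Lit 4))
    print (evaluate Compile prog)    -- 14
    print (evaluate Interpret prog)  -- 14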

That just explains the "compile-time" part of your question: yes,
type-checking is done at compile-time, or interpret-time, which are the
same thing.  Type checking happens before evaluation happens, whether
evaluation is compiled or interpreted:  a program with a type error can't
be compiled or interpreted.  How does it happen before evaluation, if
evaluation is done by the "evaluate" function?  Well there has to be
another evaluator in order to evaluate the "evaluate" function!
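
To show just the "check before evaluate" part in miniature (again an
invented toy language, not TUNES itself):

  data Expr = I Int | B Bool | Add Expr Expr | If Expr Expr Expr
  data Ty = TInt | TBool deriving (Eq, Show)

  -- The check runs over the whole expression first...
  check :: Expr -> Either String Ty
  check (I _) = Right TInt
  check (B _) = Right TBool
  check (Add a b) = do
    ta <- check a
    tb <- check b
    if ta == TInt && tb == TInt then Right TInt else Left "Add wants Ints"
  check (If c t e) = do
    tc <- check c
    tt <- check t
    te <- check e
    if tc == TBool && tt == te then Right tt else Left "ill-typed If"

  -- ...and only a program that passes ever reaches evaluation.
  eval :: Expr -> Either String Int
  eval e = case check e of
      Left err   -> Left ("type error, never evaluated: " ++ err)
      Right TInt -> Right (go e)
      Right t    -> Left ("program has type " ++ show t ++ ", not Int")
    where
      go (I n)      = n
      go (B b)      = if b then 1 else 0
      go (Add a b)  = go a + go b
      go (If c t f) = if go c /= 0 then go t else go f

  main :: IO ()
  main = do
    print (eval (Add (I 1) (I 2)))     -- Right 3
    print (eval (Add (I 1) (B True)))  -- type error, never evaluated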

[The following may not make sense, please ask for further clarification]
Whichever evaluator is implicit is the one that does the typecheck.  That
way type errors never occur explicitly in an environment and are
always caught "just in time."  That's all I'll explain about that for now.
(explaining it fully involves reflection, and is the direction we need to
go, but some other time.)

David Manifold <dem@tunes.org>
This message is placed in the public domain.