Don't hardwire OS features (was Re: Build OS features in Software, not Hardware)
Alaric B. Williams
alaric@abwillms.demon.co.uk
Tue, 3 Dec 1996 01:01:25 +0000
On 2 Dec 96 at 9:29, Francois-Rene Rideau wrote:
> I named my previous reply to the vague "New OS idea" thread
> "Build OS features in Software, not Hardware".
> But it is more general than that:
> it is about not hardwiring any feature at any low-level place.
> After all, it is the hardware+OS, not the mere hardware,
> that constitutes what the user-programmer sees!
> And this also applies to applications with hardwired interfaces, etc.
Yup. The precise placement of the line is arbitrary, but it cannot be
above the line the programmer sees as the OS <-> my stuff interface,
unless the compiler is real smart and factorises complex semantics
out of the instructions it gets!
> Objects should bring meaningful information without
> this information being tied to noise.
> Current OSes and development systems
> have very low signal/noise ratio.
> This can change.
> I imagine a system where the signal/noise ratio
> tends asymptotically to +oo.
Aaaahhhhh :-)
> > I'd phrase it like so: MMUs check software for reliability the
> > /empirical/ way; let it out to play, on the end of a rope; if it falls
> > down a hole, we can haul it out.
>
> Yes, and that would be fine if tasks did not have to communicate
> and rely on each other.
> Only no task can be useful if it is thus isolated!
> Utility of a task is proportional to the knowledge
> it can exchange with the rest of the system;
> hence an MMU is not very useful, because it only
> manages gross approximations to security concerns,
> that can be done better in software,
> given proper compiler support
> (of course, MMUs are fit to current unsafe compiler technology).
I love your capacity for damning remarks, usually including the word
'lame' ;-)
> Imagine that you have a routine
> that can display objects of format XXX very fast,
> but that would do nasty things if given bad input.
> Imagine that fully checking the format would be dead slow
> (imagine an NP-complete checking problem).
Right.
> With Tunes, you could add a system-enforced prerequisite
> that input to the routine should be of the right format,
> and it would ensure that only objects of the right format
> are given to the routine
That'd be a precondition in Eiffel or the proposed ARGOT language.
> (which could involve full slow checking when no other proof is found).
It could involve satanic divination rituals if the implementor sees
fit!
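To pin that down, here's a minimal C++ sketch of the idea - all names
are mine, not Tunes': the fast routine is only reachable through a
guard that demands either a proof of validity or the full slow check.

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Hypothetical full format check; possibly dead slow.
    bool is_valid_xxx(const std::vector<uint8_t>& data);

    // Fast, but does nasty things on malformed input.
    void display_xxx_unchecked(const std::vector<uint8_t>& data);

    // The system-enforced prerequisite: either the caller carries a
    // proof (proven_valid), or we fall back to full slow checking.
    void display_xxx(const std::vector<uint8_t>& data, bool proven_valid) {
        if (!proven_valid)
            assert(is_valid_xxx(data));
        display_xxx_unchecked(data);
    }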
> Imagine that some operation on an object must be atomic wrt some other.
> In current unsafe systems, you have to rely on a conventional lock system,
> that cannot exclude deadlocks.
> In Tunes, some resources could have a prerequisite
> that they cannot be locked by objects
> not proved to release the lock in time.
Proof of this could result in having to prove that an algorithm will
terminate - that old chestnut!
I'd be more inclined to research alternatives to standard locks. Not
that I have yet, though, so don't get too excited.
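The nearest runtime approximation I can picture in conventional C++
is bounding the wait rather than proving anything - a sketch, with
std::timed_mutex and names of my own invention:

    #include <chrono>
    #include <mutex>
    #include <stdexcept>

    std::timed_mutex resource_lock;   // illustrative shared resource

    // Instead of statically proving that the holder releases the lock
    // in time, bound the wait and treat a timeout as a suspected
    // unreleased lock.
    template <typename F>
    void with_resource(F critical_section) {
        std::unique_lock<std::timed_mutex> lk(resource_lock,
                                              std::chrono::milliseconds(100));
        if (!lk.owns_lock())
            throw std::runtime_error("lock not released in time");
        critical_section();
    }   // lock released on scope exit, even on exceptions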
> In Tunes, the interrupted task would basically be told
> "now do your best to yield execution ASAP,
> after putting the system in a rightful state".
Yes, that's the kind of thing I'm aiming at with implicit yields. But
there's a temporal and informational efficiency penalty!
> Inserting yields shouldn't bloat the object code,
> since it will be compressed out in disk-stored versions of object code,
> and only inserted at execution time, when they can indeed be inlined!
> Plus all the bad-case-checking made unneeded by cooperation
> will help win back the space lost by those yields().
I was worrying about the in-memory space! Larger machine code = more
swapping...
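For illustration, the transformation I read into this looks roughly
like the following C++ - yield_if_needed() and preempt_requested are
made-up names:

    extern volatile bool preempt_requested;   // set by a timer interrupt
    void yield_to_scheduler();                // cooperative switch

    // The cheap test the execution-time compiler would plant on each
    // loop back-edge; usually not taken, so nearly free.
    inline void yield_if_needed() {
        if (preempt_requested)
            yield_to_scheduler();
    }

    long sum(const long* a, long n) {
        long s = 0;
        for (long i = 0; i < n; ++i) {
            s += a[i];
            yield_if_needed();   // inserted at run time, absent on disk
        }
        return s;
    }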
> > Trap handlers in the OS don't take up much space, and I don't expect a
> > runtime compiler to be a small piece of software.
> >
> Trap handlers in the OS do take up space.
Yes, but not an incredible amount! They fit in QNX's 8k, along with
some other stuff, right :-)
> More than the total size of an embedded application.
Hmmm...
> > A QNX kernel with hardware memory protection is 8k. An ARGON kernel
> > with a bytecode compiler in it will probably be more like 200k.
>
> The whole of an embedded system could fit the same 8k, not just the kernel.
> And of course, there is no need for any compiler to fit into a kernel,
> or for a kernel altogether.
Indeed; an ARGON embedded system would consist of hardcoded ROM
entities, with state in RAM. They would always exist in the active
state, never being folded into portable form due to the lack of
secondary storage...
> > Really? How do they do that? If that stands for Minimal Instruction
> > Set, then prepare for even more object code bloat if we have to
> > produce four or so 32-bit instructions for something a 486 could do in
> > a couple of bytes (AAM comes to mind...)
> Minimal Instruction Set Indeed!
> As for bloat, I see no problem with size of speed-critical sections,
> as long as it runs blazingly fast!
Yeah, but it's /all/ big, not just the good bits :-)
Unless we have pairs of CPUs - a MISC controller, with a set of CISC
coprocessors to do stunning single-opcode FFTs, raytrace in hardware,
that sort of thing!
> For other sections where code density counts,
> then compression should be achieved by threading methods,
> and MISC computers allow *very* compact and efficient threaded code,
> with calls that fit in 15 bits!
That's interesting. What do you mean by that?
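My own guess at a reading - pure speculation on my part - is
token-threaded code, where the program is a vector of 15-bit indices
into a routine table, so a call costs two bytes instead of a full
call instruction:

    #include <cstdint>

    typedef void (*Routine)();
    extern Routine routine_table[1 << 15];   // up to 32768 primitives

    void run(const uint16_t* code) {
        // Each token names a routine in its low 15 bits; token 0 halts.
        for (uint16_t tok; (tok = *code++) != 0; )
            routine_table[tok & 0x7fff]();
    }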
> You just can't beat that on an Intel or a RISC!!!
> And you just can't say that MISC is slower than CISC
> just because it doesn't emulate CISC as fast as CISC runs!!!
I agree with that. [R|M]ISC is easier to compile efficient code for,
I guess. Not that I've ever tried to write a compiler for anything
non-x86... :-(
> You should compare Size/Speed of whole chunks of equivalent code,
> between systems of comparable technology/price.
> Hence, for the price of a single PPro system,
> you should compare it to a cluster of tens of MISC units in parallel!
One thing I don't champion is Intel hardware! :-)
> > Hmmm. I would beg to disagree; I interpret that as the RISC-style
> > argument that we should have real simple hardware being driven by
> > software. The software has a small set of operations from which it
> > builds everything else up.
> >
> Exactly! Don't have bloated hardware that costs a lot,
> can't adapt, and does things slowly,
Bloated hardware can do things faster... do you disagree that if
something can be done in software on a general purpose CPU, a circuit
can be made to do it much faster? Yes, it's hardwired and inflexible,
but if we're building a computer for multimedia applications we can
guess that it'll benefit from hardware MPEG decoding!
> > Well, I think that's a fine place to start at. Sure, use a RISC CPU.
> Nope. Use a *MISC* CPU. Clusters of them if you need power,
> though one is enough as a cost-effective solution.
RISC/MISC - terminology. Perhaps confusingly, I was referring to the
original idea of RISC, which is indeed now called MISC :-)
> This only proves
> that efficient code generation should be machine-specific,
> and that a really efficient portable language
> should not be low-level, but high-level,
> so that it transports enough information for such
> high-level optimizations to take place!
Yup!
> FFT should NOT be in any instruction set.
> Instead, it should be in compiler libraries where its place belongs.
That's semantically identical to it being in an instruction set! Note
that qsort(), the quicksort, is in the C instruction set...
> Yes yes yes. Object code should be irreversibly produced for a particular
> target, while a *high-level* language of maximal possible expressivity
> should be used to represent user-level and/or portable objects.
Until we develop a truly fundamental theory of computation,
anyway...
> > Reminds me of a design for the "perfect" ARGON workstation - one CPU
> > per object, each with the RAM that object uses! :-)
>
> Nope. Because a good system's software objects
> will dynamically adapt to user interaction at blazing fast rate,
> whereas the hardware evolves only very slowly.
> Hence it is not feasible to buy and install a new CPU
> for every single object being allocated.
Ah but you've got so many CPUs that the system never needs to fit two
on one... and indeed one entity can straddle many CPUs. But this is a
dream machine, so let's not get too excited about its
implementation!
> > Hmmm. Modern compilers (such as gcc) can write better assembler than
> > most humans - except for such chestnuts as C's semantics being too
> > lightweight; there's no rotate instruction in C, and the compiler
> > cannot isolate pieces of code that are equivalent to rotates well
> > enough to use hardware ROT instructions.
>
> This is no problem. See how linux/include/asm*/*.h does it. Neat!
Yeah, but that's not part of the C standard; writing "portable" ANSI
C, we can't assume anything like that, and have to write rotation in
terms of << and >> and | and all that stuff.
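For the record, the shift-and-OR formulation is the standard idiom -
a good compiler may spot it and emit a ROT, but nothing guarantees it:

    #include <cstdint>

    // Rotate x left by n bits using only what ANSI C gives us.
    inline uint32_t rotl32(uint32_t x, unsigned n) {
        n &= 31;   // keep both shift counts well-defined
        return (x << n) | (x >> ((32 - n) & 31));
    }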
> Actually, the whole of a compiler could be written like that,
> if given a more expressive tactic language
> than plain non-recursive cccp macros.
Yup!
> > A language like Eiffel or maybe Ada is well suited for high and low
> > level coding.
>
> Eiffel and Ada *suck*. Use functional languages instead:
> CLOS, OCAML, Clean, etc.
Another of those cutting comments :-) My plan for ARGOT (the
language) bytecode contains some functional features. I won't be
studying much functional theory until I get into University, though.
Unsurprisingly, I want to take a degree in computer science :-)
My admissions interview for Cambridge is on Thursday - wish me luck!
> >> This means the end of the need to explicitly generate/parse,
> >>check/blindly-trust and convert/ignore file formats,
> >>which eats most CPU time,
> >
> > But there need to be agreed formats!
> >
> Yes, but agreement should be found within the system
> without the casual user having to know about it!
Yup; though I don't see how this can be done completely
automatically, without the programmer's input!
> > How do we transfer a copy of A's picture to B?
> >
> *We* don't. The computer system does it silently thanks
> to its RGB->YUV tactic.
The royal we :-)
The programmer must have told the system about some kind of way of
folding an image to a data structure consisting of only standard
objects, which can be reconstructed elsewhere, though - independently
of the internal representation of images at either end.
> > If A and B used the same internal representation of pictures, then we
> > could transparently transfer that array of integers. But they don't.
> >
> > Converting Yu'v' to RGB will induce a loss of information, for a
> > start!
> >
> Only if the converted version is meant as a replacement for the RGB version!
> On a Good System (TM), the YUV version will only be a cache for displaying
> the original RGB object on YUV systems. The system knows that
> the original object where the information is taken from is the RGB thingy.
But this is assuming that the RGB system understands some concept of
Yu'v', and the Yu'v' system understands RGB. And they must both have
a concept of the UVW space used by the truly weird computer in the
corner.
I'd be inclined to say "Right, Yu'v' with 32:16:16 bit precision is
enough to specify an image for the human eye. We can transmit an
image as three arrays, one for each component, and we can tell the
system it's safe to apply lossy waveform compression to these arrays
in transit."
Things like "array" are 'atomic' for transmission, in that they
explain themselves, as do things like integers.
We might want to allow variable precision on those Yu'v' channels. In
fact, why not. Let the precision be controllable by the programmer,
defaulting to 32:16:16.
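As a sketch of such a wire format - types and names are mine, just
to pin the idea down:

    #include <cstdint>
    #include <vector>

    // Three self-explaining planes, with per-channel precision the
    // programmer may tune, defaulting to 32:16:16; the transport layer
    // is allowed to apply lossy compression to the arrays in transit.
    struct YuvImage {
        uint32_t width, height;
        uint8_t  bits_y, bits_u, bits_v;   // defaults: 32, 16, 16
        std::vector<uint32_t> y;           // luminance plane
        std::vector<uint16_t> u, v;        // chrominance planes
    };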
> > I take it you're saying that the interface shouldn't be written into
> > the meat of the code, right? As in we should be dealing with:
> >
> > class Complex {
> > void add(Complex &c);
> > void mul(Complex &c);
> > };
> >
> > rather than
> >
> > class Complex {
> > void process_command_string(char *text);
> > char *render();
> > }
> >
> Right.
Good, I understand thus far :-)
> > The latter class can be communicated to by the user at runtime,
> > whereas the former class can be communicated to by programmers! The
> > first case defines no user interface at all, whereas the second case
> > would force the use of code like:
> >
> > Complex a,b;
> > a.process_command_string("Please add to thyself the complex number " +
> > b.render());
> >
>
> Now you also know why /bin/*sh, m4, TCL, some C/C++ packages
> and all other such stuff suck:
> they use dirty backdoor evaluation as a hack to compensate
> their lack of recursive or reflective expressiveness.
Another wonderful nice sentence :-) I'm not flaming you for these,
BTW, don't be offended... I just admire your directness!
> > How do you intend to generalise the definition of interfaces like
> > that?
> >
> Do you want a formal definition for an interface?
If you think it'd help me!
> * a channel is a structure with which an object can send data to another;
> it need not be a stream of raw binary data.
> For the object sending the data, the channel is output;
> for the object receiving data, the channel is input.
Yup; same in ARGON. A channel has to have the possibility of being
represented as streams of bytes, though... we can't expect it to be
able to transmit infinite data sets etc, or parallel quantum
possibilities, or... (minor technical point!)
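In C++ terms I picture something like this - my names, and subject
to the byte-stream caveat above:

    // A typed channel: output for the sender, input for the receiver,
    // not necessarily a stream of raw binary data.
    template <typename T>
    struct Channel {
        virtual void send(const T& datum) = 0;   // the sender's end
        virtual T receive() = 0;                 // the receiver's end
        virtual ~Channel() {}
    };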
> * a *display* function for type T on output O
> is given by a function f from T to O,
> and a reconstruction function g from O to T,
> such that someone observing the output
> from f applied to an object of type T
> can reconstruct the object thanks to g.
Indeedy.
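In symbols - my restatement - a display for type T on output O is a
pair (f, g) with

    f : T -> O,    g : O -> T,    g(f(x)) = x  for all x in T

so g recovers the object from its observed output.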
> Example: 123896 is an integer being displayed;
> you can reconstruct the meaning of the integer from this string output.
Uhuh.
> Example: outputting a 1000-line text on a tty without scrolling
> is displaying the text. Because of the time needed to read things,
> one may consider that an acceptable text display should
> pause before scrolling forward.
Uhuh. Here I'd make the point about implicitly splitting input from
output when we have 'interactive' controls like slider bars.
> * for any input I and output O connected to the same external object,
> the couple (I,O) is named terminal I/O.
Uhuh.
> * a *reversible interaction* on a terminal I/O
> is a function from I to O with internal state S,
> such that for any two states s1,s2,
> there exist input sequences t1 and t2 to go from
> state s1 to s2 and from state s2 to state s1 respectively.
> Note: an I/O-interface is *not* a particular O-display.
Erm... yup.
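Restated in the same spirit - delta being the state-transition
function, notation mine:

    for all s1, s2 in S there exist input sequences t1, t2 with
        delta(s1, t1) = s2  and  delta(s2, t2) = s1

i.e. no interaction ever strands you in a state you can't back out
of.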
> * An *interface* function for type T on terminal I/O
> is a display function from type T
> to reversibly interacting objects on terminal I/O.
> Example: piping a 1000-line text into /usr/bin/less, which allows
> scrolling up and down, is interfacing the text;
> piping it to /bin/more isn't.
> Example: broadcasting a song on radio is displaying the song,
> but not interfacing it;
> putting it on a tape that one can rewind is interfacing it.
Yes, I'd agree with all that.
> You can add criteria on the above objects
> for the computability/acceptability/complexity/whatever
> of displays and interfaces.
Right! I take this to mean, you /do/ explain how the object should be
represented/interacted with when you are designing/modifying it :-)
> > In the case of the first Complex class, how can we tell the interface
> > generator that humans like complex numbers written in "a+ib" format?
> By giving a display tactic for complex numbers,
> based on a reversible definition for a representation of complex
> numbers in the form a+ib that could also be used to
> generate an input tactic...
Uhuh.
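A toy version of such a pair in C++ - mine, and it round-trips only
up to printing precision:

    #include <complex>
    #include <cstdio>
    #include <string>

    // f: display a complex number in "a+ib" form.
    std::string render(const std::complex<double>& z) {
        char buf[64];
        std::snprintf(buf, sizeof buf, "%g+i%g", z.real(), z.imag());
        return std::string(buf);
    }

    // g: the reconstruction, reusable as an input tactic's parser.
    std::complex<double> parse(const std::string& s) {
        double re = 0, im = 0;
        std::sscanf(s.c_str(), "%lf+i%lf", &re, &im);
        return std::complex<double>(re, im);
    }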
> Surely, any knowledge in the computer must have been put inside by humans!
> This is not what I claim to change. No one can.
> What I'd want to change is the fact that currently specific interfaces
> are hardwired into object code,
> whereas I want generic interfaces to be defined apart from it,
> by a system of annotations to code and tactics to dynamically
> generate interfaces.
What I take this to be getting at is not that you're aiming for
"self interacting objects", but more for a philosophical thing in
programming style that we keep the interface separate from the
"gubbins" (innards). I.e., don't even think about designing the higher
level interfaces until you've made the object work from the most
basic and fastest interface of "a.b(x,y,z)" or whatever your
favourite syntax is!
> > How will you implement this interface generation thingy?
> >
> By reflective epsilon-calculus:
Oh boy.
[jargon snip]
Um... can you recommend a net-available introduction to
epsilon-calculus?????
> >>I could also talk about other things that traditional designs do ill,
> >>like building a wall between users and programmers;
> The wall I'm talking about is that which forces me
> to wait for Monkeysoft(R) to integrate the feature I need
> into Worse(TM) version 354.3226,
> or to rewrite everything from scratch myself with just my feature added.
> Only very proficient and rich people can afford serious software
> development on a closed system. An open system allows anyone to contribute.
Ah, an FSF-style openness. I agree!
> Finally, the fact that different languages are given to
> "programmers" and "end-users", each being crippled in many ways,
> constitutes a barrier between them.
Uhuh. Not that we want to bring back the days of BASIC as the OS CLI,
like on the old BBC micros - although that sure merged users and
programmers, the implementation was naff!
> As for security concerns, crippling interaction languages ain't
> a reliable way to protect: on a Macintosh, icons will never stay
> in place because there will always be someone to voluntarily or not
> drag the icon into another, possibly the garbage can. The solution
> is not in crippling, but in extending the language so as to allow
> explicit restrictions, such as "the icon must stay where I put it",
> or "only people with the right password can modify the heart of the system".
Right. You'll be comforted to know that I designed the ARGON
paradigm, then invented a security layer for it, rather than vice
versa :-)
ABW
--
Governments are merely protection rackets with good images.
Alaric B. Williams Internet : alaric@abwillms.demon.co.uk
http://www.abwillms.demon.co.uk/