Amiga and Acorn OS
Francois-Rene Rideau
rideau@clipper.ens.fr
Tue, 8 Aug 95 0:32:54 MET DST
[Note to Michael: I'm cc-ing this reply to the TUNES mailing list]
[Note to the Tunespeople: this mail is the sequel of an exchange with
Michael Hooker, who has his own personal OS project; with
Mike's permission, I could send the messages in this exchange to whoever
asks for them]
[Talking about bit streams as a fundamental abstraction]
Well, while I think I have demonstrated in my previous mail that
bit streams are not a universal paradigm for programming, I gladly
grant that they are indeed fundamental, and could well be the most
basic abstraction for the portable low-level part of TUNES, while
specific implementations would group bits from streams into machine
words of various sizes.
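To make that concrete with a little sketch of my own (the names and the
ML/OCaml notation are mine, purely for illustration, nothing settled for
TUNES): the portable layer would hand out bits one at a time, and a given
implementation would simply pack however many of them fit its machine words:

    (* Illustrative sketch: a bit stream as a thunk yielding one bit per
       call, and a word reader that groups n bits into an integer. *)
    type bitstream = unit -> bool

    (* Pack n bits, most significant first, into a machine word. *)
    let read_word (next : bitstream) (n : int) : int =
      let rec loop acc i =
        if i = 0 then acc
        else loop ((acc lsl 1) lor (if next () then 1 else 0)) (i - 1)
      in
      loop 0 n

    (* A toy stream delivering the bits 1,0,1,1 and then zeroes. *)
    let demo_stream =
      let bits = ref [true; false; true; true] in
      fun () ->
        match !bits with
        | [] -> false
        | b :: rest -> bits := rest; b

    let () = Printf.printf "%d\n" (read_word demo_stream 4)  (* prints 11 *)

An 8-bit, 32-bit or 64-bit implementation differs only in the n it feeds to
read_word; the stream interface itself never changes.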
[Talking about Amiga and Acorn RISC machines]
> I have had the fortune of using both to a great degree: firstly having
> owned and used Amigas (and having been employed programming them),
> and also because when I was in school
> I got to spend a great amount of time using the Acorn.
> It of course left the DOS/WINDOWS puke for dead,
> but didn't come quite up to scratch as far as the Amiga was concerned.
Ahem. I couldn't say which of AmigaDOS and RISC OS implemented which feature
when, but having seen RISC OS recently, I've found it quite as good as
what you describe for the Amiga: seamlessly integrated libraries and
concrete object types. [And the RISC machines rely on much cleaner hardware
than the Amiga, but let's focus on the software side.]
Whichever came first, if either did, is of no interest to our
respective projects.
[The Amiga Libraries]
> They were also to a degree language independent; though they are mostly
> implemented in C and thus often take and return pointers to structs, the
> actual library interface definition was provided in the form of a list of
> function names and the registers (CPU registers) in which the parameters
> were to be passed.
I particularly appreciate this parametrizable calling convention,
though I don't know how far it goes, or how RISC OS does it.
What I do know is that TUNES will have a completely parametrizable
"calling convention" that at the low level corresponds to memory and register
mapping, but at a very high level can be any kind of dynamic constraint on
the context in which an object is to be evaluated.
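To sketch what the low-level end of that could look like (my own toy
notation in ML/OCaml, with an Amiga-flavoured example; none of this is how
AmigaDOS or TUNES actually spells it): an interface definition is just data
mapping each formal parameter to a location, which a particular
implementation is then free to honour however it likes:

    (* Sketch: a calling convention as first-class data. *)
    type location =
      | Reg of string      (* a named CPU register, e.g. "d0" or "a1" *)
      | Stack of int       (* an offset into the caller's frame *)

    type entry_point = {
      name   : string;
      params : (string * location) list;  (* formal name -> where it goes *)
      result : location;
    }

    (* Amiga-flavoured example: parameters passed in registers. *)
    let open_library = {
      name   = "OpenLibrary";
      params = [ ("libName", Reg "a1"); ("version", Reg "d0") ];
      result = Reg "d0";
    }

At the high level the same idea generalizes: replace Reg/Stack by arbitrary
constraints on the evaluation context, and the compiler or interpreter
checks or implements them as best it can.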
> The Amiga was an extremely lightweight
> machine; many a time I have cursed my 486-100 for doing jobs slower than my
> 68020-14 Amiga ;)
I guess you're talking about response time.
> This is of course not the case when it comes to, say, compiling
> with GCC or compressing/uncompressing etc., but for most practical
> purposes it acted much like a micro- or exo-kernel in the respect
> that most functions you called would go directly to hardware.
Ahem. Micro-kernels are *slow*, because there is a wondrous number of
context switches. Exo-kernels are *fast*, because there is a minimal number
of context switches. The two concepts are just opposite to one another!
AFAIK, the guys who invented the micro-kernel are the most stupid CS bummers
I ever heard of: they concentrated all the overhead in one small place that
infested the whole system. On the other hand, the guys who invented
exo-kernels are clever: they completely freed the system from the system
call overhead.
> Just before the demise of the Amiga took place, many interesting
> developments were happening in the form of device-independent graphics.
Is the Amiga dead? I've heard a German company will revive it,
and intends a second generation of RISC-based machines after the 680x0 line.
> I know it's not the greatest machine on earth and it is far from any
> idealistic lines that I tend to think along these days. But as far as
> credit where credit is due, I think it represented the best of a dying age
> of the patched-together crap we see around us :)
Yeah, the Amiga was a nifty machine. I still see nothing against the Acorn
machines either. But nonetheless, whoever the credit goes to won't help us
a lot in deciding what to do in the future.
[About Unique IDs]
I am very much aware (I think) of the importance of and the need for
*unique* IDs for objects in the context of a persistent distributed
(potentially world-wide) system, and I agree with all the arguments you
developed in this sense. Actually, I have never opposed, and have always
proposed, that there be ways to seamlessly and reliably identify objects
in a unique way across the persistent distributed world-wide network of
computers.
Now, what I've long opposed is that there be a *flat* identification
space. I think that is just grotesque, for many reasons which now look
to me like the very reasons why C pointers make a lame language out of C,
while the LISP/ML way wins: it provides a clean model and allows adapted
implementations instead of forcing an unadapted one, thus guaranteeing
inefficiency while dumbing down generations of programmers.
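To caricature the difference with a tiny sketch (my own names, in ML/OCaml
notation, nothing definitive): what I object to is baking one flat
representation into the interface; an abstract reference keeps identity and
dereferencing in the interface while leaving each implementation free to
resolve it locally, remotely, lazily, or however it likes:

    (* Sketch: object identity as an abstract type, not a flat pointer. *)
    module type REFERENCE = sig
      type 'a r                 (* abstract: could be a local pointer, a
                                   (host, key) pair, a content hash... *)
      val make  : 'a -> 'a r
      val deref : 'a r -> 'a
      val same  : 'a r -> 'a r -> bool
    end

    (* One trivial local implementation among many possible ones. *)
    module Local : REFERENCE = struct
      type 'a r = { id : int; value : 'a }
      let counter = ref 0
      let make v = incr counter; { id = !counter; value = v }
      let deref x = x.value
      let same a b = a.id = b.id
    end

Client code written against REFERENCE cannot depend on the representation,
which is exactly the freedom a flat identifier space throws away.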
> I think
> the biggest problem with Self is primarily its reliance on a SPARC with 48
> megs of RAM.
Now I know why I couldn't successfully run it!
> A Self-like environment could be far more fine-grained and modular.
See the Merlin project; isn't that right, Jecel?
> The only other idea that I see as imperative is the ability
> to program in a retrospective sort of way. That is, if one
> particular object forms itself as a result of other objects,
> it is only natural that changes to those objects
> flow through to the object.
Functional languages have other names for this: referential transparency,
etc. While I do not deny the interest of the way such things are implemented
in languages like Self, I want to point out that before we can develop an
idea in the right direction, we have to understand it well, and to
crystallize our intuitions about it in a well-settled theory (if possible),
which will make things clear and simple.
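For instance (a toy of my own, in ML/OCaml notation, and deliberately
simplistic): if the derived object is kept as a rule computed from its
sources, rather than as a copied snapshot, the "flowing through" you
describe comes for free:

    (* Toy sketch: a derived object as a function of its sources. *)
    let width  = ref 4
    let height = ref 3

    (* "area" is not a stored number but a rule; reading it recomputes it. *)
    let area () = !width * !height

    let () =
      Printf.printf "%d\n" (area ());  (* 12 *)
      width := 10;
      Printf.printf "%d\n" (area ())   (* 30: the change flowed through *)

A real system would of course memoize and propagate changes more cleverly,
but the semantics to get right is the one above.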
> Not messages or functions: another thing that I wish to make clear is that
> there are different ways in which we may wish to view objects other than as
> messages and/or functions.
Exactly! My opinion is that objects should be defined by their semantics,
and that we may choose to view them in any way we prefer (i.e. through
morphisms) so that the view fits our needs.
> A dataflow model such as those found in the Prograph language
> or the Cube language best represents what our smallest of components in a
> computer actually do, which of course has nothing to do with a procedural
> list of algebraic computations but more directly with ones and zeros passing
> through ANDs and ORs and NOTs etc.
I admit I had never heard of these languages. Have you got any pointers?
> Such a model can easily represent
> messages and functions and thus takes the level of flexibility another step
> further. The way in which I would see this implemented is by attaching the
> readable contexts of an object to the writable contexts of another.
I'm not sure what you mean by readable and writable contexts, though
I think that some time ago (before I knew about functional languages) I
could have said such things. I think what you mean is expressed much more
simply by function composition, from a functional point of view: the object
with the readable context is the one applied first, and the one with the
writable context is applied to the result; isn't it? Again, this is a
matter of point of view/vocabulary.
Now, to me, the question is: what kinds of constraints are expressible
on function composition? For instance, C cannot express any constraint,
and not only is its type system trivial, but it almost forces the linear
memory model.
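In that vocabulary, and again only as a sketch of my own (ML/OCaml notation,
toy functions): attaching the readable context of one object to the writable
context of another is plain composition, and the interesting part is what
the type system lets you say about when the attachment is legal at all:

    (* Sketch: "attaching" two objects is function composition; the types
       constrain which attachments are even expressible. *)
    let compose f g = fun x -> g (f x)

    (* The readable context of parse is a string; its result feeds eval. *)
    let parse : string -> int list =
      fun s -> List.map int_of_string (String.split_on_char ' ' s)
    let eval : int list -> int = List.fold_left ( + ) 0

    let run : string -> int = compose parse eval

    let () = Printf.printf "%d\n" (run "1 2 3")   (* prints 6 *)

Composing parse with something expecting, say, a float would simply be
rejected by the type checker; that is precisely the kind of constraint on
composition that C cannot express.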
> This I
> think summarises more clearly what I intend to do as far as a computing
> language/environment goes. I have been experimenting using C and C++ over
> the last year or so as regards how I can best represent this model on top
> of both my Amiga and Linux environments, and
Well, again, I prefer talking in terms of semantics rather than syntax.
So what kind of expressivity do you want to have? I want to allow the
widest possible expressivity.
> hopefully eventually as a stand-alone machine with
> its own custom processor, which I believe would create a far simpler,
> faster, more parallel model that will grow far more rapidly than conventional
> procedural algebraic processors.
If you intend to develop custom hardware, I suggest you have a look at
the MISC processors and mailing list: *cheap*, *fast*, **simple** hardware,
which is IMHO the bright idea that shall prevail once people understand how
unadapted the old Von Neumann model is to technological reality, and how they
lose 99.9% of their power and money emulating this stubborn model with
bloated hardware.
> What I really liked most about the TUNES project is that it represents much
> of what I was thinking about when I was starting this project. And thus I
> believe that you would probably draw closer and closer to the conclusions
> I have drawn as time went by. To that end I would be interested in hearing
> about possible help in getting it running by itself without the crappy
> overheads of an OS. I appreciate your responses to my mail as you are
> keeping me thinking where I had got generally disheartened by it all
> (Microsoft, C, etc.) and you have kicked me back into more of a doing mode.
I too appreciate exchanging ideas with clever guys like you, and this
is also why I created the Tunes mailing list. Would you mind continuing this
discussion on it? Shall I add you? You know, there are other people there
not directly participating in the project...
More later...
-- , , _ v ~ ^ --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
-- ' / . --
Join the TUNES project for a computing system based on computing freedom !
TUNES is a Useful, Not Expedient System
WWW page at URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"