From: rideau@clipper (Francois-Rene Rideau)
Message-Id: <9303060145.AA18855@clipper.ens.fr>
Subject: ORG,KER+ far12
To: winikoff@mulga.cs.mu.OZ.AU (Michael David WINIKOFF)
Date: Sat, 6 Mar 93 2:45:44 MET
Cc: mooser-programmers@sfu.ca
In-Reply-To: <9303041213.1241@mulga.cs.mu.OZ.AU>; from "Michael David WINIKOFF" at Mar 4, 93 11:13 pm
X-Mailer: ELM [version 2.3 PL11]

Here's my reply to Michael's reply.
He flamed me; I somewhat deserved it, as I called for answers, even flames.
I also must clarify (and sometimes change) my point of view to take his
arguments into account.


>> > >> > Giving device drivers their own priority -- good.
>> > Ok, but WHO (what processes) can set up device drivers ?
>> > if anyone can call its own local driver, there's no more security. See (1)
>> This comes back to security -- any comments/suggestions?
>> The concept of a superuser is a relatively simple (if rather insecure) way
>> of doing things.
>> Of course we'll have to introduce the notion of process ownership ...

I prefer the more general 'if you know its ID, you can use it' philosophy.
Then you can have a 'user' object whose list of known objects is limited to
certain objects, and to only some aspects of objects it does not entirely own.


>> Alternatively we can decide that this is a single user system so anyone logged 
>> in from the console is "safe" ...

Well, we should have a superuser console; but the common user shouldn't need
to use it in everyday life if the system is stable (even if he has access to it).



>> > I know - tell me more if you know others). That's why the system should offer
>> > many different capabilities of allocating memory, from raw reservation of
>> > contiguous physical memory (to device drives) to garbage collecting-aware

>> NO! The kernel should offer a single simple and (hopefully) sufficiently 
>> general method that is used by all.
>> Why? It is not possible to anticipate the needs of all languages so we should 
>> make the system "extensible" in some way -- given that this is done it is
>> a waste to stuff some of the language specific memory allocation into the
>> kernel.

Hey! Who told you the Kernel should do that? Of course the Kernel won't
provide any memory allocation routine; it will only branch you to the memory
device you need (only a basic device being available on a basic system).



>> [Stuff about us copying Unix and how bad MS-DOS and MacOS are deleted ...]

>> (1) There's no need to go around insulting DOS and Mac -- we all know DOS's
>> shortcomings and the Mac was a good idea at the time.
>> (besides -- I do know some computer scientists that love Macs)
(I don't hate Macs; I like them, but an unextended Mac is only an
unprogrammable toy, and the development kit is much more awful than the
standard user interface).

>> (2) The reason why Unix is so popular is that the original (small) Unixes 
>> had a number of good ideas (that tend to be taken for granted these days)
>> [Eg. device independant I/O]
>> Unix DOES have weak areas -- I see nothing wrong though in copying Unix's strong
>> areas.

What I wouldn't want us to copy is not just one or another of those OSes'
flaws, but above all their philosophy, which leads to what we already have. I
think that if you follow the Unix philosophy, for example, you won't do better
than a twenty-year-old, still-evolving product on which many a group works.

>> > There we come to an important point: error handling. Of course, old OSes
>> > (stands for Old Shit ?), being based upon C, couldn't integrate this concept,
>> > neither could they understand anything sensible about HL programming (nor
>> > LL programming with respect (?) to DOS - Double Old Shit (OS/2 being half-old
>> > shit). Recent language like ADA, later version of (true) C++ and CAML do
>> > include exception handling, and many HLL I don't know certainly do. I like
>> > very much that of CAML (not knowing the others - tell me - 't'should be
>> > mostly taken from the same language as for C++, but CAML should be better
>> > because of automatic genericity as opposed to C++'s template hack). The
>> > Kernel should include exception handling as a standard, so that objects
>> > can exchange not only usual data, but also exceptional data (notice that
>> > these language do not allow embedding exception in data itself by declaration
>> > of exceptional format as should accept the language I vouch for (of course,
>> > their are ways to obtain equivalent results, but then why not repeat we all
>> > use Turing Machine equivalent languages ?)

>> We're designing an OS - not a language.
 Yes, but to me the OS (not only the Kernel, but the OS as a whole) should
provide languages with everything they may need. If the system is as passive
as Unix, you only force app writers to do the stuff you refused to do. The
result will be less efficiency or more effort from the app writer (if he is
willing to optimize things), and no communication possible between apps, as
each had to redesign its own features.
 To me, OS and language issues are linked.
 That's perhaps an essential point to agree or disagree on.
 I propose we have a vote on it (see the end of the letter)

>> I'll come back to this point again, I think you're trying to throw a lot
>> of what properly belongs in a language into the kernel -- thereby imposing
>> a uniform object oriented view on all users of the OS.
>> 
>> Personally I will disassociate myself from MOOSE if we decide to enforce
>> OO programming on everything ... I feel that OO is overhyped.
>> 
>> Personally I'd much rather use a Very-High-Level-Language like Haskell --
>> OOPLs are based around imperative languages and are lower level as a result.
>> 
>> > 
>> > (BTW, who do have read my HL specs, and what do you think of it ? Do flame
>> > its flaws, you're welcome, but do encourage what pleases you in it too.
>> > Do not hesitate to ask for more precisions)
>> > (NB: CAML is a particular version of ML integrating imperative programming
>> > as well as declarative one; we work with it at the School (Ecole Normale
>> > Superieure) in its CAML light 0.5 implementation by Xavier Leroy & Damien
>> > Doligez; it's available for example at ftp: nuri.inria.fr - neither place
>> > nor time to tell more about it here; unhappily, the syntax being too
>> > concise, it is very dirty, as opposed to lisp's lots of insipid and
>> > stubborn parentheses).
>> 
>> I've used SML for a moderately large project ... my personal opinions:
>> (1) SML's syntax is *HORRIBLE* -- it's got all the redundancy of Pascal ...
>> Haskell's or Miranda's syntax is HEAPS nicer.
Of course ML has a horrible syntax, but it has interesting characteristics to
implement if we are to facilitate language implementation and (more
importantly) standard inter-language communication.

>> (2) Exceptions should be reserved for programming in the Large and possibly 
>> removed -- trying to isolate an exception is a major pain.
>> (3) Ditto for side effects -- one of the bugs that I had to isolate involved
>> a function that modified its argument being called without first making a copy
>> of the argument

>> Objects:
>> > >> > I like the approach of simply defining a standard format.
>> > Well, if not, this would no more really be OO'ed, and be another DOS, waiting
>> > to be (VERY quickly) obsolete !
>> > OO standardness is compulsory. To be in advance with other existing OS
>> > projects, we must also include NOW genericity (for the which C++ templates
>> > are only a hack) and logical programming (the most common use is find how
>> > to find the "best" path to transform a virtual object into another, knowing
>> > the elementary virtual transformations available and their physical cost).
>> 
>> Yet another OO fanatic ... :-)
>> I repeat -- objects are not magic ...

They're not magic; they're a fast data AND CODE interchange protocol.
Sometimes I HATE the limitations OO programming puts upon my code (see Borland
products and their OO coding); but then I see it is because of an
implementation which allows only one coding for the same object; compiled
private objects should be optimized, unlike public objects, so that the
limitation disappears.


>> > >> > Should we have spinlocks too?
>> > On multiprocessor systems, of course, but that's a kernel implementation
>> > concern; let's not mix specs and its impl' again !
>> 
>> Yes -- however if we neglect them then the kernel on a multiprocessor will
>> have a different interface to the kernel on a monoprocessor.
>> I suppose this would be inevitable anyway though .. :-(

If I understand correctly, a spinlock follows the same principle as the whole
task-managing stuff: one at a time has access to the resource; so having a
sub-task manager should be the same as having a spinlock. If not, can someone
explain in detail how spinlocks work?


>> > How complicated !
>> > Again, let's program it by layers. The lowest (kernel) layer does as Andreas
>> > says. Then, you can have a filter monopolize the lower-level resource to emit
>> > events on a queue, then if you want, mix that queue into a general event queue
>> > for stubborn processes to un-multiplex the global queue, as stupid current
>> > systems do. YES, you CAN do it ! But once you see there are simpler, easier
>> > means to handle data, by piping data just where you want, you WON'T use the
>> > centralizing algorithm. Everything is easier, neater, quicker, better, when
>> > objects just fit one into the other.

>> Think of it as a server object for timed events.
>> BTW I/O events have nothing to do with this -- the kernel could easily send
>> them directly to the correct address when they are generated.
That's what happens, but the address changes dynamically with the environment.

>> I feel however that this addressing should be centralised rather then be
>> scattered among the various device drivers
>> (1) It means less repeated work in the device drivers
>> (2) It's simpler to keep track of and maintain.

(1) Less repeated work: no, as in both systems every event is managed once by
just the proper code (so the distance between interrupt and catcher is
smaller, or equal in the worst case) (common standard libraries can avoid
redundant coding).
(2) Yes and no; by separating unlinked events, we may allow separate
debugging, etc. Both systems are equivalent to a filter; but (partly or
wholly) centralizing from this system is cheap/immediate, whereas the converse
is slow.


>> > Of course again there are several layers ! Every program uses just the
>> > one it pleases. Every layer has a standard interface; every interface can
>> > be filtered for programs not to interfere each with the other.

>> The existence of multiple layers hasn't been mentioned yet.
(there, it has :-) (what do the others think about it?)
>> I don't consider this obvious or even desirable for efficiency reasons.
>> Each layer of indirection multiplies the inefficiency of the previous layers

Hey, but we're not in a DOS chaining where every layer must intercept the
previous one and call it afterwards; each layer can be called directly, and if
it uses a sublayer, that should not be slower (only possibly quicker) than if
an app has to do it itself. A layer does not necessarily mean a filter, and
when there is a filter, it can be optimized at compile time to short-circuit
calls when possible (so there is no delay in communication unintercepted by
the filter).


>> > Well, there's a difference between supporting a language and have it as a
>> > primary language. ANY system can run ANY language as long as it's powerful
>> > enough. You CAN run AppleSoft BASIC on a Unix WorkStation; but you just WON'T,
>> > because it's without any interest but historical. That should be the same for
>> > MOOSE and C/C++: of course you can still use C, but you won't be able to
>> > -directly- use all its power with such a dirty LL language, and you will have
>> > to include many a library to interface both. Moreover, standard LLLing allows
>> > easy implementation of ANY language you want, by providing a fore part of a
>> > compiler from the new language to any (combination of) existing standard layer
>> > of THE PL.

>> And there's a difference between supporting (allowing to conveniently interface
>> to OS services) and being able to run by writing our own abstract kernel 
>> interface ...

YES, and that's why we really should support ANY language to achieve perfect
communication between ANY programs.



>> > MOOSE should be Pee O aRe Tee A Bee eL Ee. However, we all come to have 386
>> > PC's, because cloning has made these stupid computers CHEAP and STANDARD.
>> > But, we'd love  to be able to run our new standard OS on an even CHEAPER,
>> > NON-STANDARD, BETTER computer (like, say, the Amiga or RISC machines).
>> 
>> I hate to say this but the Amiga isn't better -- the A600 and A1200 are based
>> around antique processors (68000 and 68020 respectively) and don't have an MMU.
>> Note that to the best of my knowledge NONE of the Amigas have MMUs.

It depends on the PC and the Amiga; but I'm not going to have a flamewar
on this subject in this mailing list.

>> The good thing about Amigas is the operating system 

YES, and the copros (and the fact the OS runs perfectly with the copros)



>> > >> > Persistent objects subsume both processes and files.
>> > >> > Getting rid of the concept of the files is (IMHO) a strong forward step.
>> > see what I said above about Unix, files, and objects.
>> > 
>> > >> > One idea which may simplify swapping is to use a single address space across
>> > >> > all objects -- possibly even the one corresponding to where they are stored
>> > >> > on disk.
>> > >> > Of course most of the address space will be inaccessible ...
>> > >> > 
>> > >> > The main advantage of this scheme is that it makes shared memory simpler to 
>> > >> > program with -- otherwise pointers have to be avoided since the addresses
>> > >> > will differ from process to process.
>> > >> > 
>> > >> > [But see arenas later]
>> > Again, that's for implementation eyes only. See (1b)
>> 
>> Nope -- it affects the semantics of shared memory that will be visible to 
>> appplications.
What is so transcendental about sharing memory?
If you know the object's intimate ID, you can read/write directly to it; if
many know it, they all can. That's all, no comment. Is there something great
about it? Sharing memory should be but a restricted aspect of sharing and
exchanging objects.


>> > >> > (8) Microkernel:
>> > >> > 	Seems to be the way we're heading. Good.
>> > 't'should be able to run even on my HP28 ! Of course, you won't have any
>> > tools, then, hardly a few devices for the simplest I/O.
>> 
>> No -- Microkernel doesn't refer to the final system size.
>> It refers to the system architecture.
>> A microkernel design simply means that the kernel provides a minimal set
>> of services (Eg. memory management, multitasking and IPC) and other
>> system services (EG. file systems) are implemented outside of the kernel.

I misinterpreted our comrade's words. So much for me! But I'd still like the
Kernel plus a few simple devices to fit on an HP calculator!



>> > Hey, this has little to do with an OS spec: Very Unportable! Let's leave this
>> > to LLL implementation(s): sure it can be used, but we don't use assembler hacks
>> > -only- for the fun of it, but also to find HL requirements. What would 680x0
>> > programmers say if they heard you with these 386 tricks?
>> > There's only one thing at HL: you may want to allocate an object, and you may
>> > want to publish it, so that others can see it where you put signs.
>> 
>> Caught out on this one -- I have never programmed a 386 and i have programmed
>> a 68000 -- I guess that makes me a 680x0 programmer :-)

>> A problem that has to be handled when using shared memory is that pointers
>> into the shared memory region are not normally valid across processes and
>> hence one cannot store data structures in the shared memory region that use
>> pointers -- this differs significantly from the semantics of normal memory.
>> 
>> My suggestion ensures that shared memory has (in this respect) the same
>> semantics as normal memory.

But this forces objects to be aligned on MMU pages! That limits OOing too
much; it should accept objects of any size (most of which will be tiny, as
subparts of larger objects).



>> > >> > (13) Should we be providing threads? (I don't think so but am open to debate)
>> > what are threads/tasks but little/large objects ? Why build such a distinction

>> Threads share data space whereas tasks/processes don't.
>> Tasks are objects.
>> Threads would correspond to concurrent method activation within an object.
>> Since the data is shared between the threads locking becomes important.

Again, that seems a totally arbitrary distinction: you always share something
with someone (or else the process has NO interest); UI programs would like to
share windowing memory with the main UI device; piped programs can happily run
with a shared-memory pipe, etc.; there's no limit to sharing little or big
objects. Conversely, you'll never want to share everything (at least, you
won't have the same stack and/or registers). So threads and tasks are not
absolute concepts, but relative ones: if you share memory with another one,
you're a thread to it; if not, you're a task! That becomes ridiculous. If Unix
created two words, it's because of a badly fixed Unix misconception, not
because of an improvement.



>> > * about DOS FAT systems:
>> > Some talk about reading/writing it. Of course, this is obvious on 386 based
>> > computers. But DOS FAT is so @#$^#$%^& that no sensible people would willingly
>> > use an OS with it as a main FS. Whatever FS is chosen to work on, this should
>> > be included as a device driver, not in the Kernel.
>> For the moment DOS compatibility is a definite plus.
Yes, and in the future, OS/2, Linux, * compatibility too; and an emulator
would be even greater; but this has nothing to do with the Kernel, and DOS FAT
shouldn't be our primary FS (which does not mean we cannot have a generic FS
running inside a DOS FAT system, like any other FS)

>> > * Could one explain a poor ignorant frenchman the joke about Moose ?
>> > ('heard you talk about a Mr Moose; who was it ? )
>> Some american cartoon ... :-)

Is there nothing more about it ?


>> > * dmarer:
>> >  Keep the Kernel as pure as possible: OK.
>> >  Allow non-OO programming: ??
>> >  - if it means device drivers are not bound to be OO clean, and heavy
>> >  computation need not look up multiple method tables at each iteration, ok;
>> >  but if it means there is not a standard compulsory class hierarchy
>> >  from low level raw data classes to high level virtual classes for system
>> >  calls, I don't agree anymore !

>> Again -- you're forcing any users of the system to use an OO language.
>> I don't like that.
NO, I'm forcing the INTERFACE to be OOed; for computations and all that, you
may do as you please: C, assembler, Fortran, whatever, there is no problem; we
could even develop compilers for these which would give public access to
non-OO variables through OO methods (for example, what inside compiled code is
an integer array will appear externally as an integer-array object); thus, you
won't even have to think about interfacing; it will be done automatically when
asked.


>> If you want an OO only environment, I suggest you get hold of something like
>> Smalltalk, Actors or Self -- all of which are pure OO languages which provide
>> an OO based environment.

I DON'T want an OO-ONLY environment, but an OO-CAPABLE environment, where
independent apps can nevertheless interchange data without the poor user
having to write a data-format translation program each time (provided he is
proficient enough, has sufficient documentation about the coding, efficient
algorithms to decode it, tricks to implement those algorithms, etc.).


>> >   That's a general method for protection, which is after all only a "view" of
>> > sharing objects. To enhance this, you may add a key to names, so that an
>> > aggressive program cannot pirate you because you use common names.
>> That's what capabilities are.

Uh ... what are you calling capabilities ?



>> >  In fact, I think the Kernel Set should be exactly the C/ASM coded methods for
>> > handling low-level objects: tasks/thread/procedures (executable code in
>> > general, including stack management, and exception handling), memory
>> > allocation, object typing, virtual pointers and naming (including subnaming
>> > and polymorphism, linking and unlinking object to the running system (imagine
>> > the coding/decoding needed to load/save an object from the running session
>> > from/to mass memory). Nothing more.

>> Nothing MORE?!
>> I do not think that object typing, polymorphism etc. belong in the kernel.
>> Object typing should be done at compile time.
>> If you want an interpretive system, that can be written; but you shouldn't
>> FORCE the system to do interpretation.

If we are one day able to have a standard interchange format for data & code,
it is compulsory to include it in the base system (I'm not talking about the
Kernel only, but also the set of standard devices available). If we are to
quickly interchange code, we cannot afford compiling an HLL program each time;
so an LLL is good; if the LLL CAN be interpreted (but is also aimed at
compiling), it is all the better, as it saves a great amount of time for HL
routines (that's infinitely more efficient than, say, a shell & awk script).
I'm not trying to impose an interpretation-only system. I'd hate that. But I'm
also claiming that writing the interpreter before the compiler is easier, and
we don't need lightning speed to first boot the system. When the system is
stable, we can begin compiling everything.



>> To summarise my comments:
>> 
>> The main problem is that you are confusing language and OS issues.

I don't think I'm CONFUSING them, but I sure am linking them.
I'd like the others to bring their viewpoint upon this critical point.
Support me, question me, criticize me, flame me; but please reply as
Michael did.


>> An OO operating system does NOT mean (IMHO) that all the support for OO
>> languages should be built into the OS (and certainly not into the kernel)
>> 
>> To me it means that the OS should provide some operations that make it easy
>> to implement persistent objects.

What do you call "some operations" and "persistent objects"? To me, you have
no OOed OS if every language is going to completely reimplement its objects;
you only have an OS with an annoying OOed interface (as with the Mac in many
respects).


>> * You should allow the freedom to do things at compile time for efficiency
>> reasons.

I never thought otherwise.


>> * You should allow the use of non OO languages -- otherwise you don't have a
>> generic OS -- you have an OO programming environment.

I have the same answer, and I have spoken about that just before.


>> * The kernel should be minimal. It WILL have a resemblance to Unix in some areas
>> simply because in some areas Unix has done things in a good way.

Yes, but it won't become a Unix clone as long as we don't just follow the Unix
philosophy.


>> Note that objects come ABOVE the kernel.

Above the KERNEL, I don't know. But IMO certainly not above the basic system
(kernel+device) set.


>> You don't want to have the kernel enforce complex semantical policies.
>> By lifting such things up into the compilers you gain in efficiency and, more
>> importantly, in maintainability.

By having (simple) semantical policies enforced, you can at last have a
transparent system where apps can truly communicate with each other (and not
only merge results, as most "advanced" current systems merely do).

>> Requiring the kernel to do polymorphic type checking and inheritance is like
>> requiring an OS to run structured code ...

Uh ?


>> Michael
>> winikoff@cs.mu.oz.au
>> 
	   ,
	Fare
	(rideau@clipper.ens.fr)
	200% time moose programming froggy
	(but I never ate any frog's leg)


----------------------------------------------------------------------------
P.S.
 By now, we should be able to have a decision-summarizing process. I'd like
Dennis (or someone else) to be a referee for our major decisions, and keep
up-to-date vote reports on important questions.
 The first thing is to agree on the question's formulation. The moderator will
put the formulation (or formulations) he and the main parties think best
defines the problem. Then he will list the solutions proposed, even silly
ones, with unanimously agreed comments about these solutions. Then will come a
list of raw votes (including haven't-voted or neutral lists). Then will come
each one's opinion. Finally, the official current MOOSE position will be
stated. As a note, a list of messages detailing each one's view on the problem
can follow.
 Well, something like this should be done; now, I certainly haven't made the
best choices. Do not hesitate to propose changes, in the form as in the
contents.

----------------------------------------------------------------------------
VOTE #1

PROBLEM:
 What's the OO part of the system to you ?

POSSIBLE ANSWERS:
 1) Nothing. The OS simply shouldn't be OOed at all.
 2) The interface only. The OS is divided into objects which you access
 through methods. Programs do as they wish and we trust them to be OOed
 (that is, you can dream!). The interface will then only be a brighter
 but slower interface to a standard OS.
 3) Compile-level standard. There will be a HL PL standard; each time
 you add a new object class to the system, you must recompile it and
 add a device driver. If there is a major incompatibility with previous
 classes, the whole set of incompatible classes will have to be compiled
 again. Code and data won't be able to be interchanged together. Objects
 will have to be expressed in the standard HL PL to run, so that there
 will be HLL to HLPL compilers as there are HLL to C compilers under
 unix.
 4) HLL standard with opaque implementation. Almost the same as before;
 but the objects are implemented at system level. However, the way they
 are depends on the system implementation and is opaque to the normal
 user/programmer. Code and data will be interchangeable together, but
 only on the same kind of hosts; you will have to recompile everything
 and translate data when you change hardware.
 5) LLL standard. Low-level here means near to compiling (interpreting)
 requirements, not that objects aren't accessible. Objects are
 accessible through standard LLL features. LLL can be quickly compiled,
 or even more quickly interpreted if for a few uses only. LLL has a
 standard format which allows quick common interchange of code & data
 between different hardware supporting the OS. You don't need slow
 HLL to standard HLPL compilers, only easy-to-do HLL to LLL compilers;
 a good LLL compiler will then be a common back-end for every existing
 language. Developers can concentrate on optimizing it. Equal
 efficiency at compiling and interpreting is achieved through
 language redundancies.

VOTES:
(numbers can be reallocated, as long as the whole file is updated)
 - 1)
 - 2)
 - 3)
 - 4)
 - 5) Fare' Rideau
 - not concerned)
 - haven't voted) everyone else !
	Andreas Arff
        Gary Duzan
        David Garfield
        Ross Hayden
        Dennis Marer
        Rob McKeever
        John Newlin
        Dan Odom
        Michael Winikoff
        Peter Mueller
	('hope nobody was forgotten)


OPINIONS:
 - Fare': To me, OOing should be what makes MOOSE fundamentally new and good.
 It should at last enable data & CODE interchange (as data is nothing without
 its caretaking code)
 
CURRENT MOOSE POSITION:
 Too few have expressed their opinion for a Moose position to be taken.
 Moreover, Dennis hasn't taken part in the debate yet.

MESSAGE POINTING:
 began in message ORG,KER+ far12
 Fare's opinion in far12
-----------------------------------------------------------------------------