Processes and Communication
Fri, 5 Mar 93 23:15:25 MET
(in reply to Michael Winikoff - with one ``n'' only)
>> > >> Some issues are sufficiently important that you can't abstract 'em.
>> > >> Each process having its own address space is possible on all MMU-equipped
>> > >> systems.
>> > MOOSE should be portable enough to be ported to other architectures than MMU
>> > based, be it older or newer (to be) architectures. Of course, on a non-MMU
>> > computer, you will be forced not to trust code that wasn't compiled (or
>> > interpreted) on the local host.
>> This is a matter of specification -- we have to decide whether or not we
>> would like MOOSE to run on non-MMU machines.
>> I think not -- most (all?) new machines have MMUs and memory protection is
>> quite useful for a programmer.
>> (It's also ESSENTIAL for a secure multi-user system)
>> Any comments from anyone else?
(that's always the good'ole question: what is in spec's and what is in impl')
>> > >> I am still opposed to having the system based around an interpreter.
>> > >> Sure, it simplifies the solutions to some problems, however the efficiency
>> > >> cost is prohibitive.
>> > I don't think it is prohibitive at all, provided heavy computation is compiled.
>> > IMHO, you don't need to have optimized code to properly run the system: OF COURSE
>> > TIME-CRITICAL ROUTINES MUST BE COMPILED, either system or user routines; but
>> > these can be more easily compiled from low-level code; so we come up with a
>> > LLL standard: the system (and/or the user) chooses when to partly compile or
>> > interpret code. HL routines often gain nothing from compilation and may lose
>> > efficiency and genericity, even speed (see my interpreting by RET, quicker
>> > than any CALL sequence). Whereas interpreting is very cheap in memory,
>> > portable, easy for security, communication, extension. With a little redundancy,
>> > LLL programs can be as easily compiled as interpreted, so there's no need to
>> > fear a specialized code which would fit either compiling or interpreting
>> > but not both. The interpreter is also a very useful tool for system booting.
>> > Once the system is stable, it will be time to compile. What I mean is we don't
>> > need that much speed NOW.
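The "interpreting by RET" claim above refers to an assembly-level trick: handler addresses are pushed on the stack and every handler ends in RET, so dispatch costs no CALL at all. Portable C has no direct equivalent; the closest idiom, sketched below with a hypothetical three-opcode machine, stores a program as an array of handler pointers and dispatches them in a tight loop:

```c
/* Token-threaded interpreter sketch.  Opcode set and VM layout are
   hypothetical illustrations, not the actual MOOSE LLL design. */
typedef struct VM VM;
typedef void (*Op)(VM *);                 /* a handler IS the opcode   */
struct VM { int acc; const Op *pc; int running; };

static void op_inc(VM *vm)  { vm->acc += 1; }
static void op_dbl(VM *vm)  { vm->acc *= 2; }
static void op_halt(VM *vm) { vm->running = 0; }

/* Dispatch loop: fetch the next handler address and call it.  The RET
   trick replaces this indirect call with a plain return instruction. */
static int run(const Op *prog) {
    VM vm = { 0, prog, 1 };
    while (vm.running) {
        Op op = *vm.pc++;                 /* fetch */
        op(&vm);                          /* dispatch */
    }
    return vm.acc;
}
```

Note how cheap the representation is: one machine word per instruction, and the "program" is directly the list of handler addresses, which is exactly the redundancy that makes later compilation easy.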
>> Mixing interpreting and compiling creates additional complexity.
Additional to what? Do you think the Unix system with its thousands of languages
(C, sh, csh, awk, perl, ed, ...), each having a totally separate implementation
(i.e. redone entirely, with all the subsequent redundancy), is simpler? Not to
talk about DOS, where no language is standard, only executable code.
Complicating the base system a very little bit, and adding at the same time a
useful debugging and/or everyday programming tool, sounds sensible to me. What
do the others think? What does Dennis think?
>> I would like to think that this issue is orthogonal to (i.e. independent of)
>> the OS.
Again, if the OS brings but another hardware interface to comply with, what's
the use of it: we already have plenty of common-featured OSes, with plenty of
processes and IPC. We mustn't think of system OO features as a diktat forcing
you to use an OO language and OO programming; they should be a new freedom, not
new chains: now programs should be able to interchange data AND code at the same
time, dynamically, and without having to recompile everything to add just a
single simple quick routine. Programs are not bound to use the system's OO
techniques (at least not internally; standard run-time libraries will be
provided for every existing language to interface the system), but at last they
CAN use them, instead of having to reprogram them entirely and run apart from
the other apps you would have liked to share code with.
>> The benefits of interpretation that are claimed are:
>> (1) Code size
>> (2) Flexibility
>> (3) Portability
>> (4) Security
>> (5) Communication
>> (6) Extension
>> I don't feel that code size is such an issue.
That's not an issue, but a pleasant extra!
>> Portability can be catered to at the language level -- I don't see any advantage
>> to providing portability by having different machines emulate a common abstract
>> machine.
THAT'S language-level portability! It should be low-level enough to be compiled
efficiently, high-level enough to allow Inter-Object Communication. The impl'
should include the redundancy required for quick interpretation.
>> Security is irrelevant -- as long as you allow the existence of compiled
>> code you have to provide memory protection to have security anyway.
Nope, not if you're sure the compiler put in the required checks (not too many
checks, just what is needed); that's why a compiled program must have been
verified by the local host before execution (which ultimately sums up to a
popup menu asking the user his opinion).
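To make the "compiler puts in the required checks" point concrete, here is a minimal sketch of the kind of bounds check a trusted local compiler could insert before every memory access of untrusted code on an MMU-less machine. The `Sandbox` type and the `sandbox_fault` hook are hypothetical names standing in for whatever the real system would provide:

```c
/* Software memory protection sketch: each untrusted module gets a
   region it may touch, and every load the trusted compiler emits for
   it goes through a check.  Names and layout are illustrative only. */
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t *base; size_t size; } Sandbox;

static int fault = 0;                    /* set instead of trapping   */
static void sandbox_fault(void) { fault = 1; }

/* The compiler rewrites each raw load into a call like this one (or
   inlines the comparison, which is the "not too many checks" part:
   provably in-bounds accesses need no check at all). */
static uint8_t checked_load(const Sandbox *s, size_t off) {
    if (off >= s->size) { sandbox_fault(); return 0; }
    return s->base[off];
}
```

The point is that the protection lives in the emitted code, not in hardware, which is why only locally compiled (hence locally verified) binaries can be trusted on such a machine.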
>> A side note, having a machine-independent "assembly language" is a good idea.
>> The main problem is that we have to ensure that
>> (1) It is implementable efficiently on all machines we would be porting to
>> (2) It doesn't make operations which are supported by the machines impossible
>> (or inefficient).
If the LLL were too low-level, it wouldn't be portable anymore.
>> Ie. if machine A has an efficient way of doing X which machine B doesn't have
>> then a global LLL will force us to ignore the efficient way of doing X.
On the contrary, the LLL would give a standard means to access the resource on
both A and B!
>> For instance, if machine A supports array indexing but B doesn't, then the LLL can
>> either offer array indexing and be inefficient on B, or not offer array indexing
>> and have the compiler for A recognise a sequence of instructions
>> as representing an array reference which can be efficiently compiled.
As it is easier and safer to translate high-level objects to low-level ones than
the converse, I'd say the LLL should manage HL objects.
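A sketch of that point, with hypothetical opcode names and assembly syntax: if the LLL keeps array indexing as one explicit operation instead of a pre-lowered add-and-load sequence, the backend for machine A can map it straight onto its indexed addressing mode, while the backend for machine B expands it, and neither has to guess at the programmer's intent:

```c
/* Lowering one LLL "indexed load" for two imaginary targets.  The
   instruction encoding and output syntax are illustrations only. */
#include <stdio.h>
#include <stddef.h>

typedef struct { const char *dst, *base, *index; } IndexedLoad;

/* Machine A has an indexed addressing mode: emit one instruction. */
static int lower_for_a(const IndexedLoad *i, char *out, size_t n) {
    return snprintf(out, n, "ld %s, [%s + %s]", i->dst, i->base, i->index);
}

/* Machine B has none: expand into an add plus a plain load. */
static int lower_for_b(const IndexedLoad *i, char *out, size_t n) {
    return snprintf(out, n, "add t0, %s, %s\nld %s, [t0]",
                    i->base, i->index, i->dst);
}
```

Going the other way (having A's compiler pattern-match an add-and-load pair back into one indexed access) is exactly the harder, riskier translation from low-level back to high-level.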
Well, make up your minds, everybody, and tell us your opinions.
200% time mooser-programmer.