release 0.0.0.20 and thoughts
Francois-Rene Rideau
rideau@clipper.ens.fr
Tue, 8 Aug 95 17:45:59 MET DST
> I'll get to what you wrote in a bit, but first, I do want to discuss
> the multitasking model you have in mind. I know that you want cooperative
> and think that preemptive is either evil or lame (or both, I've forgotten),
> but I'm here to show you that preemptive is not quite as lame or evil as you
> think.
I don't think that preemptive is lame or evil per se. Actually, if you
followed the discussion between Billy and me (and other people) on the list,
you saw that he showed how my proposal could very well be called
preemptive multithreading, too!
What I do think (and argue) is lame and evil is the no-information paradigm
of traditional system design, in which no information is statically or
dynamically assumed or enforced about running threads, leading to paranoid
protection mechanisms that are inefficient in time, space, and security alike.
Of course, back when computer memories were tiny, one couldn't afford
complicated systems; one preferred to keep the system tiny and let people
hack their way around it. But that is no longer tolerable on today's
computers, which can handle far more memory than any user could ever type in.
> One reason you state that preemptive is bad is because of the high
> overhead of saving state information, and that with cooperative, there isn't
> nearly as much overhead. This is true.
The problem is *not* only the register-level state, but also, and mostly,
everything else: semaphores and the like, things that make a shared GCed
address space impossible (remember that Tunes must be resilient and
persistent, hence must support garbage collection, even if GC could be
disabled for specific statically linked applications). In a no-information
system, the cost of resynchronizing is enormous. In a cooperatively
informative system, synchronization is enforced at compile time where
possible, or else with the help of compile-time calling conventions.
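To make that concrete, here is a minimal sketch in C (all names are
hypothetical, not Tunes code) of what compile-time-enforced synchronization
can look like: the compiler inserts GC safe points at known locations, where
the set of live pointers is statically known, so the collector never has to
stop a thread at an arbitrary instruction and guess at its registers.

    #include <stddef.h>

    struct cell { long head; struct cell *next; };

    /* Hypothetical runtime flag: set by the collector when it wants
     * every thread to reach a safe point. */
    extern volatile int gc_requested;

    /* Hypothetical runtime entry point: receives an exact root map. */
    extern void gc_safepoint(void **roots, size_t nroots);

    long sum_list(struct cell *l)
    {
        long acc = 0;
        while (l) {
            acc += l->head;
            l = l->next;
            /* Compiler-inserted check: one load, one test, one
             * predicted branch when no GC is pending, instead of an
             * asynchronous stop at an unknown instruction. */
            if (gc_requested) {
                void *roots[1] = { &l };  /* exact live-pointer map */
                gc_safepoint(roots, 1);
            }
        }
        return acc;
    }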
> But the other side of the coin is that in a cooperative system, a single
> task can hog the entire CPU without relinquishing control. Your way around
> this is to forbid the use of Assembly (which I think is a bad idea, but I'll
> get to that in a bit) and use a compiler, which will "insert" yield calls
> periodically.
Why forbid assembly? Sure, forbid unchecked assembly to novices, or at
least allow it only inside a paranoid isolation/emulation box. But that's
no worse than what exists on current systems. The problem with those
systems is that you *cannot* use assembly, or anything else, to extend the
system itself. Just *everything* is done in those damn paranoid,
inefficient, coarse-grained "processes".
Well, asm programmers will have to follow some constraints (which could
well be automatically/programmably enforced by assembly development systems),
so that cooperation is respected. So what? If they're not able to do that,
then they shouldn't be extending the system in assembly (and they can still
test their code in some isolation box before they patch the "real" system).
Also, note that yield could be as cheap as an sti()/cli() pair, or simply
a way for the interrupt routine to find an interrupted-position-dependent
routine that handles the interrupt cleanly. So even though I quite agree
that a naive, stubborn cooperative implementation, or even a slightly
enhanced CALL/RET-based one, is bad unless the overall software has a way
to pause often enough but not too often, I am still convinced that
cooperatively informative systems are the most reliable way to do things.
The interrupt takes place in both systems. The difference is not there.
The difference is in swapping the address space, page tables, stack, and
process information; the problem is synchronization for garbage collection
and/or checkpointing. That is several hundred cycles (at the very least)
per context switch, whereas the cooperative way costs only tens of cycles,
so we already gain at least one order of magnitude on this overhead.
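To illustrate the sti()/cli() remark above, here is a kernel-mode sketch
for i386 GCC, assuming a design where interrupts stay disabled while a
thread runs: a yield point can be nothing more than a brief window in which
interrupts are allowed, so any pending timer interrupt fires at a position
the system fully controls.

    /* Open a one-instruction interrupt window.  The nop is needed
     * because sti only takes effect after the following instruction. */
    static inline void yield_window(void)
    {
        __asm__ __volatile__("sti; nop; cli");
    }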
>> Because the function is not actually called! Instead, code is modified at
>> some point by the interrupt-driven routine that decides on a schedule...
>
> This is better than preemptive? It sounds preemptive to me, and while it IS
> possible, it sounds like more work than just having the CPU save the state
> and move on to another task.
You can call it preemptive. And it does save time compared to a stubborn
full state save, in that you can customize it to fit your needs.
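As a rough sketch of the code-modification idea (hypothetical names; a real
implementation would patch the instruction at the next yield site, which C
cannot express portably, so a function pointer stands in for the patched
code): the interrupt-driven scheduler redirects what the next yield site
will execute, rather than forcing a full asynchronous state save.

    /* Compiled code calls through this pointer at each yield site.
     * Normally it points at a no-op; to force a switch, the timer
     * interrupt redirects it, "patching" the next yield site. */
    typedef void (*yield_hook_t)(void);

    static void do_nothing(void) { }
    extern void switch_to_next_thread(void);

    static volatile yield_hook_t yield_hook = do_nothing;

    /* Timer interrupt handler (registration not shown). */
    void timer_interrupt(void)
    {
        yield_hook = switch_to_next_thread;   /* arm next yield site */
    }

    /* Inserted by the compiler at yield sites: */
    #define YIELD_POINT() (yield_hook())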
> What, exactly, do you want?
I want a system that can take advantage of dynamically available
information.
>> Firstly, doing low-level code is very difficult and annoying, because the
>> hardware is complex and not well documented, that is, it has evil semantics.
> Please explain this. What is evil about hardware semantics? Hardware,
> unfortunately, has to deal with the real world. Which means that the
> software that deals with the hardware has to deal with the real world.
> Granted, some of it isn't easy, but to call it evil is a bit harsh I think.
It is not its being "hardware", but its being uselessly complex (for the
sake of compatibility with twenty-year-old technology) and desperately badly
documented (for lack of compatibility ;) that makes PC hardware programming
a pain. Add to that the lack of clean interfaces such as the AmigaDOS,
RISCOS, or even MacOS systems have.
>>> The only reasonable way to develop is to abstract over the hardware to
>>> obtain objects with better, *cleaner semantics* (abstracting without
>>> cleaning the semantics, as in C, is lame); else you get too many bugs.
>> Could you explain this as well? I'm lost now.
Well, spin on such-and-such a bit, wait for the 8042 to answer, then set a
bit in such-and-such an I/O register: those are weird semantics for enabling
a hardware feature (the A20 line) that is itself very hard to express
cleanly. Not to mention programming the floppy or hard-disk controllers.
More generally, what people are attached to is the semantics of an
operation: what it means, what it corresponds to.
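For the record, here is roughly what that A20 dance looks like in C (i386
GCC inline asm; a sketch of the classic keyboard-controller method, one of
several that real systems must try): every port number and command byte is
a magic constant with no visible connection to "enable address line 20".

    #include <stdint.h>

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    static void wait_8042_ready(void)
    {
        /* Spin until bit 1 of status port 0x64 clears: the controller
         * can accept another byte.  This is the spinlock in question. */
        while (inb(0x64) & 0x02)
            ;
    }

    void enable_a20(void)
    {
        wait_8042_ready();
        outb(0x64, 0xD1);       /* command: write output port */
        wait_8042_ready();
        outb(0x60, 0xDF);       /* output-port value with A20 bit set */
        wait_8042_ready();
    }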
>> I wrote
>> *lots* of macros, that made development possible. I also saw how no good
>> macro language exists. CPP is pure shit; I chose m4, but it has
>> evil reflectivity semantics (I had to systematically remove reflectivity
>> from my code, but for some trivial macros).
> I'm sorry, but I still don't quite understand reflectivity. Could you
> please post some example of what you were trying to do but couldn't? Maybe
> then I can understand it.
I could not define macros that define new macros (except for trivial
ones), because of the binding mechanism, with its insert-arguments-
immediately and evaluate-macros-at-the-last-moment rules, its horrible
quoting, its stubborn syntax, its lack of local contexts (which I emulate
with CPS-style macros), etc. The macro system being Turing-equivalent (by
the way, have you ever programmed a Turing machine? ;), there is nothing
that you really "cannot do". The problem is the way you have to do it. The
ugly m4 sources in the Tunes src distribution speak for themselves.
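For contrast, here is the kind of reflectivity CPP lacks entirely (a toy
example of my own, not from the Tunes sources): a macro can expand into
function definitions, but it can never define another macro, so every new
layer of abstraction costs hand-written boilerplate.

    /* A macro that generates function definitions: this much CPP can do. */
    #define DEFINE_FLAG(name)                          \
        static int name##_flag;                        \
        void set_##name(void)  { name##_flag = 1; }    \
        int  test_##name(void) { return name##_flag; }

    DEFINE_FLAG(trace)   /* defines trace_flag, set_trace, test_trace */
    DEFINE_FLAG(debug)

    /* What CPP cannot do: a macro whose expansion contains a #define,
     * i.e. a macro-defining macro.  Something like
     *     #define DEFINE_FLAG_FAMILY(n) #define DEFINE_##n(x) ...
     * is simply illegal.  m4 can do it, but only through the quoting
     * and evaluation-order contortions complained about above. */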
> Maybe it's just me, but I found that MASM is one of the better macro
> processors around (that just happens to be inside an assembler). m4 may be
> powerful, but I've had to fight it enough times that I don't really like it,
> and I feel that MASM is much better than m4 (no, really).
Well, I found the MASM/TASM macro system just horrible to use (string
manipulation is so difficult to achieve that it's a no-go).
> Are you SURE you don't want DWIM? (Do What I Mean) No matter at what
> level you are at, there IS a calling convention. Now, compilers can help
> with the "calling convention", but there still is one.
I'm sure of what I want. There *is* a calling convention, but once it is
defined, it is always better handled by the computer than by the human.
Existing compilers don't allow customizable calling conventions, and
assemblers don't allow automatic use or checking of those conventions; and
by using stubborn macros, you lose almost all the advantages of the
assembler, unless you write an optimizer with the macro system (but seeing
how ugly the macro system is, only a madman would do that).
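As a measure of how little compilers expose here, about the only knob GCC
offers on the i386 is an attribute choosing how many arguments travel in
registers; a hedged sketch, not a recommendation:

    /* i386 GCC: pass the first three integer arguments in registers
     * (EAX/EDX/ECX) instead of on the stack.  This is roughly the whole
     * extent of "customizable calling conventions" in a mainstream
     * C compiler. */
    int __attribute__((regparm(3))) dot3(int x, int y, int z)
    {
        return x * x + y * y + z * z;
    }

C callers that see the attributed prototype will use the convention, but an
assembly-language caller gets no automatic use or check of it, which is
exactly the complaint above.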
>> The work-around for using languages that lack abstraction power,
>> is that calling conventions should always be particularly well documented,
>> and great caution is taken when invoking a function; for that reason also,
>> people often choose arbitrary global conventions that reduce performance
>> greatly to reduce the amount of bugs.
>
> Really? I tend to avoid global variables as much as possible myself.
By "global conventions" I mean a single fixed convention for all routines,
e.g. the C argument-passing convention; nothing to do with global variables.
>> All that is stupid. A language just
>> *should* have enough abstraction power so that one could define arbitrary
>> conventions or meta-conventions, and let the computer do all the dirty
>> work (programmable calling conventions).
> I would disagree. Maintaining code is bad enough with static calling
> conventions. Letting programmers working on different aspects of a program
> pick their own calling convention is suicide. One might pick left-to-right
> evaluation, another right-to-left, and a third prefix. In the same
> language!
It is suicide *with existing tools*, much like self-modifying code or any
other hack, because the tools are not expressive enough to understand and
automatically enforce the semantics of such hacks.
>> I will be satisfied with the HLL compiler only on the day when I can achieve
>> better and more reliable code than I can currently do with hand-coded
>> assembly, and in much less time, while having a high-level prototype
>> immediately;
> Have you considered using CASE?
The HLL should be some kind of CASE tool, but semantics-based, not purely
heuristics-based.
>> and this can be done easily by programmably combining
>> higher-order code transformations on high-level code, and generic
>> meta-implementation operators that map high-level objects to low-level
>> implementations (which is the generalization of calling convention),
>> with a simply customizable syntax, so I can adapt the input tool to the
>> input data. *This* is the trend where we should go.
> But who writes the actual CODE?
High-level code is just as actual as low-level code. Of course some
low-level code must be written before the HL code works. What I
propose is that people be able to dynamically propose new generic LL
implementations for HL concepts.
For example, the basic HL abstractions would be just functions,
inductively defined abstract types, and quotient types. Then you
could define arrays as a particular implementation of functions
over a small finite set, with such-and-such costs for such-and-such
operations, etc. You could have very generic HL concepts, then define
specific LL implementations that provide different performance in
different validity contexts.
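A very rough sketch of that last idea in C (all names hypothetical): one
HL concept, a function from a small finite set to the integers, given two
interchangeable LL implementations with different cost profiles, behind an
interface that plays the role of the meta-implementation operator.

    #include <stddef.h>

    /* HL concept: a function f : {0..n-1} -> long.  The dispatch
     * record maps the abstract concept to a concrete representation. */
    struct finfun {
        long (*apply)(const struct finfun *f, size_t i);
        const void *repr;
        size_t n;
    };

    /* LL implementation 1: a plain array.  O(1) apply, O(n) space. */
    static long array_apply(const struct finfun *f, size_t i)
    {
        return ((const long *)f->repr)[i];
    }

    /* LL implementation 2: a closed-form rule.  O(1) space, recompute
     * on each call; valid only in contexts where f is i -> 3i + 1. */
    static long affine_apply(const struct finfun *f, size_t i)
    {
        (void)f;
        return 3 * (long)i + 1;
    }

    /* Two views of the same HL object:
     *   struct finfun a = { array_apply,  table, 10 };
     *   struct finfun g = { affine_apply, NULL,  10 };   */

    /* Client code is written against the HL concept alone. */
    long sum(const struct finfun *f)
    {
        long s = 0;
        for (size_t i = 0; i < f->n; i++)
            s += f->apply(f, i);
        return s;
    }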
-- , , _ v ~ ^ --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
-- ' / . --
Join the TUNES project for a computing system based on computing freedom!
TUNES is a Useful, Not Expedient System
WWW page at URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"