release 0.0.0.20 and thoughts
Mon, 21 Aug 1995 20:19:14 -0700 (PDT)
On Sat, 19 Aug 1995 firstname.lastname@example.org wrote:
> Billy Tanksley:
> Yes, prediction is hard. It is nonetheless worthwhile to do in
> most cases, and in those where it isn't you can find that out with
> 95% certainty 95% of the time. You just have to know when to stop.
> You're still going to have to have a dynamic mechanism underneath to
> deal with that remaining 0.25% chance. Of course, if you don't have...
Absolutely not. You'll have to have a mechanism _available_. The
difference is that you won't have to be running it, and even when you are
running it you don't have to apply it to the whole system (YMMV).
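A minimal sketch of that distinction (plain Python with illustrative names; nothing here is TUNES code): the dynamic check exists as a mechanism, always available, but a call site backed by a static proof of safety simply never runs it.

```python
# Illustrative sketch: a dynamic bounds check is the "mechanism" that is
# available but not necessarily running.

def checked_get(buf, i):
    # The dynamic mechanism: run only where safety cannot be proven.
    if not 0 <= i < len(buf):
        raise IndexError(i)
    return buf[i]

def unchecked_get(buf, i):
    # Used only where a static argument guarantees 0 <= i < len(buf).
    return buf[i]

def sum_buffer(buf):
    # The loop bound itself proves every index is valid, so this whole
    # path applies no dynamic checking at all.
    return sum(unchecked_get(buf, i) for i in range(len(buf)))

print(sum_buffer([1, 2, 3]))
```

The point of the sketch: the system pays for the check only at the call sites where the proof fails, not across the whole program.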
> Now, if we're going to support multiple languages, you're going to
> have to do a lot of language-specific stuff to deal with prediction.
Correct. I would imagine that much of it could go into the linker.
> Maybe you'll have some core set of prediction mechanisms which can be
> shared across languages. Anyways, this all sounds like language
> implementation rather than platform implementation. Platform
> implementation involves providing the mechanisms to deal with that
> remaining 0.25% chance... There's nothing about prediction that's
> specific to any platform.
We don't want to define our OS in a way that's platform-specific! That's
the LAST thing we want!
> Wouldn't it be useful to have the OS be able to tell you which
> parts are critical, and PROVE it? That's abstraction.
> You're still going to have to deal with that 0.25% chance, or your
> program is going to crash with a probability of 1-0.9975^n, where n is
> the number of time units required to run the program.
Ah, but in the majority of cases you can choose NOT to deal with it (by
proving beforehand that you don't have to). There are specific programs
that will require that system service 100% of the time (so proven), and
there are others that MIGHT require that service at a certain time during
their execution, but if they need it, they will need it quickly and with
no warning.
All OSes that aren't capable of proof will keep all services running at
all times, simply because the need to run them can't be predicted. TUNES
will only load them when needed.
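The arithmetic behind the quoted 1-0.9975^n figure is easy to check numerically (the 0.25% per-time-unit failure chance is the thread's hypothetical, not a measured number): the risk compounds quickly, which is exactly why the real choice is between guarding the service and proving it unnecessary.

```python
# Chance of at least one failure in n time units, given an independent
# 0.25% failure chance per unit: 1 - (1 - p)^n.
def crash_probability(n, p_fail=0.0025):
    return 1 - (1 - p_fail) ** n

for n in (1, 100, 1000):
    print(n, round(crash_probability(n), 4))
```

Even a tiny per-step chance approaches certainty over a long enough run, so "mostly works" is not an option for anything long-lived.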
> Now, given that the OS is the platform-specific stuff to keep your
> program from crashing, why would you tack a proof mechanism onto the OS?
> It seems to me that this should be factored out as
> platform-independent code. Put it in one of those dynamically linked
> runtime systems.
TUNES could be described as a dynamically linked runtime system.
> Or, maybe I should be using the word "Kernel" in place of "OS", and
> maybe I should use the term "OS" to include things like linkers,
> interpreters, compilers, and editors?
The kernel is part of the OS; there are also system routines that the
kernel doesn't include.
> Both. Although I believe (I was only convinced on this a short
> while ago, though) that coop is the best system for the OS.
> Both is going to be harder than either individually.
True. And doing nothing is easier still.
The question is, are the results going to be worth it?
> Starving philosophers is a parable:
> "Five philosophers are sitting around a circular table, eating
> "rice. Each has his own bowl. There are five chopsticks at the
> "table. Each philosopher can eat only when he holds two
> Now, the problem is to come up with some sort of agreement that
> prevents any of the philosophers from starving to death. Mechanisms
> where a philosopher holds on to his chopstick(s) till he's full can
> result in starvation. This is analogous to cooperative multi-tasking,
> where programs keep control of the system till they're programmed to
> give it up.
No. That is analogous to a worst case in coop.
This worst case can (in most cases) be designed and proven never to
occur. In cases where the proof fails, you need to use preemption--
perhaps only once in the whole program.
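That worst case can be made concrete with a toy simulation (pure illustration, not a proof system): if all five philosophers grab their left chopstick in the same instant and hold it until full, no right chopstick is ever free. This is precisely the schedule a proof would have to rule out before coop could be trusted.

```python
# Toy model of the worst case: hold-and-wait with a full cycle of waiters.
N = 5
holder = [None] * N   # holder[c] = philosopher currently holding chopstick c

# Phase 1: every philosopher picks up the chopstick on his left.
for p in range(N):
    holder[p] = p

# Phase 2: each now waits for the chopstick on his right -- but every
# right chopstick is someone else's left, and nobody lets go.
def can_eat(p):
    right = (p + 1) % N
    return holder[p] == p and holder[right] is None

print([can_eat(p) for p in range(N)])  # all False: permanent starvation
```

Break any link in the cycle (one philosopher provably releases, or grabs right-first) and the deadlock disappears; that is the kind of fact a prover can establish per program.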
> Of course, in real life, philosophers tend not to be quite this
> stupid. For example, if a philosopher was sitting at a table for
> several days, unable to eat because his table companions weren't
> sharing properly, he'd ask (perhaps politely, perhaps angrily) to be
> allowed to eat. This is pre-emption.
No. Preemption (as all OSes that I know of implement it) is when God
(who may or may not exist, according to whatever philosophy is being
discussed) reaches down, takes the chopsticks from the current
philosopher, and gives them to some other.
Some systems include fancy features, such as giving them to the
hungriest philosopher first.
Preemptive multitasking, however, is very different. It's as though God
did that preemption 60 times a second, and magically suspended the rice
thus orphaned in the air, to be placed back on the chopsticks when the
philosopher again resumed control.
The analogy here is to CPU state.
If we only did preemption when *necessary* it'd work just fine, wouldn't it?
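A toy model of the "magically suspended" state (Python generators stand in for CPU state; the names and the quantum are illustrative): the scheduler interrupts each task after a fixed time slice, and the suspended generator frame holds the task's locals until it is resumed, just as saved registers would.

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        # 'yield' marks where the clock interrupt lands; the suspended
        # generator frame (name, steps, i) plays the role of saved CPU state.
        yield f"{name} step {i}"

def preemptive_run(tasks, quantum=2):
    log, ready = [], deque(tasks)
    while ready:
        t = ready.popleft()
        for _ in range(quantum):       # run until the time slice expires
            try:
                log.append(next(t))
            except StopIteration:
                break                  # task finished: drop it
        else:
            ready.append(t)            # preempted mid-run: requeue it
    return log

print(preemptive_run([task("A", 3), task("B", 3)]))
```

The tasks interleave without ever volunteering control; the per-quantum interruption and the requeueing are pure scheduler policy, which is what makes the mechanism expensive to run everywhere, all the time.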
> Also, note that some solutions to this problem involve changing the
> hardware... For example, make six chopsticks available and let
> chopsticks be passed around the table (some variants of this problem...
But this requires the same controls for each processor, so it needn't be
considered as a separate process.
> A race condition is where the result of a computation depends on when
> the computation occurred. A bad race condition is one that is
> undesirable. For example, consider a file system that only worked
> properly 99.75% of the time -- that remaining 0.25% is going to have
> to be dealt with somehow.
Correction. A component of the file system. The rest of the system
could be used without precaution, but that component would have to be
guarded, possibly even at all times.
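One way to picture "guard only the component" (an illustrative sketch, not any real file system): the allocation counter is the one piece with a race, so it alone takes a lock, while the read path runs with no precaution at all.

```python
import threading

class FileSystem:
    def __init__(self):
        self._next_inode = 0
        self._inode_lock = threading.Lock()  # guards only this one component

    def allocate_inode(self):
        # Read-modify-write race lives here: this is the part that must
        # be guarded, possibly at all times.
        with self._inode_lock:
            n = self._next_inode
            self._next_inode = n + 1
            return n

    def read_block(self, n):
        # No shared mutable state: usable without precaution.
        return b"\0" * 512

fs = FileSystem()
workers = [threading.Thread(target=fs.allocate_inode) for _ in range(100)]
for w in workers: w.start()
for w in workers: w.join()
print(fs._next_inode)  # 100: no allocations lost
```

Without the lock, two threads could read the same counter value and hand out one inode twice; with it, only the racy component pays the synchronization cost.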