release 0.0.0.20 and thoughts
Sun, 13 Aug 1995 05:33:23 -0400 (EDT)
Sorry I took so long to respond, but the ISP here flaked out this week and
thus delayed my response. Anyway ...
On some network somewhere in cyberspace, Francois-Rene Rideau transmitted:
> > And as to verifying code to make sure it'll
> > do what it says it'll do I feel is an NP-complete problem, but that's just
> > me.
> It's worse than that: it's not even decidable !
> But why should that prevent us from supporting it ?
> That a program may run at all is undecidable. Will you design
> computers so that they won't support programs that run ????????
> Will you prevent compilers from removing *any* run-time check just because
> the general problem of deciding whether this is possible is undecidable ???
Well, there are certain cases where it's easy to check, but there comes a
point where it's just plain silly to decide certain attributes about a
program. Hypothetical case: The Ackermann Function. Given the following
(it's in C but it could easily be done in any language):
int ackermann(int m, int n)
{
  if (m == 0) return n + 1;
  else if (n == 0) return ackermann(m - 1, 1);
  else return ackermann(m - 1, ackermann(m, n - 1));
}
Can you, for instance, prove that it will return an answer for all
defined inputs (INT_MIN <= m <= INT_MAX and INT_MIN <= n <= INT_MAX)? Which
inputs are safe? Which aren't?
I know, I know. You can run it in the "Isolation Box (TM)", but this is
the type of thing I'm talking about.
> > There is a real concern over security, and if you have two people using a
> > single machine, you've cut your effective security in half.
> I'd rather say you double insecurity in a logarithmic scale. But sure,
> security *is* my main concern. And my point is low-level checks and paranoia
> can bring only inefficiency and insecurity. Only high-level proofs can bring
> you real security, and bring efficiency at the same time.
Can you present a high-level proof that the Ackermann Function works? The
definition is there in the code.
> > Same goes with
> > computers over a network (and I hope you realize that in these enlightened
> > days, the network IS the computer). And it's in a network based distributed
> > system where micro-kernels (well, designed around certain paradigms) really
> > shine. They also work better on multi-processor (MIMD) systems.
> Apart for microkernels (that completely suck), I completely agree.
Well, can you explain WHY a microkernel sucks?
> > Unless you can show me a way that cooperative multitasking will work
> > across a network, in a multiprocessor computer, or both.
> I fear I cannot demonstrate that unless you provide me the machine,
> plus food and shelter for the time I develop such software ;) But if
> you do, sure I will !
Can you describe HOW it possibly could be done? You don't exactly have to
get a working model, but some concrete example (or even pseudocode) would be
nice.
> >> Of course, at the time when computer memories were tiny, one couldn't
> >> afford complicated systems, and would prefer to make it tiny, and let people
> >> hack their way with it. But this is no more tolerable on nowadays'
> >> computers,
> >> that can handle much more memory than what the user can input.
> > And it's that thinking, where memory is infinite, that led to the current
> > state of systems, like Windows or modern Unix systems. You may find it hard
> > to believe, but I have, sitting next to me right now, a Unix based box with
> > only 512K physical RAM (probably a few meg total in virtual memory) with a
> > 15M harddrive. With a kernel size of 69.4K.
> You don't seem to get my point. I have never ever said that memory was
> plenty and could be wasted, or that each node could be assumed to have as
> much as possible.
> What I've been saying is that there is a qualitative leap in the amount
> of available memory. Fifty years ago, memory was just enough for tiny
> programs to exist. Meta-operations were hand-done by humans. Then, the
> computer could automate recording and restoring so humans needn't retype
> programs. Then it could handle simple compilation so humans needn't
> hand-assemble programs.
Okay, that takes us to about 1955. The point I was making is that it
SHOULDN'T take megs and megs of RAM to do what is currently being done.
Yes, OO is neat. But does it REALLY require 4M of RAM?
> Now, the amount of memory in a computer is more
> than what the user in front of it can produce during his whole life.
> The day you fill a 1GB harddisk with just *original* non-redundant,
> meaningful hand-written personal software, call me, so I buy you a new one.
> Whole encyclopaedias fit a CD-ROM.
Well, I'd have to check, but I'm sure I could fill a few hundred K or so.
And if you allow the inclusion of not only hand-written personal software,
but documents (I was a humor columnist a few years ago and did most of my
writing on the computer (you might want to check out
'http://pineal.math.fau.edu/~spc/' -- the columns are there somewhere)),
papers, artwork etc.
I know of one artist whose latest work took over 8M of storage space.
Granted it was a rather large poster, but still, that should count.
And I'm not saying that we should go back to the days of 64K, or even 16K,
4K or even 1K. What I'm saying is that there's a bloat to today's code that
is incredible, and the smaller it is, the more memory is left over to
manipulate data, and THAT'S what the computer is there for: to manipulate
data.
> So yes, there is some qualitative change in the amount of memory available
> that system software should take into account. This is not a call to waste
> available memory, but on the contrary, to use it efficiently. The waste is
> the current design based on files and ad-hoc abstractions.
What's so wasteful about files? I tend to find them quite useful, and
until something else comes along, I probably won't abandon them anytime
soon.
> >> The problem is *not* only the registerwise information state,
> >> but also and
> >> mostly all the rest, semaphores, etc, things that make a shared GCed
> >> address space impossible
> > Count me stupid, but I sure don't see how you have to save semaphores as
> > part of "state" information of a task. Sure, semaphores are owned by tasks,
> > but I've yet to come across an implementation where they had to be "saved"
> > across context switches.
> You don't seem to get that the way context switches occur is a very
> important point in the system structure, that considerably affects
> synchronization et al. So tell me, how will you do inter-thread GC or
> any synchronization without trusting threads at all ? How will you do that
> without any cooperation ? How in a non-informative preemptive scheduler
> will you be able to tell what is a pointer from what isn't, and what pointed
> object starts where, as each thread may have been interrupted anywhere ?
Well, I'm a bit weak in the GC area (and therefore, don't fully trust it),
but I'm beginning to wonder if you understand how a context switch occurs.
I'll try to answer your questions one at a time:
1) "I don't understand the way context switches occur ... "
I think I do. An execution unit (thread/task/process/thingamabob) runs
within a CPU context (which includes the registers) and a memory space
context (that controls what memory it does own). A context switch from one
execution unit to another involves a change in CPU context (well, duh!) and
MAY involve a change in memory space context (which in most architectures is
changing a register, either in the CPU or in hardware). Information about
that execution unit (name, owner, privileges, priority, etc) is stored
somewhere in the memory space context unit (usually, there's a structure
(or if you prefer, an object) that describes this data), but there usually
is one structure per execution unit (so that if two execution units share
the same memory space context, there are two structures in this shared unit
describing each execution unit and available to the execution unit and
system). This structure is under control of the system, so the system knows
about all execution units. Details of this are left to the implementation.
2) "How do I do synchronization without trusting the execution units?"
Either through message passing or semaphores. An execution unit just can't
grab what it wants (else if it tries, it dies since it doesn't have
permission to and this can be done in hardware. You may call it slow, but I
don't). So I either send a message to an execution unit that controls what
I need (requesting use of said object), or I try to own it via a semaphore.
Now, sending a message or attempting to gain a semaphore are forms of
yield(), in that the calling execution unit is blocked until the condition
is met (and in my Amiga example, that's why 16 tasks can be in the system
without taking CPU time at all, because they're all blocked).
I never said there couldn't be a voluntary yield() call.
3) "How do I do inter-thread GC?" There are two terms, inter and intra, I
can never remember which one is what, so I'll assume you mean "GC across two
execution units that may share the same memory space context that needs to
be GCed". Well, the GC would block the two execution units from running,
and do the GC. I assume that the two execution units do cooperate.
Preemptive multitasking doesn't prevent two execution units from cooperating.
No, really, it doesn't.
4) "How in a non-informative preemptive scheduler will you be able to tell
what is a pointer from what isn't?" Why does the scheduler need to know?
All the scheduler needs is a pointer to the execution unit information,
which the system tracks. Am I missing something?
You seem to assume that in a preemptive multitasking system, each
execution unit is running unaware of any other execution unit, or all
fighting each other. This is not the case. Yes, I can write a program
without having to worry about other execution units running, but I can just
as easily write a program where three execution units work closely with each
other.
> You were talking about not wasting memory. Now what do you call that
> having to reserve separate stacks, heaps, etc, for each thread, just because
> you didn't want threads to cooperate ?
Different execution units can share the same heap (heck, the default
memory allocation for every task on the Amiga comes from a single heap), but
if you can show me how two execution units can share the same stack, then
I'll print out that message and eat it.
> >> In a cooperatively informative system,
> >> synchronization is enforced at compile time when possible, or with the
> >> help of compile-time calling conventions.
> > I fail to see how that helps. I keep thinking of starving philosophers
> > myself.
> So imagine the multithreaded GC again. Instead of letting your
> non-informative task preempt anywhere, and/or refuse it preempt for
> long periods, you can enforce at compile time that it will both preempt
> only at a valid point, and preempt often enough. Similarly, you can have
> very efficient scheduling by letting subthreads subschedule directly,
> by registering the schedule policy together with the readjust routine
> (both are included in a CPS routine) so that you have synchronization
> and efficiency at the same time.
> >> Why forbid assembly ? Sure forbid unchecked assembly to novices,
> >> or at least not outside of a paranoid isolation/emulation box.
> > But this goes against what TUNES stands for. Freedom of programming and
> > all that, unless I'm missing something again 8-)
> Freedom of programming is not freedom to run any binary. Writing dumb
> assembly code is no more programming than renaming a .gif or .mod file
> to .com; and running such code should not be allowed anymore than running
> such files. Freedom has never been freedom to do anything.
No, freedom also means accepting the responsibility of your actions. If
I'm dumb enough to rename wetwild.gif to copy.exe and attempt to run it only
to have my computer hang (or dump core or say "can't run program") then it's
my fault. And if I want to write a stupid program that ends up trashing my
file system, that's my problem.
Granted, you may not choose to run that program on YOUR computer, but
that's your freedom. And you can set it up so that I can't write such
programs on YOUR computer (you're free to do that).
Please, don't take away my freedom to do stupid things to my computer. I
thought that's what TUNES was about. And if I want preemptive multitasking,
I should be able to add it, no? But here you are, dictating what YOU think
everybody should want.
> >> But
> >> that's not worse than what exists on current systems. The problem with
> >> those systems is that you *cannot* use assembly at all or anything to
> >> extend the system itself.
> > I've yet to come across a system where I can't use assembly. Even on Unix,
> > the assembler (usually called 'as') is available for anyone to use (assuming
> > the C compiler is available for anyone to use).
> You don't seem to get my point again. In unix you don't extend the
> system at all, be it in assembly or anything. You run code in paranoidly
> isolated tasks. Well there are still e.g. linux' loadable modules,
> but this cancels *any* security.
How? Under Linux, only root has the security to load a module. Under
QNX, you can set the permissions such that only certain users are allowed to
expand the system. So what, exactly, is insecure about Linux loadable
modules?
> > Not that I'm trying to sell QNX here. Just pointing out there are systems
> > out there that allow to extend the system itself.
> None allow extension to be both efficient and secure.
Care to back this statement up? I've yet to buy anything you've said so
far.
> >> Just *everything* is done in those damn paranoid
> >> inefficient coarse-grained "processes".
> > Well, if you're talking about Unix, then yes, the unit of execution is
> > quite coarse-grained. But most OSes written after 1980 have the concept of
> > thread (or task), which has less context than a process.
> And then these threads have no security at all, and completely
> horrible semantics. Are you advocating that we should use such completely
> insecure clumsy stuff ? Don't you see threads were invented to bypass the
> OS because the OS just got in the way ?
In what way are "threads" insecure? What are the "horrible semantics"?
And "threads" were developed not to bypass the OS (since the OS still
controls them), but to alleviate the overhead of IPC between closely coupled
tasks (read: so that tasks can share a common memory space). If the tasks
in a process screwed up, then only those tasks in that process are dumped.
Not the whole system.
> > I don't see how you can make yield() be a sti()cli() pair.
> Well, maybe we've got mixed up: what I mean was the routine to
> possibly yield execution if the timeslice is over, which may be rather
> called pause(). Does that solve your problem ?
Partially. I don't feel you've successfully argued against my point that
cooperative multitasking is more expensive than preemptive.
> > And what is an
> > "interrupted-position-dependent routine"? And if it's what I think it is,
> > how does the system "know" a routine is "interrupted-position-dependend"? I
> > thought the whole idea of interrupts was that the interrupted routine is
> > unaware of being interrupted.
> Let's imagine this code for the 68K, where address registers are to
> be relocated, while data registers are not.
> subl %a1,%a0
> movl %a0,%d0
> Well, then if an interrupt happens between the two instructions, then
> surely a0 should not be relocated, so a code position dependent adjustment
> routine will have to be run before processing could go on.
By "relocated" I assume you mean "reloaded with a new value".
You either are reloading the address registers via the interrupt handler
routine, or you are reloading a1 with a0, and a0 with d0 (which order is it?
That's not proper Motorola syntax) and the interrupt handler routine is not
bothering to save any registers it uses.
Number one is insane (unless it's part of a context switch, in which case
the state of the CPU is saved, then restored to another context) and number
two is insane. The interrupt handler has to save any registers it munges so
that when it returns, things don't blow up (thus causing hard to find bugs).
Unless you come up with a clearer example, I'm going to have to consider
this the work of a madman. Sorry.
> >> still I'm convinced that
> >> cooperatively informative systems are the most reliable way to do things.
> > Reliable, maybe. Fast. No.
> Fast yes. Again, you fail to consider GC and persistency.
Could you please elaborate?
> > And when I pause a moment to gather my thoughts in replying, the e engine
> > will get the CPU, and less overhead (in general) will be lost during the
> > rescheduling (say I pause for a minute) than in the e engine having to
> > yield() every so often.
> Then what ? This is perfectly possible with my cooperative thingy.
> You do position-dependent adjustment, the adjustment being nothing in
> the main loop that computes. Really trivial example.
I helped a friend of mine write a Windows program to calculate a chaotic
attractor, and because of the "cooperative" nature of Windows, we had to
seriously mung the code I had working on an SGI (which is preemptive BTW) so
that the calculations could be done piecemeal. Not fun at all, and took us
longer to get the code running cooperatively than it did to write the ENTIRE
program on the SGI in the first place.
Is this what you mean? Where I have to recode any lengthy CPU-intensive
application so that when it gets called it only does a minuscule amount of
work? In a word: Sorry, ain't gonna happen.
Remind me to tell you about the math simulation that ran on my Unix
workstation for over a year. And if the entire machine was dedicated to the
problem, it still would have taken about 10 months to do. So it wasn't the
preemptive nature that sucked up the time.
> >> The interrupt takes place in both systems. The difference is not there.
> >> The difference is address space, page table, stack, and process info
> >> swapping;
> > Well, on the 386 (assuming you are using paging and not the segments at
> > all) just reloading CR3 remaps the address space, ESP readjusts the stack.
> > All that's really left is the rest of the general registers (a popd will
> > work wonders here) and that's about it. I've also checked the VAX
> > architecture. Pretty much the same thing (just reloading a few certain
> > registers).
> You forget to talk about all the cache miss penalty that this swapping
> incurs. And believe me, task switch has a severe cache miss penalty. Plus
> it consumes a lot of memory per task, thus limiting the multitasking and
> involving a lot of swapping and memory waste. Not to talk about the above
> impossibility of reliable synchronization for GC and persistency.
Jumping half way across the memory space will cause severe cache misses.
Accessing memory purely randomly will cause severe cache misses. What's
your point? So okay, you have five cooperative processes in memory,
presumably in the same memory address space. Are you telling me that
jumping from one "process" to another isn't going to induce severe cache
misses?
And prove that it's impossible to have reliable synchronization for GC and
persistency, given that execution units CAN cooperate in a preemptive
system.
> >> the problem is synchronization for garbage collection and/or checkpointing.
> >> This is several hundreds of cycles (to the least) at each context switch,
> >> while a cooperative way would cost only tens of cycles, so we already gain
> >> at least one order of magnitude in such overheads.
> > I still don't see this.
> If you don't synchronize, you can't share heaps and stacks, so you lose
> far more time swapping stacks and heaps in and out of memory than you
> saved by not synchronizing. You just can't do good GC, either, though you can
> use "conservative" algorithms, but then you lose long term persistency.
> And if you don't synchronize, you cannot do checkpointing, and have a
> resilient persistent memory.
> Resilient long-term persistency is a requirement for TUNES (even though
> it is intended to support stripped down versions for specific applications).
See some of my arguments above about synchronization, GC, heap sharing and
the like.
> >>>> Because the function is not actually called !
> >>>> Instead, code is modified at
> >>>> some point by the interrupt-driven routine that decides of a schedule...
> >>> This is better than preemtive? It sounds preemtive to me, and while it
> >>> IS
> >>> possible, it sounds like more work than just having the CPU save the state
> >>> and move on to another task.
> >> You can call it preemptive. And it does save time as compared to stubborn
> state save in that you can customize it to fit your needs.
> > I'm sorry, but unless I see actual code, I'm not buying this argument.
> Information *is* needed anyway as I showed earlier. My method shows how
> you lose nothing in using it as compared to your usual preemptive code,
> but you gain synchronization. And yes, even then, adjusting thread state
> before saving it costs less than saving the whole machine state.
I haven't SEEN anything that convinces me.
> [About a reflective macro system]
> > Here's a hint: Forth.
> Thank you, I've known it for years. It ain't no hint, because (available)
> Forth ain't precisely the right system to produce asm code for various
> machines (including C), as is needed for TUNES. Actually, there is one
> FORTH system that could help: Macro4th. Well, if you can set it up for
> me so that it be used as a preprocessor to C and various assembly
> languages, I buy it. Moreover, a Forth-like core is precisely what is
> intended for the Tunes LLL...
What asm code are you trying to reach? I'm at a loss here ...
> > And yes, I've written a Turing Machine. In BASIC, lo these many years
> > ago.
> I'm asking if you've written programs *for* a Turing Machine, which
> is quite different. E.g. Tetris, or even a poor little multiplier. If
> you have, then I expect you to understand what high-level programming
> brings over low-level programming, particularly if you want efficient
> code and/or reasonable development time.
Well, I wrote a Turing Machine. How did I know it worked?
And yes, I'm quite aware of what high level programming brings over low
level programming. I've been programming in Assembly since 1985, Pascal
since 86, C since 90. Thank you very much.
> >> Global conventions=a one fixed convention for all routines. e.g. C argument
> >> passing.
> > I would think using a single calling convention would speed things up, if
> > only to avoid the gyrations needed to massage the parameters around (or glue
> > code).
> Uh ? Look at assembly generated by C compilers, and see who's doing
> a lot of gyrations and parameter massaging. Procedure-specific calling
> conventions could be very easy to implement for the compiler, and would
> reduce all that overhead.
Check out Watcom.
-spc (Will play Devil's Advocate for food)