From alangrimes@starpower.net Sat, 31 Jul 1999 18:54:42 -0700 Date: Sat, 31 Jul 1999 18:54:42 -0700 From: Alan Grimes alangrimes@starpower.net Subject: so-called Turing-Equivalence Yes there are two classes of turing machines A> Turing machines that are task specific. B> Universal Turing machines. The latter is what you really want. =) WARNING: THERE HAVE BEEN SOME DRASTIC CHANGES TO THE LIST!!! you have to edit the hell out of your to line in your response... =\ -- The only company more evil than Microsoft is SAMS publishing. The former publishes parts of their books. (hidden APIs) The latter doesn't publish their books at all. =( Case in point: ISBN 0672306557 users.erols.com/alangrimes/ From I+fare+WANT@tunes.NO.org.SPAM 01 Aug 1999 00:46:32 +0200 Date: 01 Aug 1999 00:46:32 +0200 From: Francois-Rene Rideau I+fare+WANT@tunes.NO.org.SPAM Subject: so-called Turing-Equivalence >: Tim Bradshaw on comp.lang.lisp >>> Another formalism of equivalent power to a UTM is the lambda calculus >>> Interestingly this formalism gave rise to an obscure and little-used >>> family of programming languages, characterised mostly by their extreme >>> slowness, unsuitability for any practical task, and the requirement >>> for special hardware to run them with any degree of efficiency. >> I'm _sick_ of hearing such a meaningless statement >> of "being of equivalent power to a UTM" repeated over and over again >> as a bad excuse for bad programming language design. > I think you may be getting confused. My article was in a thread > asking a rather obscure and largely theoretical question about > structure editors. It wasn't about programming language design. But as your full quote shows, you specifically used "Turing-equivalence" as a concept meant to be relevant to language design, and as your full post shows (not repeated here), as a concept meant to be relevant to the "expressive power" of programming languages. > You may also perhaps have missed the fact that this last little aside > to which you seem to have taken such offence was a joke. I haven't missed the intended light tone of the remark; I haven't missed either the implied acceptance of the widely replicated but profoundly erroneous truism, that Turing-equivalence is 1) applicable _at all_ to tell anything about programming systems _with I/O_ 2) the best criterion that Computer Science can say about such systems (with the implication that we must resort to superstition to do better) "To argue that gaps in knowledge which will confront the seeker must be filled, not by patient inquiry, but by intuition or revelation, is simply to give ignorance a gratuitous and preposterous dignity...." -- H. L. Mencken, 1930 >> I question the meaningfulness of your term "Turing-equivalence". >> What definition of it do you use, if any? > > I think the definition in Boolos & Jeffrey `Computability and Logic' > is as good as any. That is, _not good at all_, considering programming of tasks _with I/O_. > If I recall, they give quite a nice exposition of > the various equivalent notions there (perhaps not talking about lambda > calculus though). Exposition that you seemingly didn't quite grok. Equivalence is _always_ "up to" some transformations. If you don't understand what these transformations are about, when and where they do apply or not, then you don't understand what the equivalence is about. [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! 
http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] The limit between philosophy and politics is when you have to choose your friends. -- adapted from Karl Schmidt From I+fare+WANT@tunes.NO.org.SPAM 01 Aug 1999 00:59:27 +0200 Date: 01 Aug 1999 00:59:27 +0200 From: I+fare+WANT@tunes.NO.org.SPAM I+fare+WANT@tunes.NO.org.SPAM Subject: so-called Turing-Equivalence >: Stig Hemmer on comp.lang.lisp > Turing-equivalent == being able to solve [exactly] the same problems > as a Turing Machine. A completely pointless definition, since it forgets the most important point about equivalence, which is "up to what transformations?". If you accept no transformation at all, then trivially, only Turing Machines are equivalent to Turing Machines. If you allow "any" transformation, then as trivially, everything is Turing-equivalent, including your neighbour's mother's socks. > A Turning Machine has infinite memory, which means that no physical > machine can be Turing-equivalent. So, this is a purely theoretical > concept. A "concept" has infinitely precise extent, which means that no physical phenomenon can be a concept. So yours is a purely theoretical statement. >> Do these transformation correspond to anything _remotely_ meaningful >> *in presence of interactions with the external world (including users)*? > I believe it can be made meaningful(well-defined), though trying to > give an exact definition would be beyond the scope of this article. Trying to give an exact definition would be _quite_ within the scope of the current discussion. Oh, certainly, you could contort some meaningful definition (and there is not a One True One) of Turing-equivalence among pure systems to kind of work on systems with external interactions, but you would end up with a completely irrelevant contorted concept. > It is, however, not very interesting. You miss the whole point. > The problem isn't the external world really. It is rather that > Turing-equivalence say nothing about > - ease of programming. A big issue in the real world. > - execution speed. Another big issue in some contexts. This "real" world of yours is but the "external" world, to the computing system. And speed can very well be formalized within an interaction framework, too. So that 1) The problem IS about interactions with the external world 2) Since (badly taught) concepts of Turing-equivalence are _conspicuously_ not applicable to systems with I/O, any claim that they do not provide data about programming systems with I/O shows deep inadequacy not of the many concepts of Turing-equivalence in general, but rather of the bogus understanding of people about these concepts. [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] The degree of civilization in a society can be judged by entering its prisons. -- F. Dostoyevski From I+fare+WANT@tunes.NO.org.SPAM 01 Aug 1999 01:13:41 +0200 Date: 01 Aug 1999 01:13:41 +0200 From: I+fare+WANT@tunes.NO.org.SPAM I+fare+WANT@tunes.NO.org.SPAM Subject: so-called Turing-Equivalence >: Vassil Nikolov in comp.lang.lisp >> *in presence of interactions with the external world (including users)*? > > I am somewhat curious why you mention interactions. 
One essential > feature of algorithmic machines such as a Turing machine is that > there is no interaction with the external world as far as the > execution of the algorithm is concerned (supplying the initial data > (the initial contents of the tape) and collecting the result (the > final contents of the tape) is outside the scope of the theory of > algorithms). Therefore I don't see any meaning in relating > Turing-equivalence to interactions with the external world. I'm quite glad to see someone remember what Turing machines and Turing equivalence are all about. One of my points is _precisely_ that claims that "being Turing-equivalent is not enough for a programming language", or that "(Theoretical) Computer Science does not teach us anything about language design", are utter nonsense; for Turing-equivalence conspicuously does NOT apply to programming languages with any I/O, and it is certainly not the end-all of what theoretical computer science has to say about language design. And another point was that people use the term "Turing-equivalence" without understanding at all what it is about (never mind the fact that, even in common use within Algorithmic Information Theory, there are _many_ _different_ concepts of "Universal" machines). Optimae Salutationes (?), [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] There is no excuse for a programming language without clean formal semantics. For even if your language is not designed for formal reasoning by computers, there will still be humans who'll have to reason about programs.
From I+fare+WANT@tunes.NO.org.SPAM 04 Aug 1999 00:06:21 +0200 Date: 04 Aug 1999 00:06:21 +0200 From: I+fare+WANT@tunes.NO.org.SPAM I+fare+WANT@tunes.NO.org.SPAM Subject: Non-linear continuations considered harmful
Shriram Krishnamurthi writes: > You are right to point out that adding continuations to a language > messes with the standard model of compilation that many users might > have in their heads. dynamic-wind has this effect also. > But when you do catch your breath, you may want to look at Dybvig, Hieb and > Bruggeman's PLDI 1990 paper on implementing call/cc (as well as > Clinger, Hartheimer and Ost's 1988 paper on the same topic). Thanks for the tip! > The > reality isn't quite as dismal as you seem to be making it out to be. > (I say "seems to be" because, frankly, I don't have the time to read > this post with the care it probably deserves.) Well, after thinking about it, I see many things that I may have distorted in my too long post. I should have taken more time, to make it shorter. Anyway, as a Scheme implementer yourself, you probably know much more than me about the ins and outs of continuations, activation frames, stacks and heaps. There remain the final questions in my post: has anyone added to Scheme (or another LISP) explicit support for linear objects (and hence linear continuations), especially in a concurrent environment? How to otherwise manage affordable access (and consistent semantics) for variables shared by concurrent continuations? How to otherwise segregate internal (private) data from external (shared) data when duplicating objects and continuations, for purposes of migration, persistence, retry, backtracking, etc.? [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project!
http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] No matter what language you use, a Cunning Compiler(TM) will be able to find an efficient implementation for whatever apparently difficult problem you specify. However, a Cunning Compiler(TM) is itself an AI-complete problem.
From I+fare+WANT@tunes.NO.org.SPAM 03 Aug 1999 18:48:45 +0200 Date: 03 Aug 1999 18:48:45 +0200 From: fare@tunes.org I+fare+WANT@tunes.NO.org.SPAM Subject: Non-linear continuations considered harmful
Fernando Mato Mira writes: > Man, is RScheme really the one for me. > If only native threads were there already.. The problem is that, RScheme being Scheme, it has a lot of problems ahead with threads, GC, and first-class continuations. Ever since I learnt Scheme, I was seduced by its attempt at defining a clean language that would do the "Right Thing(TM)", yet be minimal, and trying to preserve the LISP way of providing one dynamic reflective environment. There are still lots of things I dislike about Scheme (guessing what I dislike about it is left as an exercise to the reader), yet, until very recently, I thought it would be the best "Core Lisp" to use while designing and implementing a distributed persistent system. But as I tried to see how things could actually be done at the low level, I was very disappointed with Scheme, and decided that overall, Scheme was not remotely the Right Thing, and that Worse is Worse. Once upon a time, after hearing about tail-call optimization, about LAMBDA being the ultimate imperative, etc., and about Scheme requiring implementations to be "properly tail-recursive" (whatever that means), I thought that indeed, Scheme allowed one to express inductive computation schemes cleanly and equivalently as either iterative or (tail-)recursive idioms. Consider the following constructs:

(let loop ()
  ...
  (if (not (zero? n))
      (begin (set! n (- n 1)) (loop))))

(let loop ((n n))
  ...
  (if (not (zero? n))
      (loop (- n 1))))

I _naively_ believed that the above loops (where ... did not modify n) were essentially equivalent, the former being a "lower-level" version of the latter, into which the latter would actually compile, with the (loop) being a mere goto, etc. I sincerely believed that Scheme made iteration and tail-recursion equivalent. How stupid I was! Then, I tried to actually implement a Scheme compiler, and I was struck by reality: Scheme makes the above looping constructs deeply different, and it prevents the immediate reuse of stack/heap activation frames with a simple goto, and this is all due to non-linear continuations that may appear with call/cc: call/cc allows one to capture mutable variables that may be "shared" among several instances of continuations. As long as a continuation is linear (i.e. used at most once), this doesn't make much difference, and indeed the two code snippets above are equivalent; for variables are "shared" by only one continuation/thread, so whether you modify an existing variable, or create a new one and discard the old one, the result is the same and the two techniques are indistinguishable, so one may be transformed into the other, for the sake of optimization. But as soon as you can reenter the continuation, which you must assume is possible unless you can prove that the continuation won't escape, then havoc ensues and you can see the difference: mutated variables are shared among continuations, whereas new variables that mask previous variables of the same name are not shared with other continuations.
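Here is a distilled sketch of that difference. The helper names below are made up just for this sketch; the complete test program, which also distinguishes top-level from "normal" continuations, is included at the end of this message.

(define saved-k #f)

(define (capture-once!)                 ; save the current continuation, the first time only
  (call-with-current-continuation
    (lambda (k) (if (not saved-k) (set! saved-k k)))))

(define (countdown-with-set! n)
  (set! saved-k #f)
  (let loop ()
    (capture-once!)                     ; the continuation closes over the single location of n
    (display n) (display " ")
    (if (positive? n)
        (begin (set! n (- n 1)) (loop)))))

(define (countdown-with-rebind n)
  (set! saved-k #f)
  (let loop ((n n))
    (capture-once!)                     ; the continuation closes over this iteration's binding of n
    (display n) (display " ")
    (if (positive? n)
        (loop (- n 1)))))

;; (countdown-with-set! 2)   prints "2 1 0"; re-entering with (saved-k #f)
;;                           prints "0", since the shared location now holds 0.
;; (countdown-with-rebind 2) prints "2 1 0"; re-entering with (saved-k #f)
;;                           prints "2 1 0" again: the captured binding was never mutated.
;; (What happens once the re-entered loop returns depends on how the
;; implementation treats top-level continuations; see the full program below.)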
I wrote a small program to demonstrate the difference, and see if actual Scheme implementations did the Right Thing(TM) about correct implementation of non-linear continuations. Source code included below. All Scheme implementations that were installed on my Debian machine (guile, MIT Scheme, gambc, elk, RScheme, scheme->c) correctly implemented the Scheme specification in presence of non-linear continuations (well, as far as RScheme and scheme->c are concerned, I did not manage to get the compilers working, so I only tested the interpreters). This means that Real Schemers(TM) (i.e. those having hacked an implementation) are already well aware of the issue. Now, what are the implications of this semantic issue on implementation? It means that activation frames cannot in general be reused, and that "proper tail-recursion" is a much trickier concept than naive people like me thought it was, so that the kind of tail-call optimization that we naive people think makes tail-recursion the "same" as iteration doesn't happen, or at least not in the simple way we thought. People who grew up with C, Pascal, and other Algol descendants, or perhaps even with LISP and ML, may think it natural that a called function, at least conceptually, allocate an "activation frame" (on stack or on heap), where it stores its arguments and local variables (possibly merged from several enclosed LET bindings), as well as a "return continuation", made of a return address and the activation frame of the calling function. We think it natural that (mutable) variables be stored directly in these frames. Well, that can't be, because frames may be entered many times, and may also be captured at many different points of execution (at every uncontrolled function call), at times when different sets of local variables have been activated or initialized. Frames must thus only contain write-once data, and must not contain mutable variables directly, but only indirect pointers to variables allocated and shared on the heap. Furthermore, capture of a continuation consists either in pointing to a frame, or in copying frames. In the former case, frames must be completely read-only (never reused) after being atomically created (i.e. w/o possibility of continuation capture), which is quite tricky, since argument values need to be accumulated somewhere before being sent together to the body of a lambda or a let; this makes closure sharing practically impossible, although it is possible to store local variables in callee-save registers (which also makes continuation capture expensive); frames being read-only means a new frame must be created at every single uncontrolled call point so as to allow capture of a distinct continuation. In the latter case of continuation capture as frame copy, we must distinguish two notions of frames, "active" frames (maybe on a stack rather than the heap) and "passive" frames (on the heap). Capture consists in making a copy of the currently active frames into passive ones. Active frames are recyclable during tail-recursion (which can only be done either if the target frame has the same size as the source frame, or if a stack is used), whereas passive frames are strictly read-only.
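Here is a toy model of the two capture strategies, with frames played by plain Scheme vectors (the names below are invented for this sketch and do not come from any actual implementation):

(define (capture-by-pointer stack)        ; share the frames: they must then stay read-only
  stack)

(define (capture-by-copy stack)           ; copy the frames: the originals may be recycled
  (map (lambda (frame) (list->vector (vector->list frame))) stack))

(define (recycle-top! stack new-value)    ; reuse the topmost frame, as a tail call would
  (vector-set! (car stack) 0 new-value)
  stack)

(define stack (list (vector 'n 3)))
(define k1 (capture-by-pointer stack))
(define k2 (capture-by-copy stack))
(recycle-top! stack 'clobbered)
;; (vector-ref (car k1) 0)  =>  clobbered   ; the shared frame was mutated under k1
;; (vector-ref (car k2) 0)  =>  n           ; the private copy is unaffected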
Constantly creating new frames is so slow that people do prefer copying frames, especially since continuations are seldom captured, at least in "usual" programming style; however, the former method does provide bounded-resource ("real-time") guarantees, and some implementations get the "best of both worlds" by having a local stack cache of active frames that eventually gets flushed into passive heap frames before getting too long. Having a "local cache" to accumulate values, be it a stack and/or a body of callee-save registers, is a necessity: according to the RnRS specification, multiple argument values to a lambda (or a let) must be thus accumulated in an implementation-defined order, before being sent to the continuation; now, if a continuation is captured in the middle of argument computation, the remaining argument values must not be shared among multiple restarts of the continuation, which means either "passive" frames are generated at a monstrous rate, or we have at least one "active" frame. Callee-save registers can be seen as a lazy variant of the "save activation frame at every call site" technique, for part of the frame's data. Again, mutable variables cannot be stored directly in such registers, unless it can be proven that they won't be captured in a non-linear continuation. Another catch with capturing a frame in the middle of initializations is that when recycling a frame, local variables from the previous frame should be reset to non-memory-consuming values (or popped off, if it's a stack) before any possibility of capture, lest they leak memory forever. All in all, there is much less freedom of implementation than one could naively think. With so little freedom, Scheme _as a dynamic programming system_ must be considered more of a low-level language than of a high-level language; just a low-level language for some virtual machine much more elaborate (and in many respects more pleasing) than C or assembly. For instance, although we could naively think that the imperative set! style was lower-level than the pure tail-recurse-with-argument, it so happens that the latter is intrinsically faster, since it only involves messing with the current activation frame and jumping, whereas the former involves not only activation frame handling, but also modifying data on the heap with a read/write barrier. There is an abstraction inversion, whereby what is easier at the low level is made harder at the Scheme level, whereas what is harder at the low level is made easier at the Scheme level. This accounts for an irreducible performance hit when implementing Scheme in a dynamic environment, as compared to implementing Common LISP, or other languages. The pervasive possibility of non-linear continuations through call/cc makes every use of mutable variables much more costly, and prevents many program transformations and the corresponding optimizations, including various forms of lambda-lifting. Since non-linear call/cc makes life so hard for mutable variables, which must be boxed out of the stack, it could have been expected that a language with call/cc would explicitly separate the concepts of variable binding and of mutable reference, as does SML/NJ; in this respect, this lack of orthogonality can be considered a legacy mistake in the Scheme design that ML did not make. Because there is no way to declare special properties of a program, Scheme forces the implementation to always pay the price for the rare general case.
This makes it very important for optimizing implementations to perform as good analyses as possible for variable mutability, object escape, object linearity, etc. Foremost, escaping continuations must be identified. Linear continuations and linear objects must be identified (an object can only be linear if the continuation that uses it is). Mutable variables can be optimized onto the stack in presence of a linear continuation. Now, proving that a continuation will stay linear is difficult: every use of a function stored in shared mutable bindings, especially global bindings, and including standard predefined functions, is a "contamination" point where a continuation may be captured. Contamination may only be avoided by "protecting" use of the function with a binding of unescaping variables to early values of the functions, assuming we can trust the "initial" values of such functions. Higher-order functions that execute functions passed in arguments or in data structures will also have a lot of trouble guaranteeing the linearity of their execution. All in all, optimizing code written in a "normal" style is impossible in presence of separate compilation, incremental compilation, dynamic EVALuation, or otherwise capture of often-used global bindings. In the absence of a way to explicitly declare properties of type, mutability, linearity, escape, etc., only a STAtic Language ImplementatioN may prove that continuations will stay linear and optimize Scheme in a correct way. It is argued that call/cc can be used to implement exceptions (catch and throw) and threads, but these would have been more than satisfied with linear continuations, as the sketch at the end of this message illustrates (also, space-safe threads want a way to reset the cc to null). The case for threads can even be developed: in presence of preemptive multitasking and/or of real multiprocessing (didn't Fernando long for the use of "native threads"?), how will sharing of variables happen? Can a foreign thread non-linearly capture the continuation of another thread at any point, breaking any attempt to optimize code in presence of linear continuations? Must all variable accesses be done on the heap, with locking done at every memory access, and refetching from memory done at every variable reference (C "volatile" variables)? The Scheme way of dodging complexity would be that we indeed pay such a worst-case price everywhere throughout the program. It seems to me that we _must_ acknowledge continuations as being essentially linear objects, although it might be possible to explicitly duplicate a captured linear continuation. Sharing and non-sharing are essential concepts in concurrent languages, and should be revealed to the user, instead of being mismanaged by an ignorant implementation, especially in presence of dynamic computation. Knowing which mutable object is "owned" by which thread and must be copied by value together with the thread, and which object is actually interfaced and should be copied by reference, is an essential point of program design that cannot feasibly be decided a posteriori. How do existing "parallel/concurrent" dialects of Scheme and LISP do it? Does there exist a system with linear typing for concurrent programming (no, I don't mean Clean, unless it allows for communication)? I know of works by H. G. Baker on adding linearity analyses to LISP compilers; are there works on adding linearity _declarations_ to LISP dialects?
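Here is the kind of thing I mean for the exception case: a toy catch/throw built on call/cc, written so that each captured continuation is invoked at most once, i.e. used linearly. The names are made up for this sketch, and error is not standard R5RS, though most implementations provide it.

(define *handlers* '())                  ; a stack of (tag . escape-continuation) pairs

(define (catch tag thunk)
  (call-with-current-continuation
    (lambda (k)                          ; k is used at most once: by throw, or not at all
      (set! *handlers* (cons (cons tag k) *handlers*))
      (let ((result (thunk)))
        (set! *handlers* (cdr *handlers*))
        result))))

(define (throw tag value)
  (let ((entry (assq tag *handlers*)))
    (if entry
        (begin
          ;; pop up to and including the matching handler, then escape, once
          (set! *handlers* (cdr (memq entry *handlers*)))
          ((cdr entry) value))
        (error "no handler for tag" tag))))

;; (catch 'oops (lambda () (+ 1 (throw 'oops 42))))  =>  42
;; The continuation captured by catch never outlives the dynamic extent of the
;; catch and is never re-entered, which is exactly the linear discipline.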
[ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] No man is an Iland, intire of it selfe; every man is a peece of the Continent, a part of the maine; if a Clod bee washed away by the Sea, Europe is the lesse, as well as if a Promontorie were, as well as if a Mannor of thy friends or of thine owne were; any mans death diminishes me, because I am involved in Mankinde; And therefore never send to know for whom the bell tolls; It tolls for thee. -- John Donne, "No Man is an Iland"
------>8------>8------>8------>8------>8------>8------>8------>8------>8------
(define call/cc call-with-current-continuation)

(define (new-counter)
  (let ((count 0))
    (lambda () (set! count (+ 1 count)) count)))
(define count (new-counter))

(define (show . l) (for-each display l) (newline))

(define kk #f)
(define block? #f)

(define (e)
  (show "escape #" (count))
  (call/cc (lambda (k) (if (not kk) (set! kk k)))))

(define (init) (set! kk #f) (set! block? #f))

(define (l1 e n)
  (let loop ()
    (e)
    (show "count down: " n)
    (if (not (zero? n))
        (begin (set! n (- n 1)) (loop)))))

(define (l2 e n)
  (let loop ((n n))
    (e)
    (show "count down: " n)
    (if (not (zero? n))
        (loop (- n 1)))))

(define (f l n) (init) (l e n))
(define (f1 n) (f l1 n))
(define (f2 n) (f l2 n))

(define (blockk)
  (if block?
      (display "kontinuation blocked\n")
      (let ((k kk))
        (set! kk #f)
        (set! block? #t)
        (k #f))))

(display "\nside-effected variable, top-level continuation\n")
(f1 3)
(blockk)

(display "\npure recursion, top-level continuation\n")
(f2 3)
(blockk)

(display "\nside-effected variable, normal continuation\n")
(begin (f1 3) (blockk))

(display "\npure recursion variable, normal continuation\n")
(begin (f2 3) (blockk))

From ebiederm+eric@ccr.net 04 Aug 1999 21:47:14 -0500 Date: 04 Aug 1999 21:47:14 -0500 From: Eric W. Biederman ebiederm+eric@ccr.net Subject: Non-linear continuations considered harmful
I+fare+WANT@tunes.NO.org.SPAM (fare@tunes.org) writes: > > I _naively_ believed that the above loops (where ... did not modify n) > were essentially equivalent, the former being a "lower-level" version > of the latter, into which the latter would actually compile, with the > (loop) being a mere goto, etc. I sincerely believed that Scheme made > iteration and tail-recursion equivalent. How stupid I was! > > Then, I tried to actually implement a Scheme compiler, > and I was struck by reality: Fare begins to see the light. First consider the macro-expanded version of your code, i.e. with variables and iteration done completely with lambda:

(lambda ()
  ((lambda (n)
     ((lambda (loop) (loop loop))
      (lambda (the-loop)
        (if (not (zero? n))
            (begin (set! n (- n 1)) (the-loop the-loop))
            #f))))
   (* 1024 1024 1024 1024)))

(lambda ()
  ((lambda (loop) (loop loop (* 1024 1024 1024 1024)))
   (lambda (the-loop n)
     (if (not (zero? n))
         (the-loop the-loop (- n 1))
         #f))))

With everything expanded many more functions appear and the scope of n becomes clear. In the version with assignment, its longer life is simply because it is in a larger scope. Fare, activation frames do not need to be read-only. Using a stack in Scheme to hold activation frames is a practical optimization but because of (call/cc) and other closures it is tricky. However for a simple implementation consider placing all of the activation frames on the garbage collected heap. This frees the compiler from worrying about the lifetime of activation frames.
Note: It is still important in this context to do proper tail recursion optimization, otherwise the examples above will die due to lack of heap space. Basically, proper tail recursion means that when b is the final function called by a, then when b is activated there are no references to a's activation frame in the closure b is passed. Allocation of activation frames on a stack, and combinding stack frames are important optimizations. Allowing good cache hit rates, and better memory usage. Unfortunantely I just know the basics. Eric
From fare@tunes.org Fri, 6 Aug 1999 14:58:58 +0200 Date: Fri, 6 Aug 1999 14:58:58 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Non-linear continuations considered harmful
> Fare begins to see the light. :-~ > With everything expanded many more functions appear and the scope of > n becomes clear. In the version with assignment, its longer life > is simply because it is in a larger scope. Sure. > Fare, activation frames do not need to be read-only. Well, they do if they are on the stack and continuation capture consists in stack-copying. Or more precisely, read-only and write-once variables may be put on the stack and destructively recycled/updated, but read/write variables must be on the heap. Alternately, read/write variables may be put on the stack iff they are never used with set!, but always with fluid-let instead (or fluid-set! ?). > However for a simple implementation consider placing all of the activation > frames on the garbage collected heap. This frees the compiler from worrying > about the lifetime of activation frames. There is no semantic problem with having activation frames on the heap, only the fact that the possibility of _non-linear_ continuation capture prevents a whole lot of optimizations with respect to merging frames (and a globally copied stack can be considered as a global merging of all frames). But indeed, I wasn't clear on that topic in my previous message (nor was it fully clear in my mind at the time). > Note: It is still important in this context to do proper tail recursion > optimization, otherwise the examples above will die > due to lack of heap space. In this context, tail-recursion optimization consists not in reusing frames (which is in general impossible, due to non-linear capture), but just in short-circuiting the value of the continuation in the new frame, so as to return directly to the caller's caller, instead of returning to the caller, who would in turn return an identical result to its caller. > Basically, proper tail recursion means that when b is the final function > called by a, then when b is activated there are no references to a's > activation frame in the closure b is passed. Yup. > Allocation of activation frames on a stack, and combinding stack > frames are important optimizations. Allowing good cache hit rates, > and better memory usage. Unfortunantely I just know the basics. combining? Anyway, it appears that things become very tricky in presence of non-linear continuation capture... [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] NOTE: No warranties, either express or implied, are hereby given. All software is supplied as is, without guarantee.
The user assumes all responsibility for damages resulting from the use of these features, including, but not limited to, frustration, disgust, system abends, disk head-crashes, general malfeasance, floods, fires, shark attack, nerve gas, locust infestation, cyclones, hurricanes, tsunamis, local electromagnetic disruptions, hydraulic brake system failure, invasion, hashing collisions, normal wear and tear of friction surfaces, comic radiation, inadvertent destruction of sensitive electronic components, windstorms, the Riders of Nazgul, infuriated chickens, malfunctioning mechanical or electrical sexual devices, premature activation of the distant early warning system, peasant uprisings, halitosis, artillery bombardment, explosions, cave-ins, and/or frogs falling from the sky. From water@tscnet.com Fri, 13 Aug 1999 09:33:14 -0700 Date: Fri, 13 Aug 1999 09:33:14 -0700 From: Brian Rice water@tscnet.com Subject: An Arrow Logic Introduction from the research community This is "A Crash Course in Arrow Logic", which was included in a book as the first chapter to help explain the notions involved and arrow logic's purpose and connections with other, more familiar logics. It's a very accessible read, although I only just recently located the electronic version, since my research and writing efforts are so hectic. Consider this a prelude to my new paper detailing the theory of Reflective Arrow Logic. ftp://ftp.phil.uu.nl/pub/logic/PREPRINTS/preprint107.ps.Z Enjoy! :) From iepos@tunes.org Fri, 13 Aug 1999 21:16:10 -0700 (PDT) Date: Fri, 13 Aug 1999 21:16:10 -0700 (PDT) From: iepos@tunes.org iepos@tunes.org Subject: paradoxes and intuitionist logic i've been doing a bit of thinking about the paradoxes and haven't come to any really good answers and am wondering if any of you have. one of the most paradoxes occurs when reasoning on a statement that says "this statement is not true". One first supposes that it is true; then it follows that it is not true. This is a contradiction, so the assumption must have been not true. So, the statement is not true; but this is precisely what the statement says, so we have admitted the statement. So there is an inconsistency in this logic. Unfortunately, the inconsistency is not caused merely by the funny nature of the English language; the argument can be formalized in a fairly simple sound-appearing logic using the Y combinator (or lambda term) to achieve the self-reference. When formalized, the argument usually uses a form of "reductio ad absurdum", a reasoning pattern that says that if something leads to a contradiction then it is not true. Some people have tried to avoid the problem by rejecting reductio ad absurdum along with the excluded middle, double-negation, and deMorgan's laws; these kinds of logics are called "intuitionist" I think. these logics usually retain the deductive theorem, or a set of axioms like these (I use "->" to represent implication (if-then)): I: x -> x B: (x -> y) -> (z -> x) -> z -> y C: (x -> y -> z) -> y -> x -> z W: (x -> x -> y) -> x -> y K: x -> y -> x some people seem to think that these logics solve the problem of paradoxes (Fare i think hints this in his lambda-nd paper), and I think that they do actually, _but not when a lambda (with a beta-reduction rule at least) or an equivalent set of combinators is admitted_. the fact is, if you have the previously mentioned W and I axioms along with lambda and modus ponens, then you're toast. 
anything can be proven, in this way (note the use of the Y combinator, which can be written as a lambda term)... 1. (Y x.x -> z) -> (Y x.x -> z) [by I rule] 2. (Y x.x -> z) -> (Y x.x -> z) -> z [by reduction of the second Y on #1] 3. (Y x.x -> z) -> z [by W rule, modus ponens on #2] 4. ((Y x.x -> z) -> z) -> z [by reduction of the Y on #3] 5. z [by modus ponens on #3 and #4] the argument is similar to the following: consider the statement "if this statement is true, then Z". Suppose it is true, then the condition would be met and Z follows; so if the statement is true, then Z. This is precisely what the statement says, so Z follows by modus ponens. it seems we must give up the deductive theorem (f leads to g, therefore 'f -> g') to remove the inconsistency. anyway, that seemed to be curry's conclusion in the first volume of his old book. hmmm... i'd like to know what others think now. one other note. I noticed that conjunction ("&") and disjunction ("|") have nice definitions in terms of universal quantification ("all") and implication: x&y = all z.(x -> y -> z) -> z x|y = all z.(x -> z) -> (y -> z) -> z the usual properties of & and | quickly follow. also, we define negation in this way: ~x = all z.x -> z now, consider '~x | x' (the excluded middle). Written in the terms above, we have: all z.((all f.x -> f) -> z) -> (x -> z) -> z now, the W rule follows from this excluded middle in a fairly simple system... we can instantiate the 'z' to 'x -> z' (for arbitrary 'z') and get: ((all f.x -> f) -> x -> z) -> (x -> x -> z) -> x -> z the head of the implication is an instance of one of the axiom schemas of a sensible system (universal instantiation), so '(x -> x -> z) -> x -> z' can be derived for arbitrary 'z' if 'x' obeys the excluded middle. this seems interesting because the W rule is the one that seems to be causing the problems, but it can be derived by very sound means if the subject obeys excluded middle. anyway... chew on this i guess. i'd like to know what you guys think... - iepos From core@suntech.fr Sat, 14 Aug 1999 14:47:39 +0200 Date: Sat, 14 Aug 1999 14:47:39 +0200 From: Emmanuel Marty core@suntech.fr Subject: Tunes OS review update Hello all, This is just to announce that the OS review has been updated; no major breakthrough in presentation, but new links, a new "dead projects" section as an attempt not to pollute the actually alive ones :), and a lot of new links. This will really need a database of sorts sometime.. URL is obviously still http://www.tunes.org/Review/OSes.html That's all :) -- Emmanuel From tcn@clarityconnect.com Sat, 14 Aug 1999 11:44:01 -0400 Date: Sat, 14 Aug 1999 11:44:01 -0400 From: Tom Novelli tcn@clarityconnect.com Subject: Tunes OS review update On Sat, Aug 14, 1999 at 02:47:39PM +0200, Emmanuel Marty wrote: > Hello all, > > This is just to announce that the OS review has been > updated; no major breakthrough in presentation, but new links, > a new "dead projects" section as an attempt not to pollute the > actually alive ones :), and a lot of new links. I like that.. you're net getting our hopes up about some OS, only to find it's dead. :) > This will really need a database of sorts sometime.. > > URL is obviously still http://www.tunes.org/Review/OSes.html Hey, I'll keep that in mind while I'm checking out *nix database systems. Now that I'm doing databases quite a bit at work, this looks easy. Maybe we could use Postgresql with a CGI script for searching/browsing, with maintenance directly through Postgresql. Is a database really necessary though? 
What can you do with one that you can't do without one? -- Tom Novelli http://bespin.tunes.org/~tcn From cpe2@gte.net Sun, 15 Aug 1999 14:31:48 -0400 Date: Sun, 15 Aug 1999 14:31:48 -0400 From: Ken Evitt cpe2@gte.net Subject: paradoxes and intuitionist logic Epimenides Paradox, also known as the liar paradox or the paradox of self-reference is attributed to Epimenides, a Cretan who made one immortal statement: "All Cretans are liars." A sharper version is simply "This statement is false." This paradox is a result of self-reference. And the issue of self-reference within an axiomatic formal calculus and its consequences have been laid out neatly by Kurt Godel and are formalized by his infamous result: Godel's Theorem. And what Godel showed was that any formal system is a member of one of two groups: either less complex than number theory, or at least as complex as number theory. And here's the rub: if a formal system is less complex than number theory, then it won't be very useful, and if it is as complex as number theory, then it is incomplete because of the system's ability to refer to itself. So it gets you coming and going; either way your formal system is incomplete because there are always statements, both true and false, that cannot be proven either way within any particular formal system. I definitely suggest you read _Godel, Escher, Bach_ for a lengthy discussion of essentially this issue. -Ken Evitt From iepos@tunes.org Sun, 15 Aug 1999 13:11:36 -0700 (PDT) Date: Sun, 15 Aug 1999 13:11:36 -0700 (PDT) From: iepos@tunes.org iepos@tunes.org Subject: paradoxes and intuitionist logic > Epimenides Paradox, also known as the liar paradox or the paradox of > self-reference is attributed to Epimenides, a Cretan who made one immortal > statement: "All Cretans are liars." A sharper version is simply "This > statement is false." > > This paradox is a result of self-reference. And the issue of self-reference > within an axiomatic formal calculus and its consequences have been laid out > neatly by Kurt Godel and are formalized by his infamous result: Godel's > Theorem. And what Godel showed was that any formal system is a member of one > of two groups: either less complex than number theory, or at least as > complex as number theory. And here's the rub: if a formal system is less > complex than number theory, then it won't be very useful, and if it is as > complex as number theory, then it is incomplete because of the system's > ability to refer to itself. > > So it gets you coming and going; either way your formal system is incomplete > because there are always statements, both true and false, that cannot be > proven either way within any particular formal system. yes. i've heard of godel's theorem, but i'll admit i don't really understand it. it is clear to me that if a system admits self-referential sentences and provides a way for the system to talk about its own theorems, then there will either be unprovable true sentences or provable untrue ones: the sentence with the meaning "this sentence is unprovable" will be such a sentence. however, i don't quite see how this problem can happen in simple number theories, ones that don't even provide a way for self-referential sentences much less a way to reference the system's own theorems. sure, one could extend the system by assigning arbitrary numbers to represent the system's theorems and proofs ("godel numbering"?) but this seems to be quite an extension to me, and it results in a new system that deserves to be incomplete. 
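to make "godel numbering" concrete, here is a toy sketch (my own illustration, just for this message; the encoding and names aren't from any particular text): a formula, given as a list of small symbol codes, gets the number 2^c1 * 3^c2 * 5^c3 * ..., which is injective by unique factorization, so facts about formulas become facts about numbers.

(define primes '(2 3 5 7 11 13 17 19 23 29))

(define (godel-number codes)              ; codes: a list of positive integers
  (let loop ((codes codes) (ps primes) (acc 1))
    (if (null? codes)
        acc
        (loop (cdr codes) (cdr ps)
              (* acc (expt (car ps) (car codes)))))))

;; e.g. with symbol codes 0 -> 1 and = -> 2, the formula "0 = 0" becomes
;; (godel-number '(1 2 1))  =>  2^1 * 3^2 * 5^1 = 90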
but anyway, i must be missing something, because godel's result seems to be well-respected. i'd be interested in seeing a simple victim number theory. > > I definitely suggest you read _Godel, Escher, Bach_ for a lengthy discussion > of essentially this issue. hmm. well, i've heard of the book but never read it. from what i've heard, it is entertaining but doesn't really give you much specific information... but i might read it anyway someday ... anyway. self-reference is an interesting thing. it seems to have the surprising quality of popping out of nowhere when you least expect it.... like the Y combinator, which can be built from simple innocent bases. anyway, i'm still thinking on my original question (well, i guess it was sort of a question): what are some good ways of formulating the class of statements (propositions)? it seems that some things don't have the property of the deductive theorem, and some don't have the excluded middle... well, saying that they do not is not correct, i think; perhaps it would be more correct merely to say that assuming the properties for some things leads to problems. anyway, i'm sort of interested in the relationship between the deductive theorem and the excluded middle... > > -Ken Evitt > well... thanks for your reply. i'm still interested in hearing from others. i really should start working on that system i mentioned a while ago. i'm horrible about starting projects and then not doing anything at all with them (:-)). actually, i've been doing some thinking about how I'll formalize the notion of time. i can't admit a word such as "now" into the system, because such a word would necessarily take on different meanings in different contexts, which would ruin the purity of the language; yet, doing things without such a word is tricky. anyway, the world will be modeled as an ordered set of states ("times", in a high-level sense, i'm not talking about a sequence of machine states); this may not really be a good model, given discoveries by Einstein (which i know quite little about), but i think it will do. this is meant to be a practical system... that is, i mean to actually implement it sometime, not just philosophize about it forever. - iepos From tcn@clarityconnect.com Sun, 15 Aug 1999 16:35:10 -0400 Date: Sun, 15 Aug 1999 16:35:10 -0400 From: Tom Novelli tcn@clarityconnect.com Subject: paradoxes and intuitionist logic On Sun, Aug 15, 1999 at 02:31:48PM -0400, Ken Evitt wrote: > Epimenides Paradox, also known as the liar paradox or the paradox of > self-reference is attributed to Epimenides, a Cretan who made one immortal > statement: "All Cretans are liars." A sharper version is simply "This > statement is false." How about "this statement is a waste of time"...? ;) Maybe Godel felt it worthwhile to work out his proof because nobody had *mathematically* shown there's no such thing as perfection. I think he was trying to say "Don't worry, you'll never get it right." Tom From fare@tunes.org Mon, 16 Aug 1999 02:44:25 +0200 Date: Mon, 16 Aug 1999 02:44:25 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: microkernels Dear Prof Shapiro, thank you for your e-mail. > Whatever the "truth" about microkernels may be, > a "glossary" isn't the right place for the discussion. Thanks for your most adequate remark. It seems our glossary has grown into something much too much polemical, and should be reorganized into a knowledge-base with raw facts being well delimited from associated opinions. 
We haven't invested in the technology necessary to develop such a knowledge base, having waited for TUNES to appear and provide this technology for too long. Do you have any suggestion? > At the very least the entry should give a definition > before it engages in polemics. > > How about adding a definition? Ok. Since I wasn't fully satisfied with the entry in FOLDOC, I wrote the one below. What about it? Can you either correct it, or point me to an existing satisfying definition? Now, as far as polemics go, what is your take on microkernels? Maybe you have pointers to articles that support or contradict some of my opinions, and that I may link to? Appended to this message is the text I prepended to the current entry. Best regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] My opinions may have changed, but not the fact that I am right. PS: I invite Tunes members to proof-read the current entry and the below text as well. My questions are also to you, guys. Be careful not to include Prof. Shapiro in replies that only concern Tunes. ------>8------>8------>8------>8------>8------>8------>8------>8------>8------ microkernel (also abbreviated µK or uK) is an approach to operating systems design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space".
  • Microkernels were invented as a reaction to traditional "monolithic" kernel design, whereby all system functionality was put in one static program running in a special "system" mode of the processor. The rationale was that it would bring modularity to the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and more performant.
  • The prototypical microkernel is Mach, originally developed at CMU, and used in some free and some proprietary BSD Unix derivatives, as well as at the heart of GNU HURD. MICROS~1 Windows NT is said to originally have been a microkernel design, although one that with time and for performance reasons was overinflated into a big piece of bloatware that leaves monolithic kernels far behind in terms of size. The latest evolutions in microkernel design have led to things like the "nano-kernel" L4, or the "exokernel" Xok.
  • At one time in the late 1980s and early 1990s, microkernels were the craze in official academic and industrial OS design, and anyone not submitting to the dogma was regarded as ridiculous. But microkernels failed to deliver on their many promises, in terms of either modularity (see Mach servers vs Linux modules), cleanliness (see Mach horror), ease of debugging (see HURD problems), ease of dynamic modification (also see HURD vs Linux), customizability (see Linux vs Mach-based projects), or performance (Linux vs MkLinux, NetBSD vs Lites). This led some microkernel people to compromise by having "single-servers" that have all the functionality, and pushing them inside "micro"kernel-space (WindowsNT, hacked MkLinux), yielding a usual monolithic kernel under another name and with a contorted design. Other microkernel people instead took an even more radical view of stripping the kernel of everything but the most basic system-dependent interrupt handling and messaging capabilities, and having the rest of system functionality in libraries of system or user code, which again is not very different from monolithic systems like Linux that have well-delimited architecture-specific parts separated from the main body of portable code. With the rise of Linux, and the possibility of benchmarking monolithic versus microkernel variants thereof, as well as the possibility of comparing kernel development in various open monolithic and microkernel systems, people were forced to acknowledge the practical superiority of "monolithic" design according to all testable criteria. Nowadays, the microkernel is still the "official" way to design an OS, although you won't be laughed at anymore when you show your monolithic kernel. But as far as we know, no one in the academic world has dared raise any theoretical criticism of the very concept of a microkernel. Here is our take on it.
------>8------>8------>8------>8------>8------>8------>8------>8------>8------
From fare@tunes.org Mon, 16 Aug 1999 01:25:16 +0200 Date: Mon, 16 Aug 1999 01:25:16 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: [shapj@us.ibm.com: ] --/9DWx/yDrRhgMJTb Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit
Another reason why we should move the Tunes pages into a free-form knowledge base or some such: allow association of opinions to symbols in a way that identifies the author without compromising the project at large... [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] We have not inherited the earth from our parents, we've borrowed it from our children.
--/9DWx/yDrRhgMJTb Content-Type: message/rfc822 Received: from zhenghe (localhost) [127.0.0.1] (root) by ZhengHe with esmtp (Exim 3.02 #1 (Debian)) id 11G4eC-0007hM-00; Sun, 15 Aug 1999 20:04:44 +0200 Received: from bespin.tunes.org by localhost with POP3 (fetchmail-5.0.5) for fare@localhost (single-drop); Sun, 15 Aug 1999 20:04:44 +0200 (CEST) Received: from e1.ny.us.ibm.com (e2.ny.us.ibm.com [32.97.182.102]) by bespin.dhs.org (8.9.3/8.9.3/Debian/GNU) with ESMTP id LAA04348 for ; Sun, 15 Aug 1999 11:00:38 -0700 From: shapj@us.ibm.com Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com [9.117.200.22]) by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id OAA340720 for ; Sun, 15 Aug 1999 14:00:19 -0400 Received: from D51MTA03.pok.ibm.com (d51mta03.pok.ibm.com [9.117.200.31]) by northrelay02.pok.ibm.com (8.8.8m2/NCO v2.04) with SMTP id OAA173068 for ; Sun, 15 Aug 1999 14:00:38 -0400 Received: by D51MTA03.pok.ibm.com(Lotus SMTP MTA v4.6.4 (830.2 3-23-1999)) id 852567CE.0062EDD4 ; Sun, 15 Aug 1999 14:00:34 -0400 X-Lotus-FromDomain: IBMUS To: fare@tunes.org Message-ID: <852567CE.0062EC10.00@D51MTA03.pok.ibm.com> Date: Sun, 15 Aug 1999 13:58:28 -0400 Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline X-UIDL: b17d87a5186542b90a65c504c86afb44 Hey, guys. Whatever the "truth" about microkernels may be, a "glossary" isn't the right place for the discussion. At the very least the entry should give a definition before it engages in polemics. How about adding a definition? Jonathan Jonathan S. Shapiro, Ph. D. IBM T.J. Watson Research Center Email: shapj@us.ibm.com Phone: +1 914 784 7085 (Tieline: 863) Fax: +1 914 784 7595 --/9DWx/yDrRhgMJTb-- From shapj@us.ibm.com Sun, 15 Aug 1999 23:10:08 -0400 Date: Sun, 15 Aug 1999 23:10:08 -0400 From: shapj@us.ibm.com shapj@us.ibm.com Subject: microkernels --0__=K2tbEw39lUVRpflaDh1M7nu5mjHO1qF7AogfWfAITHB7RNuysB3kb3NJ Content-type: text/plain; charset=us-ascii Content-Disposition: inline I suggest the following changes in your new text. Change: The prototypical Microkernels is Mach, originally developed at CMU, To Perhaps the best known example of microkernel design is Mach, originally developed at CMU. There were many microkernels prior to Mach, one of which is the predecessor to Chorus. You should completely drop: Microsoft Windows NT is said to originally have been a microkernel design, although one that with time and for performance reasons was overinflated into a big piece of bloatware that leaves monolithic kernels far behind in terms of size. First, it simply isn't true. Dave Cutler (the NT architect) has said over and over again that NT was never a microkernel. The label was applied incorrectly by other people after the fact. Second, the polemic stuff simply doesn't add any value. Given good information followed by *reasoned* explanations, people are quite good at drawing correct conclusions. Polemics are therefore interpreted by intelligent readers as a place where the author actually didn't have any facts that supported their opinion and so resorted to name-calling. They conclude from this -- often correctly -- that the author doesn't know what they are talking about and should not be taken seriously. In short, I think this kind of stuff damages your credibility. Your comments about academics are equally inappropriate and more than a little offensive. I don't know of any academic who would laugh at a monolithic design proposal these days. EROS is a case in point. 
It's a major departure in design, and people asked hard questions about that. It is not a microkernel, but this has never come up as a design issue one way or the other. The kernel architecture is structurally similar to a microkernel, and has some carefully thought out layering. So does Linux. Nobody at the University of Pennsylvania laughed at it, and several commercial efforts based on it are now underway. If you have an example of an academic who has laughed at monolithic designs when research on them has been proposed, make that known. If you do NOT have several concrete examples, then your text is an undeserved slur on a lot of people, and it is a fundamentally dishonest thing to say. You are certainly entitled to that opinion, but a "glossary" is a place for statements of fact, not for opinions. I think it's worth noting that POSIX benchmarks are part of the problem, not part of the solution. The importance of these benchmarks became dominant with the releast of the lmbench microbenchmarks, and has been promulgated by a number of individuals in the OSDI and SOSP communities. One result is that most microkernel architects have been forced to work on getting lmbench numbers as a condition of publication rather than working on whatever made their system interesting. That is, the research has been compromised in a way that means we don't really know what the current generation of microkernels can do, and we are really unlikely to find out. As an aside, an argument can be made that this sort of politics is essential to the process of research. An argument *has* been made that for this reason science is primarily concerned not about fact but about orthodoxy, and is therefore indistinguishable from religion. My opinion is that such benchmarks are pragmatically important, but not ultimately very helpful. If your primary goal is to run UNIX, you will find that UNIX implementations are surprisingly good at running UNIX and everything else will tend to be less good at being UNIX. If this is your primary goal, a microkernel is inappropriate. In fact, after a certain point, adjusting a microkernel-based system to do well on POSIX benchmarks proves to be a good way to make it bad at doing microkernel stuff -- the stuff for which you built it in the first place. If, on the other hand, modularity, security, or debuggability are important, microkernels or systems like EROS that encourage dekernelized design may be a good choice. I have read about the problems that the HURD folks are having. These problems reflect poor architectural choices in the structure of HURD rather than any fundamental problem with microkernels. Finally, I think your arguments about modularity are not really to the point. That is, you are correct that disciplined programmers can achieve modularity in large systems, and that this is not a compelling advantage of microkernel systems from a theoretical point of view. From a practical point of view, with a lot of experience writing and modifying large systems, I can also say that the devil is in the details, and that various Linux subsystems have "cheated" for years before sufficiently modular interfaces became available and migration to them slowly occurred. It can be argued that this is good, but it should be noted that the resulting evolution was more often accidental than intentional. Also, your claims about the weak value of such modularity are most dependent on arguments about drivers, which are not by their nature isolatable. 
Anything with access to physical DMA is not terribly isolatable in real systems. The more critical concerns involve fault isolation *in applications*. HURD is not far enough along to really be able to exploit this, and without process persistence it will be hard to achieve. Driver fault isolation is largely an illusion, because driver errors leave hardware in states that take whole machines down. This is true whether the driver is in user-mode code or kernel-mode code. > Now, as far as polemics go, what is your > take on microkernels? Maybe you have pointers > to articles that support or contradict some > of my opinions, and that I may link to? I gave some in my netnews posting. My basic response is that you either have hard data supporting your position or you are simply shouting uninformed opinion. If you have the hard data, present it. All of the commentary I have seen on your web site is anecdotal. Much of it has been examined in the literature. Each of the problems you identify existed in some particular system or family of systems. Several present good arguments for why those individual systems made some poor choices. NONE of them, as far as I can tell, present a fundamental argument for why microkernels are unsound. All of the substantive issues you raise have been addressed in subsequent designs. For example, you complain about the shrinkage of microkernels into interrupt dispatch + messaging. A case in point, I think, would be Lava (the IBM derivative of L4). You present no data that this shrinkage is a bad idea, and you completely ignore the fact that these systems have primitives that are a factor of 100 to 1000 faster than Mach, which is the system you have the best data for. You also fail to note that the *reason* that UNIX on Mach failed didn't have anything to do with the time spent in the UNIX emulator, and had a *lot* to do with the time spent in the Mach primitives. ALL of the hard data that has been collected makes this painfully clear. Bryan Ford has done a good paper on possible improvements in the Mach IPC implementation, and on the limits of such advances within the basic Mach architecture. Based on this, Lava ought to be very good at running Linux. In actual measurements, their published numbers show a degradation of 3% to 5% on typical workloads relative to native Linux. The Linux rehosting was a relatively untuned quick and dirty port, and it was later determined within the Lava group at IBM that the system benchmarked was suffering from a fairly serious bug in the microkernel's memory mapping mechanism. Current numbers are a good bit better. Now you may say that a 3% to 5% degradation is still a degradation. I agree, but remember that the primary purpose of Lava isn't running Linux. Also, remember that this is after an effort of only a small number of months, which argues that the modularity structure actually works very well. Personally, I don't think microkernels are better or worse. I think they are suited to a different set of problems. Jonathan S. Shapiro, Ph. D. IBM T.J. Watson Research Center Email: shapj@us.ibm.com Phone: +1 914 784 7085 (Tieline: 863) Fax: +1 914 784 7595 Francois-Rene Rideau on 08/15/99 08:44:25 PM Please respond to Francois-Rene Rideau To: Jonathan S Shapiro/Watson/IBM@IBMUS cc: tunes@tunes.org Subject: microkernels Dear Prof Shapiro, thank you for your e-mail.
> Whatever the "truth" about microkernels may be, > a "glossary" isn't the right place for the discussion. Thanks for your most adequate remark. It seems our glossary has grown into something much too polemical, and should be reorganized into a knowledge-base with raw facts being well delimited from associated opinions. We haven't invested in the technology necessary to develop such a knowledge base, having waited for TUNES to appear and provide this technology for too long. Do you have any suggestion? > At the very least the entry should give a definition > before it engages in polemics. > > How about adding a definition? Ok. Since I wasn't fully satisfied with the entry in FOLDOC, I wrote the one below. What about it? Can you either correct it, or point me to an existing satisfying definition? Now, as far as polemics go, what is your take on microkernels? Maybe you have pointers to articles that support or contradict some of my opinions, and that I may link to? Appended to this message is the text I prepended to the current entry. Best regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] My opinions may have changed, but not the fact that I am right. PS: I invite Tunes members to proof-read the current entry and the below text as well. My questions are also to you, guys. Be careful not to include Prof. Shapiro in replies that only concern Tunes. ------>8------>8------>8------>8------>8------>8------>8------>8------>8------ microkernel (also abbreviated µK or uK) is an approach to operating systems design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space".
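To make the contrast concrete, here is a minimal toy sketch (in Haskell, with made-up Request/Reply types and names; it stands for no real system's API): in a monolithic design a service is reached by an ordinary call within the same space, while in a microkernel design the same request is packaged as a message and routed through a kernel send primitive to a separate server, paying a protection-domain crossing each way.

module MicrokernelSketch where

-- Hypothetical request/reply types for a toy "file service".
data Request = ReadFile FilePath Int
data Reply   = ReplyData String | ReplyErr String
  deriving Show

-- Monolithic style: the service is just a function living in the same space.
fileService :: Request -> Reply
fileService (ReadFile path n) = ReplyData (take n ("<contents of " ++ path ++ ">"))

-- Microkernel style: the same service sits behind a port, and every request
-- goes through a kernel-mediated send; the wrapper below stands in for the
-- trap into the kernel, the message copy, and the address-space switch.
newtype Port = Port (Request -> Reply)

kernelSend :: Port -> Request -> Reply
kernelSend (Port server) req = server req   -- plus two crossings, in a real system

fileServer :: Port
fileServer = Port fileService

demo :: (Reply, Reply)
demo = ( fileService (ReadFile "/etc/motd" 12)              -- direct call
       , kernelSend fileServer (ReadFile "/etc/motd" 12) )  -- call via the "kernel"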
  • microkernels were invented as a reaction to traditional "monolithic" kernel design, whereby all system functionality was put in one static program running in a special "system" mode of the processor. The rationale was that it would bring modularity in the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and more performant.
  • The prototypical microkernel is Mach, originally developed at CMU, and used in some free and some proprietary BSD Unix derivatives, as well as in the heart of GNU HURD. MICROS~1 Windows NT is said to originally have been a microkernel design, although one that with time and for performance reasons was overinflated into a big piece of bloatware that leaves monolithic kernels far behind in terms of size. The latest evolutions in microkernel design have led to things like the "nano-kernel" L4, or the "exokernel" Xok.
  • At one time in the late 1980's and early 1990's, microkernels were the craze in official academic and industrial OS design, and anyone not submitting to the dogma was regarded as ridiculous. But microkernels failed to deliver on their all too many promises in terms of either modularity (see Mach servers vs Linux modules), cleanliness (see Mach horror), ease of debugging (see HURD problems), ease of dynamic modification (also see HURD vs Linux), customizability (see Linux vs Mach-based projects), or performance (Linux vs MkLinux, NetBSD vs Lites). This led some microkernel people to compromise by having "single-servers" that have all the functionality, and pushing them inside "micro"kernel-space (WindowsNT, hacked MkLinux), yielding a usual monolithic kernel under another name and with a contorted design. Other microkernel people instead took an even more radical view of stripping the kernel of everything but the most basic system-dependent interrupt handling and messaging capabilities, and having the rest of system functionality in libraries of system or user code, which again is not very different from monolithic systems like Linux that have well-delimited architecture-specific parts separated from the main body of portable code. With the rise of Linux, and the possibility to benchmark monolithic versus microkernel variants thereof, as well as the possibility to compare kernel development in various open monolithic and microkernel systems, people were forced to acknowledge the practical superiority of "monolithic" design according to all testable criteria. Nowadays, microkernel is still the "official" way to design an OS, although you won't be laughed at anymore when you show your monolithic kernel. But as far as we know, no one in the academic world dared raise any theoretical criticism of the very concept of microkernel. Here is our take on it. ------>8------>8------>8------>8------>8------>8------>8------>8------>8------ From btanksley@hifn.com Mon, 16 Aug 1999 09:47:54 -0700 Date: Mon, 16 Aug 1999 09:47:54 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: paradoxes and intuitionist logic >From: iepos@tunes.org [mailto:iepos@tunes.org] >Subject: paradoxes and intuitionist logic >i've been doing a bit of thinking about the paradoxes and haven't >come to any really good answers and am wondering if any of you have. >one of the most famous paradoxes occurs when reasoning on a statement >that says "this statement is not true". One first supposes that it >is true; then it follows that it is not true. This is a contradiction, >so the assumption must have been not true. So, the statement is not >true; but this is precisely what the statement says, so we have >admitted the statement. So there is an inconsistency in this logic. >Unfortunately, the inconsistency is not caused merely by the funny >nature of the English language; the argument can be formalized in >a fairly simple sound-appearing logic using the Y combinator (or >lambda term) to achieve the self-reference. Only if you fail to typecheck your variables. >I: x -> x >B: (x -> y) -> (z -> x) -> z -> y >C: (x -> y -> z) -> y -> x -> z >W: (x -> x -> y) -> x -> y >K: x -> y -> x The trouble is, what are you allowed to substitute for the variables? Obviously, you can only substitute things which have possible boolean answers (as an example, you can't set x to any natural number and expect anything to make sense).
Paradoxes cannot have boolean answers, so substituting them into these laws is simply ignoring type safety. Things which can be substituted into these laws are called 'propositions'. >- iepos -Billy From btanksley@hifn.com Mon, 16 Aug 1999 15:17:46 -0700 Date: Mon, 16 Aug 1999 15:17:46 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: FW: paradoxes and intuitionist logic This got emailed to me instead of the list... Here it is. From: iepos@tunes.org To: btanksley@hifn.com Subject: Re: paradoxes and intuitionist logic Date: Mon, 16 Aug 1999 15:12:54 -0700 > >From: iepos@tunes.org [mailto:iepos@tunes.org] > >Subject: paradoxes and intuitionist logic > > >i've been doing a bit of thinking about the paradoxes and haven't > >come to any really good answers and am wondering if any of you have. > > >one of the most famous paradoxes occurs when reasoning on a statement > >that says "this statement is not true". One first supposes that it > >is true; then it follows that it is not true. This is a contradiction, > >so the assumption must have been not true. So, the statement is not > >true; but this is precisely what the statement says, so we have > >admitted the statement. So there is an inconsistency in this logic. > >Unfortunately, the inconsistency is not caused merely by the funny > >nature of the English language; the argument can be formalized in > >a fairly simple sound-appearing logic using the Y combinator (or > >lambda term) to achieve the self-reference. > > Only if you fail to typecheck your variables. > > >I: x -> x > >B: (x -> y) -> (z -> x) -> z -> y > >C: (x -> y -> z) -> y -> x -> z > >W: (x -> x -> y) -> x -> y > >K: x -> y -> x > > The trouble is, what are you allowed to substitute for the variables? > Obviously, you can only substitute things which have possible boolean > answers (as an example, you can't set x to any natural number and expect > anything to make sense). Paradoxes cannot have boolean answers, so > substituting them into these laws is simply ignoring type safety. This does seem to be the case, indeed. However, without the example of this kind of paradox there would be people who would be convinced that it would be okay to formulate implication for unrestricted obs, with "implications" like "1 -> 2" being taken as true, since "1" certainly is not true and thus the condition is unmet. although it seems quite dangerous to take "~1", i don't think it is correct or necessary to toss out "1 -> 2" entirely because it does not "typecheck"; instead, it seems best to me simply to refuse to apply certain reasoning patterns to it (reasoning patterns which are reserved to propositions). > > Things which can be substituted into these laws are called 'propositions'. this kind of restriction does seem to me to be the right way to go. However, it is not obvious that even this approach is safe.
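As a rough illustration of the typing point, and nothing more authoritative than that, here is a small Haskell sketch: the five laws quoted above all typecheck as ordinary polymorphic types (propositions-as-types), whereas the self-application needed to build the liar sentence through Y is exactly what a typechecker refuses, so the paradox never gets off the ground in such a system.

module Combinators where

-- The five laws above, read as types; all of them typecheck without trouble.
i :: x -> x
i a = a

b :: (x -> y) -> (z -> x) -> z -> y
b f g a = f (g a)

c :: (x -> y -> z) -> y -> x -> z
c f a2 a1 = f a1 a2

w :: (x -> x -> y) -> x -> y
w f a = f a a

k :: x -> y -> x
k a _ = a

-- The self-application used to build Y (and hence the liar sentence) is what
-- fails the "only substitute propositions" test: the typechecker rejects it
-- with an occurs-check error, so no type can be assigned to it at all.
-- selfApply f = f f                          -- does not typecheck
-- y f = (\x -> f (x x)) (\x -> f (x x))      -- does not typecheck either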
Before the paradoxes appeared, it seemed safe to formulate a set of truths (that is, a category, such that everything either belonged to it or did not belong to it), however it turns out that this leads to problems -- or does it? is there a paradox that uses the excluded middle but not the deductive theorem? now that i think about it, all the paradoxes i've seen involve the deductive theorem. anyhow, if it is unsafe after all to formulate a set of truths, then one should take caution formulating a set of propositions also (i suppose one could define it as an indefinite "set"; that is, one without an excluded middle). on the question of the excluded middle, the first paradox that comes to mind as a possible problem is russell's. for, consider the ob "x.~(x x)" (the set of all sets that do not contain themselves). If we call that set R, then by lambda we have 'R R = ~(R R)'. this seems on the verge of a problem, however there could be systems in which equality does not imply logical equivalence (i.e., implication both ways). actually, a system with lambda (or combinators) seems best formulated to me when equality is not taken as primitive at all, in which a beta-reduction can only take place on previously proved statements. However, if the I rule of deductive theorem is admitted, it must be restricted, because if it was not then we could have 'R R -> R R' and thus 'R R -> ~(R R)' by reduction; also we would have by I again '~(R R) -> ~(R R)'. then, by the excluded middle, we would have '~(R R)' (the excluded middle admits '(x -> y) -> (~x -> y) -> y'). but we would also have '~(R R) -> ~(~(R R))' and then an inconsistency. however, it seems to me that none of this would happen if we rejected the 'I' rule of deductive theorem, even in the presence of the excluded middle. hmmm... the excluded middle is still not obvious to me though. i would be interested if anyone knows of any paradoxes involving it but not the deductive theorem. on the other hand, is it possible that the excluded middle could be proven as I have formulated it ('all y.(x -> y) -> ((all z.x -> z) -> y) -> y'), at least if 'x' obeys the deductive theorem? > > >- iepos > > -Billy > hmm... thanks for your comments... could anyone else shed some light? - iepos ------_=_NextPart_000_01BEE835.38CDCDF8-- From iepos@tunes.org Mon, 16 Aug 1999 16:53:57 -0700 (PDT) Date: Mon, 16 Aug 1999 16:53:57 -0700 (PDT) From: iepos@tunes.org iepos@tunes.org Subject: FW: paradoxes and intuitionist logic in my last post, I missed something that seems important... > on the question of the excluded middle, the first paradox that comes > to mind as a possible problem is russell's. for, consider the > ob "x.~(x x)" (the set of all sets that do not contain themselves). > If we call that set R, then by lambda we have 'R R = ~(R R)'. > this seems on the verge of a problem, however there could be > systems in which equality does not imply logical equivalence (i.e., > implication both ways). actually, a system with lambda (or combinators) > seems best formulated to me when equality is not taken as primitive > at all, in which a beta-reduction can only take place on previously > proved statements. However, if the I rule of deductive theorem is > admitted, it must be restricted, because if it was not then > we could have 'R R -> R R' and thus 'R R -> ~(R R)' by reduction; > also we would have by I again '~(R R) -> ~(R R)'. then, by the excluded > middle, we would have '~(R R)' (the excluded middle admits > '(x -> y) -> (~x -> y) -> y'). 
but we would also have '~(R R) -> ~(~(R R))' > and then an inconsistency. however, it seems to me that none of this > would happen if we rejected the 'I' rule of deductive theorem, even > in the presence of the excluded middle. i think this is incorrect, except for possibly some very fragile formulations. the thing to note is that if we have unrestricted excluded middle then we have '(R R) | ~(R R)', which reduces to '~(R R) | ~(R R)'. However, this does not necessarily imply '~(R R)' (it would, if we had 'I' of course). however, under my formulation of '~' and '|', it would be possible to derive 'R R -> z' for arbitrary 'z'. if the system permits reverse-reduction of lambdas then it is possible to derive '(R R -> z) -> (R R -> z) -> z' for arbitrary 'z', and thus we'll be able to prove 'z', anything... so, it sounds like unrestricted excluded middle is just about as bad as unrestricted deductive theorem... hmm... so, it will need to be recognized that only some things ("propositions") have these properties. however, i don't see a clear intuitive way to tell if things do. in particular, is it a (false) proposition to state that a paradox is a proposition, or is it a paradox itself? some systems (Coq?) seem to fear this kind of issue and thus formulate the set of propositions _outside_ the system; that is, the set of propositions cannot be refered to within the system as a first-class set. this means a whole static type system is necessary. this is very messy, and it seems like there must be a better approach. actually, it would not be a terrible problem if "statements" of proposition-hood did not enjoy proposition-hood themselves, as long as the system recognized it. hmm... - iepos From fare@tunes.org Tue, 17 Aug 1999 04:14:32 +0200 Date: Tue, 17 Aug 1999 04:14:32 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: paradoxes and intuitionist logic >>: Ken Evitt >: iepos > yes. i've heard of godel's theorem, > but i'll admit i don't really understand it. Godel's theorem is a deep result about the fact that Truth cannot be interned within a consistent finitary system, but that Provability can, as soon as you have computable functions. Hence, you CANNOT write a sentence such as "this sentence is false", but you CAN write a sentence such as "this sentence is unprovable". > sure, one could extend the system by assigning arbitrary > numbers to represent the system's theorems and proofs ("godel numbering"?) > but this seems to be quite an extension to me, and it results in a new > system that deserves to be incomplete. Not quite. No need for of any system extension to write in it a godel numbering of itself (yes, the choice of numbering is arbitrary). However, of course, you need a meta-system in which to assert that a chosen numbering indeed adequately models the system within itself (however, you need that meta-system to talk about the system, anyway, and to describe its finitary structure). >> I definitely suggest you read _Godel, Escher, Bach_ Yeah, great book. Anything by D. Hofstadter is great. > anyway, i'm still thinking on my original question (well, i guess it > was sort of a question): what are some good ways of formulating the > class of statements (propositions)? Formulating it from where? Good according to what criteria? Regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! 
http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] "As an adolescent I aspired to lasting fame, I craved factual certainty, and I thirsted for a meaningful vision of human life -- so I became a scientist. This is like becoming an archbishop so you can meet girls." -- Matt Cartmill From fare@tunes.org Tue, 17 Aug 1999 04:04:32 +0200 Date: Tue, 17 Aug 1999 04:04:32 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: microkernels Dear Prof Shapiro, > [...] Thanks a lot for your feedback about Mach, Chorus, and Windows NT. You indeed helped me remove quite some silliness in my glossary entry. > Second, the polemic stuff simply doesn't add any value. I agree it should be well delimited from raw data, but I think it does add value, or at least could, if enhanced. > Given good information > followed by *reasoned* explanations, people are quite good at drawing correct > conclusions. Certainly, and I will try to improve my style to make arguments clearer, and to remove gratuitously offensive parts. However, sometimes, it is useful to suggest an original point of view on well-known data than trying to establish it in detail from collected data. Imagination is as important a part of understanding as data. And I think my point of view is original since I never saw it expressed elsewhere in either academic papers, discussions with researchers, or the internet, except by people I had previously convinced of it. > Polemics are therefore interpreted by intelligent readers as a > place where the author actually didn't have any facts that supported their > opinion and so resorted to name-calling. They conclude from this -- often > correctly -- that the author doesn't know what they are talking about > and should not be taken seriously. In short, I think this kind of stuff > damages your credibility. I'm not sure I have much credibility to damage :( However, I acknowledge you're right, and will try to improve. Thanks. Also, even more intelligent people might try to see beyond the polemics, taking them as just a bore-proof tone, and weigh the arguments behind them. It mightn't be the habit within academia, but it sure is within usenet, and I feel this page is nearer to the latter than to the former. > Your comments about academics are equally inappropriate > and more than a little offensive. Well, I put the academics mostly on par with industrial developers; however, I feel academics if anyone do have a responsibility in detecting and fighting theoretical flaws in existing designs, and that they failed to their duty at least on this topic (that I happen to know a bit). Being an academic and working in an industrial lab myself, which lab once believed in microkernel design and learnt its interest and limits the hard way (Chorus hacking), I feel as entitled as anyone to make these remarks. Of course, my statements do not constitute an official academic contribution, either. > I don't know of any academic who would laugh at a monolithic design > proposal these days. *These days*. Such was not the case in the late 1980's and early 1990's, during the Microkernel craze, where it appears from reading OS design literature that microkernels were "obviously" the Way To Go(TM), and no one would fund a new OS project but microkernel based. and this literature is still crippling the minds of people interested in OS design. 
Nowadays, the craze is waning, and the buzzword "microkernel" no longer has the hype value it used to have. Today, "Object-Oriented" is still riding high, and "Java" is waxing. > EROS is a case in point. It's a major departure in design, > and people asked hard questions about that. There are also questions I ask myself about EROS... One is a low-level one, and has to do with committing change and IDE disks (since the ATA specification says little about buffer flushing, as opposed to the SCSI specifications); I'm particularly curious since the documentation only talks about IDE drivers; this may be a silly question, but it still bothers me, since the OS tries to be "Extremely Reliable". Another one is higher-level, and has to do with "what language/API are reliable applications to be written in?". i.e. a system "kernel" is but a component of a system framework, and is useless without the rest of the framework. > It is not a microkernel, but this > has never come up as a design issue one way or the other. Good. > The kernel > architecture is structurally similar to a microkernel, > and has some carefully > thought out layering. So does Linux. I understand that as a confirmation of my arguments: the important issue is the high-level structuring of the system, and the concretization of the system structure into low-level barriers is but overhead that proves useless at best, and otherwise harmful. > Nobody at the University of Pennsylvania > laughed at it, and several commercial efforts based on it are now underway. I'm glad about it. I just hope for everyone's benefit that "commercial" won't equate to "proprietary" as too often in the past. > If you have an example of an academic who has laughed at monolithic designs > when research on them has been proposed, make that known. Well, the MINIX vs Linux "flamefest" of 1991 is such an example, where Andrew Tanenbaum contended that a monolithic kernel was an obsolete design that no academic would develop. From testimony of other people having hacked OSes at that time, it looks like there was indeed a general academic consensus for microkernel design and against monolithic design. I guess I'd have to dig into old archives to make that into a strong point. Now, I could return the burden of proof, and ask for evidence of research on monolithic kernels started between say 1988 and 1991, or of papers refuting a general bias towards microkernel design. Anyway, I'll make it clearer that this statement is but an opinion of mine. > If you do NOT have several > concrete examples, then your text is an undeserved slur on a lot of people, > and > it is a fundamentally dishonest thing to say. You are certainly entitled to > that opinion, but a "glossary" is a place for statements of fact, not for > opinions. You seem to have a particular view on what a glossary should be, namely that it be restricted to purely technical, neutral definitions. I invite you to read Ambrose Bierce's "The Devil's Dictionary" http://www.vestnett.no/cgi-bin/devil (listed at the end of the Tunes glossary) for a glossary designed along other lines. I do think technical glossaries are useful (if you know of any, I will be glad to list them, too). When the Tunes glossary is moved to a database, I'd like it to have a "strictly technical" mode that would strip opinions and purely subjective definitions, together with opinionated modes that include (possibly diverging) comments by one or many different members (or non-members).
But in the meantime, I conceive this glossary as a way to express my own opinions as well as as a technical document. I reject the concept of neutrality of technology or technique, or that of an aseptized science, anyway. Only by being able to err can we be able to be right. Of course, with the right to a strong opinion comes the responsibility to make it as good as possible, and to be open-minded in amending it. > I think it's worth noting that POSIX benchmarks are part of the problem, not > part of the solution. They are indeed part of the problem, although this problem is orthogonal to that of microkernel vs monolithic kernel design: they introduce a strong bias in the expected behavior of user programs; but whatever the behavior is, two barrier crossings will take more time than one barrier crossing, and a "multispace" system implementation will have a runtime overhead not present in a monospace one. Whatever benchmark you choose, and whatever microkernel-based system you choose, you'll improve the benchmark by folding the whole microkernel+servers into a one "monolithic" system, removing unnecessary barrier crossings. > One result is that most > microkernel architects have been forced to work on getting lmbench numbers > as a condition of publication rather than working > on whatever made their system interesting. This reminds me of a point I raised in my article Metaprogramming and Free Availability of Sources http://www.tunes.org/~fare/articles/ll99/index.en.html about proprietary software leading to software partitioning and the establishment of strong stable software patterns resiliant to any global innovation or argument of utility, because the only allowed improvements are ones that are purely local and preserve the global software structure that is protected by intellectual property barriers. > That is, the research has been compromised in a way that means we > don't really know what the current generation of microkernels can do, > and we are really unlikely to find out. I can but be sorry about that. I will be the last one to defend POSIX and UNIXish architectures, as you might know by browsing along the TUNES site. > As an aside, an argument can be made > that this sort of politics is essential to > the process of research. An argument *has* been made that for this reason > science is primarily concerned not about fact but about orthodoxy, and is > therefore indistinguishable from religion. I would contend that such is the unhappy case in a world where information is proprietary, but that such it shouldn't be, since information should be free (of rights). Although science cannot be devoid of political choices, just like any human activity, and cannot be devoid of opinions, just like any human thought process, its role however is to develop argumented choices and a posteriori opinions, as opposed to arbitrary choices and a priori opinions. The distinction between science and religion isn't in the static contents, but in the dynamic process. Proprietary information flaws the process. [Ahem. This kind of arguments should go to cybernethics@tunes.org rather than to tunes@tunes.org...] > If your primary goal is to run UNIX, you will find > that UNIX implementations are surprisingly good > at running UNIX and everything > else will tend to be less good at being UNIX. > If this is your primary goal, a microkernel is inappropriate. Indeed. 
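To give the earlier point about barrier crossings a rough shape, here is a purely hypothetical back-of-envelope; none of these numbers are measurements, they only show how per-crossing costs add up to whole percents of a machine, which is the same order as the 3% to 5% Lava/Linux figures you mention.

-- Purely hypothetical numbers, only to illustrate the shape of the argument.
extraCrossingOverhead :: Double
extraCrossingOverhead =
    extraCrossings * cyclesPerCrossing * requestsPerSecond / cpuCyclesPerSecond
  where
    extraCrossings     = 2       -- extra protection-domain crossings per request
    cyclesPerCrossing  = 2000    -- hypothetical cost of one crossing, in cycles
    requestsPerSecond  = 5000    -- hypothetical rate of such requests
    cpuCyclesPerSecond = 500e6   -- a 500 MHz CPU
-- extraCrossingOverhead == 0.04, i.e. about 4% of the machine gone to crossings.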
> In fact, after a certain point, adjusting a > microkernel-based system to do well on POSIX benchmarks > proves to be a good way > to make it bad at doing microkernel stuff > -- the stuff for which you built it in the first place. Just what that stuff is, and how microkernel design helps with it, is most likely what I fail to understand. Whatever the programming model you choose to implement, UNIX or not, a microkernel-based system will be a poor choice as compared to a same-functionality monolithic-kernel-based system. > If, on the other hand, modularity, security, or debuggability are important, > microkernels or systems like EROS that encourage dekernelized design may be a > good choice. From the little I read about EROS, it may be a good design; but I contend that microkernels are definitely not a good choice as far as modularity, security, or debuggability go. > I have read about the problems that the HURD folks are having. > These problems reflect poor architectural choices in the structure of HURD > rather than any fundamental problem with microkernels. I beg to differ. Their problems were directly due to having HIRDs of Unix Replacing DAEMONs, i.e. a heavily asynchronous programming model, for which no suitable debugging tools are known, manually implemented in a low-level language, which makes every single low-level bug a hell to track and debug. These guys followed the microkernel model to the letter, and were bitten by bitter reality. > Finally, I think your arguments about modularity are not really to the point. > That is, you are correct that disciplined programmers > can achieve modularity in > large systems, and that this is not a compelling advantage of microkernel > systems from a theoretical point of view. > From a practical point of view, with > a lot of experience writing and modifying large systems, > I can also say that the devil is in the details, > and that various Linux subsystems have "cheated" for years > before sufficiently modular interfaces became available and migration to > them slowly occurred. It can be argued that this is good, but it should be > noted that the resulting evolution was more often accidental > than intentional. I will contend that the ability to "cheat" and to delay decisions concerning system structure is precisely what makes "monolithic" systems so much superior to "microkernel" systems: they don't force you into making early decisions at a time when you just don't know enough yet to make them right; they allow you to reorganize things from a high-level point of view without having low-level details of barrier interface getting in the way; they allow you to integrate your objects into one coherent system instead of disintegrating your system into lots of servers the coherence of which you have to manually ensure. Of course, every evolution step will be "accidental". The fact that evolution exists is not; it is a direct consequence of the programming model. [The case against "early optimization" has been largely developed on comp.lang.lisp in great posts by Kent Pitman or Erik Naggum, as the ultimate reason why C and C++ are bad as compared to LISP]. Time to re-read Alan Perlis' Epigrams... > Also, your claims about the weak value of such modularity > are most dependent on > arguments about drivers, which are not by their nature isolatable. > Anything > with access to physical DMA is not terribly isolatable in real systems. In as much as I understand your remark (not much), I disagree with it.
Modularity is a *high-level* property of the system source design, and is independent from low-level binary failures that could possibly occur due to buggy components. An example of source-level modularity for device drivers is the Flux OSkit (although this modularity comes at the price of runtime COM invocations due to lack of infrastructural support for partial evaluation). Maybe you're thinking of some kind of component-wise fault tolerance within the system infrastructure, but I see no point in it, all the less in a system where all the source is visible and you don't fear "attacks" from within system components. See again my remark with the only possible justification for microkernels being third party proprietary black-box system components. > The more critical concerns involve fault isolation *in applications*. > HURD is not far enough along to really be able to exploit this, > and without process persistence it will be hard to achieve. I fail to see how HURD does any better than Linux in this respect; note that "middleware" already exists to allow for process persistence under Linux. > Driver fault isolation is largely an > illusion, because driver errors leave hardware in states that take whole > machines down. This is true whether the driver is in user-mode code or > kernel-mode code. Indeed. Again, I see that as backing my argument for modularity in the system design being useful only at a high-level (source-level), and utterly useless (and even harmful) at the low-level (binary). >> Now, as far as polemics go, what is your >> take on microkernels? Maybe you have pointers >> to articles that support or contradict some >> of my opinions, and that I may link to? > > I gave some in my netnews posting. I fear I missed that posting. Do you have a URL (possibly through deja.com)? I admit one of the reasons why I put my opinions in the glossary, besides the fact that I originally intended that glossary as much as a manifesto as as a technical document, is that I hate the ephemeral nature of netnews posts as contenders of opinions you have and have to repeat every so often. Moreover, a web document is something that can be incrementally improved upon, just as I hope I'll be doing thanks to your feedback. > My basic response is that you either have > hard data supporting your position or you are simply shouting uninformed > opinion. I think that things are not black or white as you depict. There are such things as established facts, but even among them, there is no such thing as absolute certainty. And there are very partially informed opinions, but even among them, not all are useless. My opinions may not be perfectly informed, I still think there is something about them that should be said. However, I do agree that it should be made clearer what is not well-established about them; and of course, I am most willing to make them better in the light of more eventual information, although my current arguments make me think it unnecessary to gather more information to constitute my opinion. > If you have the hard data, present it. All of the commentary I have > seen on your web site is anecdotal. Much of it has been examined in the > literature. > Each of the problems you identify existed in some particular system > or family of systems. > Several present good arguments for why those individual > systems made some poor choices. > NONE of them, as far as I can tell, present a > fundamental argument for why microkernels are unsound. 
> All of the substantive > issues you raise have been addressed in subsequent designs. I've tried to synthetize a simple, useful, coherent, predictive point of view, i.e. to understand the general phenomenon, out of well-known experience on microkernels. I'm sorry that I couldn't convince you, and even more sorry that my attempt appeared to you as a mere enumeration of uninformative factoids. I fail to see how subsequent designs addressed the fundamental question I raise about the abstraction inversion that constitutes the very core of microkernel design. > For example, you complain about the shrinkage of microkernels into interrupt > dispatch + messaging. If you understood that, then I definitely must rewrite my entry. I do think that L3, L4, Lava, Fiasco or Xok are quite interesting designs, and definitely improvements over Mach and "traditional" microkernel designs. But I also think the included improvements are independent from their being used as "microkernels" versus their being used as inner parts of a monolithic kernel, and that the first case (microkernel) will only lead to the usual decrease in performance to be expected from microkernel design. i.e. L4 et al. are interesting, but the microkernel aspect of them is part of the problem set, not of the solution set. BTW, why didn't IBM just free L4/Lava? is the issue now solved? > Based on this, Lava ought to be very good at running Linux. [...] > Now you may say that a 3% to 5% degradation is still a degradation. Indeed, and I cannot imagine that adding new unjustified barrier crossings could fail to introduce performance degradation. And of course > I agree, but remember that the primary purpose of Lava isn't running Linux. And what is the purpose of Lava? Allowing the implementation of a few real-time threads? Or of specific memory-mapping mechanisms? How does it make it better than directly hacking and enhancing the Linux or {Free,Net,Open}BSD kernel? (see RT-Linux, etc.) Certainly, Linux, BSDs, and other kernels have a lot of complexity and idiosyncrasies that you have to live with when adding such functionality. But such will be the case with any other same-functionality system, even Lava-based. And if you want to have a "simpler" system, you can strip a kernel such as Linux or *BSD to just what you want, or you can synthetize it with the Flux OS Kit or similar technology. In any case, I fail to see what a microkernel brings, except for some overhead and useless impedance to which to adapt. > Also, > remember that this is after an effort of only a small number of months, which > argues that the modularity structure actually works very well. Yes, but this is hardly an argument FOR a microkernel; it shows that *Linux* is a usefully modular yet monolithic system. > Personally, I don't think microkernels are better or worse. > I think they are suited to a different set of problems. I think microkernels are fundamentally flawed, as a low-level attempt to solving a high-level problem, namely modularity in system design. I'm sorry for having dodged your requests for providing hard data and less opinionated comments. Give me a laboratory and fund such research, and I might , although I wonder what kind of data would satisfy you (either way). Meanwhile, all I can give is opinions constituted during the copious free time left by my main PhD research: the semantics of reflective concurrent systems. Best regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! 
http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] As long as software is not free, we'll have hardware compatibility, hence bad, expensive hardware that has decades-long obsolete design -- Faré From s720@ii.uib.no Tue, 17 Aug 1999 12:41:42 +0200 Date: Tue, 17 Aug 1999 12:41:42 +0200 From: Thomas M. Farrelly s720@ii.uib.no Subject: What is a kernel? I've been reading the 'microkernel' thread and I am a tad bit puzzled. Is Faré really pro monolithic kernel design? Possibly. He is definitely carrying pro arguments. I understand the reason a monolithic kernel would be easier to make efficient. This is because more things can be assumed, which in turn can be statically determined. The reason more things can be assumed is that when all the kernel stash is in one big object ( not literally ), many of the critical operations of the kernel can be "private". So you don't have to worry about keeping a consistent state all the time, because nothing else could tap in and observe. In a micro kernel, the communication between application and kernel is more intense - and the kernel has little a priori knowledge about the application, so it has to behave according to strict patterns and rules. In other words, a kind of bureaucracy, which in itself demands resources. One important point in TUNES is fine grain modularity. I don't see how a monolithic kernel could be fine grained. Wouldn't that be no kernel at all, or even, dependent on our definition of kernel, a micro kernel with some framework added. But technically you could have the efficiency of a huge static chunk of program with the modularity of a micro kernel design. Because there are ways to determine static dependencies and recompile parts of a program. But that's another thread. Two stupid questions: What if you had a monolithic kernel and wanted to make it into a micro kernel. You start by cutting things from the kernel and pasting them into their own objects or files. The objects you are creating form the framework. As the kernel gets smaller the framework gets bigger. At some point you have a micro kernel with a framework. What if you continue decomposing the kernel - what would you end up with? ( Remember that a kernel is not just a system but a program which initiates the system - a bootcode. If this is all wrong, then search-erase 'bootcode' from here :) A: bootcode + a modularized monolithic kernel; B: bootcode + a framework; C: an even smaller kernel. The question is really: Isn't 'bootcode' the only part of the kernel which cannot be put in the so-called framework? I know this is naive to think because there are other lowlevel things, like threads and processes, interrupts and hardware, which cannot easily be conceptualized in a way that lets you put them in the framework. And this is basically question number two: Which of the following does not fit in a microkernel: I/O, processes, bootcode, ADTs, GUI, the stack, the heap, security, memory management ( GC ), networking, minesweeper. =============================================================================== Thomas M.
Farrelly s720@ii.uib.no www.lstud.ii.uib.no/~s720 =============================================================================== From fare@tunes.org Tue, 17 Aug 1999 04:59:21 +0200 Date: Tue, 17 Aug 1999 04:59:21 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: FW: paradoxes and intuitionist logic On Mon, Aug 16, 1999 at 04:53:57PM -0700, iepos@tunes.org wrote: > in my last post, I missed something that seems important... >> on the question of the excluded middle, the first paradox that comes >> to mind as a possible problem is russell's. for, consider the >> ob "x.~(x x)" (the set of all sets that do not contain themselves). >> If we call that set R, then by lambda we have 'R R = ~(R R)'. And the paradox is that since by excluded middle, you have either R R or ~(R R), then by this equivalence, you have both a proposition and its negation, and hence a contradiction. > so, it sounds like unrestricted excluded middle is just about as > bad as unrestricted deductive theorem... Just what do you call "deductive theorem"? Note that removing the rule of excluded middle is precisely what intuitionistic logics and constructive logics are all about... they amount to considering only the objective provability of statements, not their hypothetical and unreachable (hence meaningless) "truth"... > some systems (Coq?) seem to fear this kind of issue and thus formulate > the set of propositions _outside_ the system; that is, the set of > propositions cannot be referred to within the system as a first-class > set. this means a whole static type system is necessary. Type systems are the way we've done it ever since Russell & Whitehead's Principia Mathematica. > this > is very messy, and it seems like there must be a better approach. So I'd like to believe. Regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] The Constitution may not be perfect, but it's a lot better than what we've got! From beholder@ican.net Tue, 17 Aug 1999 22:27:48 -0400 Date: Tue, 17 Aug 1999 22:27:48 -0400 From: Pat Wendorf beholder@ican.net Subject: Interesting article Here's an interesting article geared toward game developers. It discusses using "better" languages to write game logic (better than C++). http://gamasutra.com/features/19990813/languages_01.htm -- ------------------------- Pat Wendorf beholder@unios.dhs.org ICQ: 1503733 ------------------------- He who clings to his work will create nothing that endures. From jmarsh@serv.net Tue, 17 Aug 1999 07:59:14 -0700 Date: Tue, 17 Aug 1999 07:59:14 -0700 From: Jason Marshall jmarsh@serv.net Subject: Runtime invariance (was: Re: What is a kernel?) "Thomas M. Farrelly" wrote: > One important point in TUNES is fine grain modularity. I don't see how a > monolithic kernel could be fine grained. Wouldn't that be no kernel at > all, or even, dependent on our definition of kernel, a micro kernel with > some framework added. It can't, but it is my belief that a kernel isn't necessary. I have been formulating techniques (someone may well have beaten me to this punch, I really need to do some in-depth searching on this) for turning loosely-coupled, dynamically loaded code into a, well, a temporary monolith, through the idea of runtime invariance. Runtime invariance, in a nutshell (and by my definition), is a special case of low-volatility variables within a system of objects.
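A rough sketch of the sort of thing I mean, with entirely made-up names and no claim to be more than a sketch: specialize a fast path under an invariant that usually holds, and guard the assumption with a write barrier that drops back to the fully general code the moment the invariant is broken.

import Data.IORef

newtype Disk = Disk { diskName :: String }

data System = System
  { drives     :: IORef [Disk]
  , singleDisk :: IORef (Maybe Disk)   -- cached invariant: "exactly one drive"
  }

-- General, fully modular path: enumerate drives, take the locks, and so on.
readBlockGeneral :: [Disk] -> Int -> IO String
readBlockGeneral ds n =
  return ("block " ++ show n ++ " via the general " ++ show (length ds) ++ "-drive path")

-- Specialized path, valid only while the invariant holds.
readBlockFast :: Disk -> Int -> IO String
readBlockFast d n =
  return ("block " ++ show n ++ " via the fast path on " ++ diskName d)

-- The "write barrier": any mutation of the drive list invalidates the
-- specialization, so later reads fall back to the general code.
addDrive :: System -> Disk -> IO ()
addDrive sys d = do
  modifyIORef (drives sys) (d :)
  writeIORef (singleDisk sys) Nothing

readBlock :: System -> Int -> IO String
readBlock sys n = do
  cached <- readIORef (singleDisk sys)
  case cached of
    Just d  -> readBlockFast d n
    Nothing -> readIORef (drives sys) >>= \ds -> readBlockGeneral ds n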
Many robust, modular systems have sets of features the user (be it human, or another object) never uses, either in a global frame, or within a certain class of usage, and yet they pay a premium to have those features available. A fully introspective system should, over time, be able to determine, for instance, that the computer has only one hard drive, so it can eliminate a number of enumerations and thread safety constructs and just assume one drive. It should be able to determine that the user is using iso-latin1 ubiquitously, and short-circuit Unicode support within the system. For the 'highly unlikely' contingencies, one can set write-barriers in appropriate places (this may require some avoidance of mutator inlining). Jason Marshall From fare@tunes.org Tue, 17 Aug 1999 15:32:18 +0200 Date: Tue, 17 Aug 1999 15:32:18 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: What is a kernel? > Is Faré really pro monolithic kernel design? Possibly. He is definitely > carrying pro arguments. I am against microkernel, that's different. My position, that I've developed in the past, is against any kernel whatsoever. Search the mailing-list archive for "no-kernel" or "no kernel". > I understand the reason a monolithic kernel would be easier to make > efficient [...]. > One important point in TUNES is fine grain modularity. I don't see how a > monolithic kernel could be fine grained. Again, that's a matter of looking at things from the right point of view. We want modularity _at the high-level_, and efficient folding _at the low-level_. This means modular high-level objects in a suitable high-level language compiled into efficient low-level code at the binary level, without harmful runtime barrier to cross. At the high-level, it's modular. At the low-level, it looks like the whole TUNES universe runs into an all-encompassing monolithic "kernel". > Wouldn't that be no kernel at all, Yup. > or even, dependent on our definition of kernel, a micro kernel with > some framework added. Well, in as much as the CPU architecture forces upon us things like privilege levels as soon as we want to use paging, you might look at it this way. But there could be many services that don't care the least about current CPU privilege and don't force a barrier switch. > But technically you could have the efficiency of a huge static chunk of > program with the modularity of a micro kernel design. Repeat after me: the micro kernel doesn't help with high-level modularity in any way. microkernel is about uselessly multiplying low-level barrier crossings. > What if you had a monolithic kernel and wanted to make it into a micro > kernel. Then I'd recommend a psycho-analysis. Once again: micro kernel isn't about high-level modularity, but low-level barriers. High-level modularity is gained by using a high-level modular language, such as CommonLISP (Genera), SML (Fox), Modula-3 (SPIN), Oberon (Native Oberon), Erlang (Erlang/OTP), etc. Regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] The more one knows, the more one knows that one knows not. Science extends the field of our (meta)ignorance even more than the field of our knowledge. From s720@ii.uib.no Wed, 18 Aug 1999 00:02:47 +0200 Date: Wed, 18 Aug 1999 00:02:47 +0200 From: Thomas M. Farrelly s720@ii.uib.no Subject: What is a kernel?
Francois-Rene Rideau wrote: > > > Is Faré really pro monolithic kernel design? Possibly. He is definitely > > carrying pro arguments. > I am against microkernel, that's different. > My position, that I've developed in the past, > is against any kernel whatsoever. > Search the mailing-list archive for "no-kernel" or "no kernel". > > > I understand the reason a monolithic kernel would be easier to make > > efficient [...]. > > > One important point in TUNES is fine grain modularity. I don't see how a > > monolithic kernel could be fine grained. > Again, that's a matter of looking at things from the right point of view. > We want modularity _at the high-level_, > and efficient folding _at the low-level_. > This means modular high-level objects in a suitable high-level language > compiled into efficient low-level code at the binary level, > without harmful runtime barrier to cross. > At the high-level, it's modular. > At the low-level, it looks like the whole TUNES universe > runs into an all-encompassing monolithic "kernel". Yes, and for that you need some way to determine dependencies among the high-level modules in order to create efficient lowlevel code. I recall the term 'absorbation' used in this context. I could be wrong, but imagine that the process taking some highlevel representation and mapping it to a lowlevel representation is called an absorbation. Then there are some interesting things to know about the absorbation mechanism before you even start designing it. Let's call it A(O) - absorbing an object O. It must be context dependent, so let's rather call it A(C,O). The reason it must, at first glance, necessarily be context dependent is: If the system is reflective, it could in some context alter A, and then you'll have two different A's depending on context. But there are other reasons why it should be context dependent. First let's look at C, the context. C should consist of _both_ the highlevel definition of the context _and_ the corresponding lowlevel definition. In fact, the highlevel definition corresponds to the structure of the context or the way it should be interpreted, while the lowlevel definition corresponds to the state of the context. Now A can inspect O. Where O references its context, A would look in C using the structure ( for typechecking or just figuring out where things are ) and build A(C,O) so that it directly references the corresponding state. This is no problem because A has all the information it needs. Now, the interesting things. First of all A(C,O) is a function. But is the inverse of A(C,O) a function? The inverse of A(C,O) would be something corresponding to decompilation or reverse compilation. But not decompilation of an entire program, but rather one step in the decompilation of something. Now, that means that if you did the inverse of A(C,O) on all objects in the system you would have built C. But C is the context and you already know that by definition - it's something you keep track of during runtime. Ok - the inverse of A(C,O) is pointless to implement because you'll never need it. This means that it's always possible to reason about something in a highlevel way, because the decompilation is trivial. And reasoning at the highlevel can seamlessly be translated to the lowlevel for efficiency. So that is one pro. Another pro is: The implementation of A requires C to be kept track of at runtime, and this appears to be a major drawback, because it can potentially be costly to keep track of.
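To fix the shape of what I mean, here is a sketch with entirely made-up names (and only a sketch): absorbing a high-level object into the running system yields low-level code plus an updated context, and the inverse direction never needs to be implemented.

-- Made-up names, just to pin down the shape of A(C,O) described above.
data HighLevel = HighLevel { objName :: String }   -- structure: how to interpret things
data LowLevel  = LowLevel  { code    :: String }   -- state: where things actually are

data Context = Context
  { structure :: [HighLevel]
  , state     :: [LowLevel]
  }

-- A(C,O): resolve O's references through the context's structure, emit
-- low-level code pointing at the corresponding state, and return the
-- altered context along with it.
absorb :: Context -> HighLevel -> (LowLevel, Context)
absorb ctx o =
  let compiled = LowLevel ("code for " ++ objName o)
  in  (compiled, ctx { structure = o : structure ctx
                     , state     = compiled : state ctx })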
The good news is that only A(C,O) will contribute any changes to C because it _is_ the absorbation mechanism, i.e. that which takes some subject and absorbs it into context, thus altering the context. Well, basically you get the snappy modularized feel on the outside (highlevel) while it's all a big chunk on the inside. It's duck philosophy - nice and calm over water but below the surface it paddles like hell. For consistency's sake I'll include a problem about A(C,O) as well. Imagine that you do A1(C,A2). That is, you make the system absorb a new definition of its absorbation mechanism. It doesn't cause an endless recursive call to A, but all hell breaks loose anyway. Because everything in context is dependent on A, so the whole context would need to be recompiled. But this is really a standard problem that is unavoidable at some point anyway - if you want reflection that is. You could always tell the user that "Disk in drive A: is full, or attempt to absorb A, or GURU 3f2f3:f323f - have a nice day." > > or even, dependent on our definition of kernel, a micro kernel with > > some framework added. > Well, in as much as the CPU architecture forces upon us things like > privilege levels as soon as we want to use paging, > you might look at it this way. > But there could be many services that don't care the least about current > CPU privilege and don't force a barrier switch. > So just assume that this CPU privilege stuff never existed during the design of the system. > > But technically you could have the efficiency of a huge static chunk of > > program with the modularity of a micro kernel design. > Repeat after me: > the micro kernel doesn't help with high-level modularity in any way. > microkernel is about uselessly multiplying low-level barrier crossings. the micro kernel doesn't help with high-level modularity in any way. microkernel is about uselessly multiplying low-level barrier crossings. [ actually, I used cut and paste ] > > > What if you had a monolithic kernel and wanted to make it into a micro > > kernel. > Then I'd recommend a psycho-analysis. > > Once again: > micro kernel isn't about high-level modularity, but low-level barriers. > High-level modularity is gained by using a high-level modular language, > such as CommonLISP (Genera), SML (Fox), Modula-3 (SPIN), > Oberon (Native Oberon), Erlang (Erlang/OTP), etc. > Yeah sure, but the kernel is there in the system in some way or the other. The question is what is necessary _inside_ the kernel? And how will highlevel modularity make the kernel nonexistent? I'm beginning to feel like 'kernel' is just silly - very silly indeed. ( Notice how I totally agree with you that no-kernel is the way to go, it's just that I do not know what to remove or add in order to get there. I mean, when you talk about big and small and even nonexistent kernels, it sounds more like an assault on information theorists. ) I try again: Which of the following does not fit in a microkernel: I/O, processes, bootcode, ADTs, GUI, the stack, the heap, security, memory management ( GC ), networking, minesweeper. =============================================================================== Thomas M. Farrelly s720@ii.uib.no www.lstud.ii.uib.no/~s720 =============================================================================== From fare@tunes.org Wed, 18 Aug 1999 00:34:36 +0200 Date: Wed, 18 Aug 1999 00:34:36 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Runtime invariance (was: Re: What is a kernel?) > It can't, but it is my belief that a kernel isn't necessary.
> I have been formulating techniques > (someone may well have beaten me to this punch, I really need to do > some in-depth searching on this) > for turning loosely-coupled, dynamically loaded code > into a, well, a temporary monolith, through the idea of runtime invariance. This kind of technique has already been used, albeit in a semi-manual way, in some OS kernels by Sun, thanks to external expertise: it consists in doing partial-evaluation at run-time, a technique whose leaders are the Compose team at irisa.fr (unhappily, people unconvinced of the necessity of free software). Unlike what you propose, the run-time partial-evaluation they do is based on explicit manually-specified patterns, instead of introspection. Very special cases of dynamic optimization based on (stubborn) introspection exist in SELF to dynamically optimize method dispatch for the most used cases. I agree that the kind of things you describe (i.e. dynamic metaprogramming of the system based on introspection) is what we'd ultimately expect from TUNES. Regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] Is eating the flesh around one's nails is considered as anthropophagy, and does it qualify one for eternal damnation to burn in hell? From jecel@lsi.usp.br Wed, 18 Aug 1999 14:17:11 -0300 Date: Wed, 18 Aug 1999 14:17:11 -0300 From: Jecel Assumpcao Jr jecel@lsi.usp.br Subject: Tunes OS review update Tom Novelli wrote: > > On Sat, Aug 14, 1999 at 02:47:39PM +0200, Emmanuel Marty wrote: > > Hello all, > > > > This is just to announce that the OS review has been > > updated; no major breakthrough in presentation, but new links, > > a new "dead projects" section as an attempt not to pollute the > > actually alive ones :), and a lot of new links. > > I like that.. you're net getting our hopes up about some OS, only to find > it's dead. :) I really liked this as well. This page is a wonderful resource and I hope links to it get spread to all the more frequently used reference pages. "Merlin OS" is now called "Self/R", where the "R" is for "reflective". The idea is that the language show be the most visible part in the system, not the OS. I still use the name "Merlin" for the hardware, however. > > This will really need a database of sorts sometime.. > > > > URL is obviously still http://www.tunes.org/Review/OSes.html > > Hey, I'll keep that in mind while I'm checking out *nix database systems. > Now that I'm doing databases quite a bit at work, this looks easy. Maybe we > could use Postgresql with a CGI script for searching/browsing, with > maintenance directly through Postgresql. > > Is a database really necessary though? What can you do with one that you > can't do without one? "Database of sorts" can mean even some text formatted in columns (like most Unix configuration files). Having something like that would allows several views of the list to be presented, which is certainly interesting. I was going to write an "OS table CGI" in Python but haven't found the time yet. 
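For concreteness, a rough sketch of what such a column-formatted "database of sorts" and one view over it could look like, written here in Common Lisp rather than Python; the field layout and the file name oses.txt are invented for illustration and are not anything the review actually uses:

  ;; One OS entry per line, fields separated by tabs: NAME, STATUS, URL.
  (defun split-on-tab (line)
    (loop with start = 0
          for pos = (position #\Tab line :start start)
          collect (subseq line start pos)
          while pos
          do (setf start (1+ pos))))

  (defun read-os-table (file)
    (with-open-file (in file)
      (loop for line = (read-line in nil)
            while line
            collect (split-on-tab line))))

  (defun view-by-status (table status)
    "One possible 'view': only the projects with a given STATUS."
    (remove-if-not (lambda (row) (string= (second row) status)) table))

  ;; (view-by-status (read-os-table "oses.txt") "dead")
  ;; => the dead-projects view; other views are just other filters or sorts.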
Since I am currently wrapping up the stuff I started in 1983, you can be sure I'll get to this eventually :-) -- Jecel From core@suntech.fr Wed, 18 Aug 1999 22:25:56 +0200 Date: Wed, 18 Aug 1999 22:25:56 +0200 From: Emmanuel Marty core@suntech.fr Subject: Tunes OS review update Jecel Assumpcao Jr wrote: > Tom Novelli wrote: [OS review dead projects sections] > > I like that.. you're net getting our hopes up about some OS, only to find > > it's dead. :) > > I really liked this as well. This page is a wonderful resource > and I hope links to it get spread to all the more frequently > used reference pages. Wow, thanks both of you :) I'm glad you like the new section and the page. I like to read about OS projects, so the updates just come from my bookmark :) Let's not forget that Fare came up with a good 3/4 of the contents; I took it over, added a lot of comments on the projects and added a bunch of projects/reorganised a little, but it's based on his work :) > "Merlin OS" is now called "Self/R", where the "R" is for "reflective". > The idea is that the language show be the most visible part in > the system, not the OS. I still use the name "Merlin" for the hardware, > however. OK.. I'll change that. And I guess it'd be good measure to have a look at your pages again :) > > > This will really need a database of sorts sometime.. > > > > > > URL is obviously still http://www.tunes.org/Review/OSes.html > > > > Hey, I'll keep that in mind while I'm checking out *nix database systems. > > Now that I'm doing databases quite a bit at work, this looks easy. Maybe we > > could use Postgresql with a CGI script for searching/browsing, with > > maintenance directly through Postgresql. > > > > Is a database really necessary though? What can you do with one that you > > can't do without one? (That's for Tom :) Well, obviously we already do without one. The main points of putting all the links in a database would be: allowing for searches and easy category sorting; allowing user searches; and automatically checking for dead links by periodically trying all the links in the database (you could go through the HTML document parsing it right now, but it'd be a lot more work); you could even allow users to enter and store an alternative URL for a dead link.. things like that. If you want to look into it, you're more than welcome to :) > "Database of sorts" can mean even some text formatted in columns > (like most Unix configuration files). Having something like that > would allows several views of the list to be presented, which is > certainly interesting. True.. It doesn't have to be a full-blown sql-based system, but at least some way to ease maintenance.. I update the review when I have something like 2 or 3 hours of free time to go through all the links and then write up new reviews because it's quite prohibitive. > I was going to write an "OS table CGI" in Python but haven't found > the time yet. 
Since I am currently wrapping up the stuff I started > in 1983, you can be sure I'll get to this eventually :-) Hehe but hopefully not all of my kids will be done with university yet ;) (I don't have kids at the moment, even if I found the "recipient" for them, so that should tell you how long that would be :) Thanks a lot for the comments, keep them coming :) -- Emmanuel From iepos@tunes.org Wed, 18 Aug 1999 15:17:45 -0700 (PDT) Date: Wed, 18 Aug 1999 15:17:45 -0700 (PDT) From: iepos@tunes.org iepos@tunes.org Subject: FW: paradoxes and intuitionist logic > > so, it sounds like unrestricted excluded middle is just about as > > bad as unrestricted deductive theorem... > Just what do you call "deductive theorem"? I mean the theorem (well it is more of a reasoning pattern than a theorem in a specific system, i guess) that says that if A leads to B then 'A -> B' (where '->' is implication). Equivalently, it can be stated as a set of axioms like this: S: (X -> Y -> Z) -> (X -> Y) -> X -> Z K: X -> Y -> X > Note that removing the rule of excluded middle is precisely > what intuitionnistic logics and constructive logics are all about... > they amount to considering only the objective provability of statements, > not their hypothetical and unreachable (hence meaningless) "truth"... > > > some systems (Coq?) seem to fear this kind of issue and thus formulate > > the set of propositions _outside_ the system; that is, the set of > > propositions cannot be refered to within the system as a first-class > > set. this means a whole static type system is necessary. > Type systems are the way we've done ever since Russell & Whitehead's > Principia Mathematica. hmm... i've obviously missed the boat :-) but it is beginning to look to me like restricting axioms to applicable types (sets, things with a particular property, whatever you want to call it) is a really good idea. in the case of the deductive theorem, the restriction would be to propositions. and the restriction will certainly not always be to propositions. as another example, in a naive formulation of number theory, we might have (in addition to ordinary theorems about equality) these unrestrained theorems: (x+y) + z = x + (y+z) x + (-x) = 0 the (unsound) rationale is that since we made up '+', '-', and '0' we can say anything about their relationship we want to. however, if self-reference is allowed (which must be, if the system has an even half-decent combinatory base), then it is fairly easy to prove that "0 = 1" (and that 0 equals anything, for that matter, and thus everything is equal to everything else). this is because we have an ob "Y x.x+1" which is equal to itself plus one; if the system was suitably restricted, it would be impossible to show that this ob was a number, but in the unrestricted system, problems are bound to occur... > > [static type systems] > > this > > is very messy, and it seems like there must be a better approach. > So I'd like to believe. well, I don't see why the type system couldn't be formulated within the system. For instance, the deductive theorem could be formulated as ordinary axioms in the system like this: all x.all y.all z.prop x -> prop y -> prop z -> (x->y->z)->(x->y)->x->z all x.all y.prop x -> prop y -> x -> y -> x there would then be an unrestrained universal instantiation rule that says if "all f" ('f is a universial set') is proven then "f x" ('f contains x as a member') can be derived (there is no need to restrain the rule, since if "all f" is proven, then it is necessarily a proposition). 
also there would be unrestrained modus ponens and unrestrained combinator/lambda reduction. this way, there would be no need for an external type system. are there problems with this approach? or is it just that the improvement is considered as merely technical with no practical benefit; this may be so in systems like Coq, but in a reflective system that has to reason about itself, the simpler 'itself' is, the better. regarding the intuitionist rejection of the excluded middle, i ask, "why?". it clearly leads to paradox when unrestrained, but the deductive theorem also leads to paradox when unrestrained yet intuitionists seem to accept it. my question is: are there statements that are propositions in the sense of the deductive theorem but which lead to paradoxes when the excluded middle is also assumed for them. if this can be shown, then it would be clear that the deductive theorem and excluded middle are two separate properties (well, with some overlapping). on the other hand, i am wondering if it can be shown that the excluded middle follows from deductive theorem. > Regards, > >[ "Far_" | VN: __ng-V_ B_n | Join the TUNES project! http://www.tunes.org/ ] >[ FR: Fran_ois-Ren_ Rideau | TUNES is a Useful, Nevertheless Expedient System ] >[ Reflection&Cybernethics | Project for a Free Reflective Computing System ] >The Constitution may not be perfect, but it's a lot better than what we've got! > Thanks for your comments. - iepos From s720@ii.uib.no Thu, 19 Aug 1999 17:53:32 +0200 Date: Thu, 19 Aug 1999 17:53:32 +0200 From: Thomas M. Farrelly s720@ii.uib.no Subject: Runtime invariance (was: Re: What is a kernel?) Jason Marshall wrote: > > "Thomas M. Farrelly" wrote: > > > One important point in TUNES is fine grain modularity. I don't see how a > > monolithic kernel could be fine grained. Wouldn't that be no kernel at > > all, or even, dependent on our definition of kernel, a micro kernel with > > some framework added. > > It can't, but it is my belief that a kernel isn't necessary. I have been > formulating > techniques (someone may well have beaten me to this punch, I really need to do > some in-depth searching on this) for turning loosely-coupled, dynamically loaded > code > into a, well, a temporary monolith, through the idea of runtime invariance. > The point of bringing up A(C,O) was to say something about that, ... > Runtime invariance, in a nutshell (and by my definition), is a special case of > low-volatility > variables within a system of objects. Many robust, modular systems have sets of > features > the user (be it human, or another object) never use, either in a global frame, or > within a > certain class of usage, and yet they pay a premium to have those features > available. A fully introspective system should, over time, be able to determine, > for instance, that the computer > has only one hard drive, so it can eleminate a number of enumerations and thread > safety constructs and just assume one drive. It should be able to determine that > the user is using > iso-latin1 ubiquitously, and short-circuit Unicode support within the system, > > For the 'highly unlikely' contingencies, one can set write-barriers in appropriate > places (this may require some avoidance of mutator inlining) > ... without getting into all those details :) =============================================================================== Thomas M. 
Farrelly s720@ii.uib.no www.lstud.ii.uib.no/~s720 =============================================================================== From fare@tunes.org Fri, 20 Aug 1999 01:50:42 +0200 Date: Fri, 20 Aug 1999 01:50:42 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: FW: paradoxes and intuitionist logic > I mean the theorem (well it is more of a reasoning pattern than a > theorem in a specific system, i guess) that says that if A leads to B then > 'A -> B' (where '->' is implication). Oh, you mean the ``deduction theorem'' (a metatheorem), don't you? > Equivalently, it can be stated as a set of axioms like this: > S: (X -> Y -> Z) -> (X -> Y) -> X -> Z > K: X -> Y -> X You can deduce a deduction theorem from a system composed of such axioms, but the theorem is a global property of the system that I'm not convinced is invariant when you add new axioms. Note that the deduction theorem can be seen as the fact that you can abstract over any hypothesis, i.e. as the expressibility of a lambda construct in the proof language of your system... > but it is beginning to look to me like restricting axioms to applicable > types (sets, things with a particular property, whatever you want to > call it) is a really good idea. in the case of the deductive theorem, > the restriction would be to propositions. That's the principle of bounded quantification: the variable introduced by a quantifier (forall, lambda, exists, witness) MUST always be bounded by a small enough set, type, etc, so that it be possible to have a well-founded semantics to logical sentences. > we have an ob "Y x.x+1" which is equal to itself plus one; if > the system was suitably restricted, it would be impossible to > show that this ob was a number, but in the unrestricted system, > problems are bound to occur... The classical solution is to have a static type system prevent construction of such paradoxical objects, by ensuring strong normalization of the calculus, such as in Coq (i.e. every term reduces into a normal form in finite time). The disadvantage is that such a type system is incompatible with Turing-equivalence of the calculus. For this reason, some use non-decidable type-systems (such as system F), where the user has to help the system find the right type. The solution I've proposed in my master's thesis (and reused in my lambdaND paper) is to instead remark that termination of evaluation can be used as the ultimate criterion of well-formedness of terms: you thus consider a (non-deterministic) call-by-value lambda-calculus extended with logic primitives, with convergence towards values as the very intuitionnistic "truth" of a logical statement. Instead of having a limited static semantics for "valid obs", you have the fullest possible "dynamic" semantics. > all x.all y.all z.prop x -> prop y -> prop z -> (x->y->z)->(x->y)->x->z > all x.all y.prop x -> prop y -> x -> y -> x In most typed calculi, the type is not an additional condition, but a part of the quantification: all x:prop . In Coq, the above is (x,y,z:Prop)(x->y->z)->(x->y)->x->z > there would then be an unrestrained universal instantiation rule > that says if "all f" ('f is a universial set') is proven then "f x" > ('f contains x as a member') can be derived Would need an additional condition "x well-formed", maybe implicitly given by the quantifier that binds x (in my master's thesis system, all quantifiers were over values, not expressions, although for any expression E, (lambda () E) was a value suitable protecting expression E). 
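The combinators and the paradoxical ob being discussed are easy to poke at directly. A minimal Common Lisp sketch follows; it is an illustration only, not code from the thesis, from Coq, or from anyone on this thread:

  ;; S and K as curried functions.
  (defun k-comb (x) (lambda (y) (declare (ignore y)) x))
  (defun s-comb (f)
    (lambda (g) (lambda (x) (funcall (funcall f x) (funcall g x)))))

  ;; I = S K K, so applying it to anything gives that thing back:
  ;; (funcall (funcall (s-comb #'k-comb) #'k-comb) 42)  => 42

  ;; A call-by-value fixed-point combinator (the Z form of Y).
  (defun y-comb (f)
    ((lambda (x) (funcall f (lambda (v) (funcall (funcall x x) v))))
     (lambda (x) (funcall f (lambda (v) (funcall (funcall x x) v))))))

  ;; The "ob equal to itself plus one":
  ;; (funcall (y-comb (lambda (self) (lambda (n) (1+ (funcall self n))))) 0)
  ;; never returns (it recurses until the stack gives out) -- exactly the
  ;; kind of term a strongly normalizing type system rejects statically,
  ;; and that a "convergence is truth" criterion filters out dynamically.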
> also there would be unrestrained > modus ponens and unrestrained combinator/lambda reduction. > this way, there would be no need for an external type system. > are there problems with this approach? or is it just that the > improvement is considered as merely technical with no practical > benefit; this may be so in systems like Coq, but in a reflective > system that has to reason about itself, the simpler 'itself' is, > the better. I think the system I proposed kind of fits your requirements, although I don't know if it is very practical. > regarding the intuitionist rejection of the excluded middle, i ask, > "why?" Constructivism (see Brouwer). Only consider as "existing" objects that can be readily constructed; only consider as "true" assertions what can be readily proven. This leads to rejecting excluded middle, since (Gödel helping) we know that we cannot (in general) ensure that the unprovable be provably false. > i am wondering if it can be shown that the excluded middle > follows from deductive theorem. Certainly not. The deduction theorem holds in intuitionnistic logic. Regards, [ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ] [ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ] [ Reflection&Cybernethics | Project for a Free Reflective Computing System ] As far as natural selection applies to the human world, we don't ever get to "let nature decide", because we ARE part of that nature that decides. Hence, any claim to "let the nature decide" is just a fallacy to promote one point of view against others, or to ignore one's responsibilities. -- Faré From dem@tunes.org Fri, 20 Aug 1999 09:43:40 -0700 (PDT) Date: Fri, 20 Aug 1999 09:43:40 -0700 (PDT) From: Tril dem@tunes.org Subject: Wanted: Maintainer for the subprojects page Required: How to use CVS (training provided if necessary), basic knowledge of HTML tags, How to edit text files on a unix server (login and edit, or remote transfer and remote edit), How to use e-mail, How to read and write English, How to type on your keyboard. Job Description: Remove all dead subprojects from the page (the old page will be recoverable from CVS if needed), update living subprojects with their current maintainers, keep it updated, add yourself to the list as "Subprojects Subproject", then if you want, you may also do advocacy at trying to get maintainers for subprojects that you think need work done on and sending out messages like this. David Manifold This message is placed in the public domain. From dem@tunes.org Fri, 20 Aug 1999 10:27:33 -0700 (PDT) Date: Fri, 20 Aug 1999 10:27:33 -0700 (PDT) From: Tril dem@tunes.org Subject: "Job" Opportunities On Thu, 15 Jul 1999, Francois-Rene Rideau wrote: > Dear Tunespeople, > since I'll be trying to be a better leader, > I want to define a list of opportunities for potential Tunes contributor. > I'll ask you refine them, and when it's in some good enough shape, > I'll make a web page out of it. Haven't seen any replies to this, hcf just reminded me about it. My opinion is you made the jobs too broad, and nobody has time to commit to any of them. We should write many more, smaller, easier tasks that don't require an ongoing commitment. Then more people will volunteer. Also, you should add a position for overall coordination, to take input from the group and post the consensus of what direction (or conflict of direction) is desired by everyone, then be responsible to "make it happen" by whatever means necessary. 
> * Web master: > Your job will consist in integrating data input > from the Tunes collaboration channel into a usable web site. > Current TODOs include maintaining the pages better, > making them coherent, finding a web developer (see below). > Constraints: see below Instead of one web master, we need a temporary strike force to find a solution for the current web page, organizing a permanent team to maintain it. The tasks need to be done but it is more likely to work with a group than with one person. > * Web developer: > to help the web master, you'll develop a database for the > Review page, the news page, the Glossary, etc. > Constraints: > 1) operations should be reversible (=versioning), especially > since people of divergent opinions will concurrently modify things > 2) should be eventually migratable to tunes. > Advice: either generate pages off-line from CVS'ed files > as is currently done, or use CL-HTTP for on-line data processing. These is the requirements for the new web page. > * Language developer: > your job will be to write an open compiler for a LISP family language, > that can be integrated into TUNES > Constraints: > 1) should quickly be free of C code to allow RTCG on the bare hardware, > as well as easier code analysis. > 2) should work on the bare hardware (retro, clementine, whatever) > as well as OTOP > Advice: once you define a suitable target virtual machine, > you can parallelize (with the help of other people) development > of the compiler and of the runtime support. > In a first time, you may assume read from your Scheme (or CL) implementation. > Later, you could reuse the parsing algorithm described in the CLHS. This sounds like an enormous job. Maybe find another lisp project (rscheme) that is close and recruit people from it or try to convince them to work with you? > * OS developer: > your job will be to grok and integrate oskit drivers > (or directly linux drivers) into retro, clementine, > or another dynamic component-based infrastructure. Not sure how important this is. Shouldn't retro and clementine owners do their own recruiting if they want it? > * Language Theorist: > your job will be to help formalize the theory of reflective languages. > Contact fare@tunes.org if you have the proper qualifications, or are willing > to acquire them within 6 months. Maybe another strike force to FIND people from the academic community who are willing to publish online instead of in proprietary journals, or to work with TUNES. David Manifold This message is placed in the public domain. From dem@tunes.org Fri, 20 Aug 1999 14:47:00 -0700 (PDT) Date: Fri, 20 Aug 1999 14:47:00 -0700 (PDT) From: Tril dem@tunes.org Subject: Volunteers for a coordination group TUNES needs to be more open to ideas from everyone. To do this, in IRC (http://www.tunes.org/files/irc/1999.0820) we have come to an agreement that we need a group of people dedicated to coordination of the TUNES project. In order not to leave anyone out, we bring it to the list so that anyone who wants to BE IN the coordination group may volunteer. E-mail me or reply to this thread within one week's time...although you can probably join later. The qualifications are, basically, that you must not have too strong an opinion about what to do, and you are willing to consider other people's ideas, and you won't flame anyone or discourage them from communicating their ideas. This group will be responsible for receiving ideas about the progress of the project. 
If it does not receive them, it will go out of its way to get some, by sending repeated e-mails, asking on IRC, and any other way necessary. The ideas will be collected together and the group will post a summary. What to do with the ideas? Well, that's a good question. So the group will be responsible for asking that question, facilitating ideas about it, and posting the summary of those ideas. If the second summary doesn't clearly explain what to do with the first summary, then a third batch of ideas is required. The group will repeat the process in as many iterations as necessary until it is clear what to do. Ideas about "it's about time to stop this process" are also welcome, and can be included at any time. The coordination group will also be responsible for carrying out the ideas, or arranging for them to be carried out in some way. David Manifold This message is placed in the public domain. From iepos@tunes.org Fri, 20 Aug 1999 16:17:16 -0700 (PDT) Date: Fri, 20 Aug 1999 16:17:16 -0700 (PDT) From: iepos@tunes.org iepos@tunes.org Subject: FW: paradoxes and intuitionist logic > > we have an ob "Y x.x+1" which is equal to itself plus one; if > > the system was suitably restricted, it would be impossible to > > show that this ob was a number, but in the unrestricted system, > > problems are bound to occur... > > The classical solution is to have a static type system > prevent construction of such paradoxical objects, I'm not really sure what you mean by "construction" of an object; i guess you mean that the type system forbids the object from being represented at all, since it is deemed "meaningless". To me, this seems like a horrible hack (albeit maybe a useful one). > The solution I've proposed in my master's thesis > (and reused in my lambdaND paper) is to instead remark > that termination of evaluation can be used > as the ultimate criterion of well-formedness of terms: > you thus consider a (non-deterministic) call-by-value lambda-calculus > extended with logic primitives, with convergence towards values > as the very intuitionnistic "truth" of a logical statement. Hmm... I don't think I quite understand how your system would work. Surely there will be expressions with normal forms that represent absurd statements (or non-statements, like numbers and sets), right? Anyway, your system is more ambitious than what I've been looking for, in that it is not only a logic system (axioms and rules of inference) but a way for the logic to be carried out (i.e., a calculus). This doesn't seem like a bad approach, since a calculus is going to be necessary at some point if we want computers to be able to do automated reasoning. Anyway, I'll have to look back over your paper, since I think I got lost near the end last time (:-)). > > > all x.all y.all z.prop x -> prop y -> prop z -> (x->y->z)->(x->y)->x->z > > all x.all y.prop x -> prop y -> x -> y -> x > In most typed calculi, the type is not an additional condition, > but a part of the quantification: all x:prop . > In Coq, the above is (x,y,z:Prop)(x->y->z)->(x->y)->x->z That's right. But there seems to me to be interest in a system in which a condition is not a necessary part of quantification (or if it is, it is at least a "first-class" type, that can be talked about within the system). 
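As an illustration of what "first-class" buys here (a sketch of the idea, not of iepos's actual system): if axioms are stored as plain terms, the guard prop is just another symbol in the data, so the system can inspect it, quantify over it, or prove things about it, instead of having it baked into the binder as in Coq's (x,y,z:Prop)... notation.

  ;; The restricted K axiom, all x.all y. prop x -> prop y -> x -> y -> x,
  ;; kept as an ordinary term.
  (defparameter *axiom-k*
    '(all x (all y (-> (prop x) (-> (prop y) (-> x (-> y x)))))))

  ;; PROP is not special: any code walking the term sees it like any
  ;; other constant, which is what makes the condition first-class.
  (defun mentions-prop-p (term)
    (cond ((eq term 'prop) t)
          ((consp term) (some #'mentions-prop-p term))
          (t nil)))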
> > there would then be an unrestrained universal instantiation rule > > that says if "all f" ('f is a universial set') is proven then "f x" > > ('f contains x as a member') can be derived > Would need an additional condition "x well-formed", > maybe implicitly given by the quantifier that binds x > (in my master's thesis system, all quantifiers were over values, > not expressions, although for any expression E, (lambda () E) > was a value suitable protecting expression E). I think you may have misunderstood what I meant by "all f". I mean what is sometimes written as "for all x, f(x)"; for sanity's sake, i mean to use a system based on combinators instead of bound variables, so there is no possibility for clashes. By "all", I mean Curry's "universal generality" which he writes using a symbol that looks kind of like pi. Anyhow... I don't see the need for the "x well-formed" condition, In fact, if it rules out 'x's involving paradoxes (applications of Y and kin, things that may not have normal forms), then the resulting system is unacceptable to me, since it can occasionally be useful to talk about these paradoxes (for instance, so that the system itself could state paradoxes' paradox-hood). remember that i hope to prevent faulty reasoning about paradoxes by restricting specific reasoning patterns rather than tossing them out of the system entirely. > > also there would be unrestrained > > modus ponens and unrestrained combinator/lambda reduction. > > this way, there would be no need for an external type system. > > are there problems with this approach? or is it just that the > > improvement is considered as merely technical with no practical > > benefit; this may be so in systems like Coq, but in a reflective > > system that has to reason about itself, the simpler 'itself' is, > > the better. > I think the system I proposed kind of fits your requirements, > although I don't know if it is very practical. hmm.. well your lambdaND approach did avoid external types, didn't it? i still don't really understand how logic would take place in that system, but i'm still certainly interested in it... > > > regarding the intuitionist rejection of the excluded middle, i ask, > > "why?" > Constructivism (see Brouwer). > Only consider as "existing" objects that can be readily constructed; again, not sure what you mean by "construct". if you mean "represent finitely in some system", then many paradoxical objects would be taken to "exist", since they can be represented finitely in many systems using Y. anyway, i usually take existence to be at-least-one-memberness (of a set); that is, i say that set F exists iff "all x.prop x -> (all y.F y -> x) -> x". but i think by "exists" in this case you mean "well-formed-ness", which seems irrelevant to me... > only consider as "true" assertions what can be readily proven. > This leads to rejecting excluded middle, > since (G_del helping) we know that we cannot (in general) > ensure that the unprovable be provably false. > > > i am wondering if it can be shown that the excluded middle > > follows from deductive theorem. > Certainly not. The deduction theorem holds in intuitionnistic logic. uhh... that does not mean that the excluded middle can't be derived, when stated in a specific form perhaps. as i understand it, there is no known intuitionist refutation of the excluded middle (no counter-example, at least when the deductive theorem holds); they merely do not naively accept it. 
anyway, if the deductive theorem holds universally in the logic, then something is clearly wrong with it, given all the paradoxes that can be derived... but i suppose they work around that by basing their system on a static type system, or bound variables (ooh, what fun :-)). > > Regards, > >[ "Far_" | VN: __ng-V_ B_n | Join the TUNES project! http://www.tunes.org/ ] >[ FR: Fran_ois-Ren_ Rideau | TUNES is a Useful, Nevertheless Expedient System ] >[ Reflection&Cybernethics | Project for a Free Reflective Computing System ] > As far as natural selection applies to the human world, we don't ever get to > "let nature decide", because we ARE part of that nature that decides. Hence, > any claim to "let the nature decide" is just a fallacy to promote one point > of view against others, or to ignore one's responsibilities. > -- Far_ heh heh. i really should stop worrying about all this theoretical stuff and work on a real system, but anyway, this stuff does seem sort of important, or it will be eventually... - iepos From btanksley@hifn.com Fri, 20 Aug 1999 16:44:42 -0700 Date: Fri, 20 Aug 1999 16:44:42 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: FW: paradoxes and intuitionist logic > From: iepos@tunes.org [mailto:iepos@tunes.org] > Subject: Re: FW: paradoxes and intuitionist logic > Anyhow... I don't see the need for the "x well-formed" condition, > In fact, if it rules out 'x's involving paradoxes (applications > of Y and kin, things that may not have normal forms), then > the resulting > system is unacceptable to me, since it can occasionally be > useful to talk > about these paradoxes (for instance, so that the system itself could > state paradoxes' paradox-hood). remember that i hope to > prevent faulty > reasoning about paradoxes by restricting specific reasoning patterns > rather than tossing them out of the system entirely. It's all good and well to state that something is a paradox, but once you've done that you can't use boolean logic, and more than you can consider the set of all sets. Naive set theory is bogus. So is naive logic. If you want to really play with paradoxes, you have to switch to fuzzy logic. > - iepos -Billy From martelli@iie.cnam.fr 23 Aug 1999 15:45:49 +0200 Date: 23 Aug 1999 15:45:49 +0200 From: Laurent Martelli martelli@iie.cnam.fr Subject: microkernels >>>>> "JS" == shapj writes: JS> If you have an example of an academic who has laughed at JS> monolithic designs when research on them has been proposed, make JS> that known. If you do NOT have several concrete examples, then JS> your text is an undeserved slur on a lot of people, and it is a JS> fundamentally dishonest thing to say. You are certainly JS> entitled to that opinion, but a "glossary" is a place for JS> statements of fact, not for opinions. "To me, writing a monolithic system in 1991 is a truly poor idea." (Andy Tanenbaum ) -- Laurent Martelli martelli@iie.cnam.fr From water@tscnet.com Tue, 24 Aug 1999 16:52:52 -0700 Date: Tue, 24 Aug 1999 16:52:52 -0700 From: Brian Rice water@tscnet.com Subject: Arrows in steps At 01:33 PM 8/24/99 -0400, you wrote: >I just finished reading the Arrows paper in totality (finally). I'd >have to say, the more I read, the more it seemed like the same thing I >wished to create with UniOS, actually even before UniOS (but was >obviously talked out of doing, and rightfully so as I didn't have a clue >of how to make it happen). 
I have a few random comments, which would >work well within your system, but unfortunately you're going to have to >answer them without its assistance ;) very cool. thanks. >1) Practical application #1: Creating Ontologies for every major >processor, architecture, OS, and environment, along with basic >programming theory and math (in that order). In this way you'd be able >to analyse a bitstream (a program) and have it recomposed into another >form. For example taking a Windows program, and making it KDE on some >unix variant (due to their similarities in capability). There would be >a huge demand for something of this nature, and it may even be a >money-making opportunity. Or the concept of "dedicated servers", in which an >entire OS environment with a single purpose (web serving, FTP, etc.) >could be created for almost any modelable purpose. This would totally >replace the need for "jack-of-all-trades" type OS's (NT, Unix). >Companies usually only need certain capabilities, so why not implement them >in the most efficient way possible for the given hardware? well, that's quite a lot of work to do, but then there are many programmers to be thrown around these days. the trick of course is to convince them to throw themselves at your own tasks. my focus is more related to ontologies that provide generic frameworks, and to use those to develop ones specific to a processor, etc. also, one big limitation on the ontology notion that i suggest is that translating between various ontologies is very often not computable or simply infeasible. also, if the user requests a translation, then the computing system needs to ask the right questions of the user to construct the desired kind of translation. >2) Big assumption #1: Any binaries the system creates to run (I assume >it can't be interpreted all the time) will be fitted exactly to the >system and its environment. If not, then I imagine this is something >the system would excel at, and should not be ruled out. yes, this is very similar to the tunes idea of partial evaluation of code in steps until actual threaded code is achieved which can be run without interpretive overhead or even kernel overhead (per se). the evaluations would be specific to the current or desired environment and could be dynamically modified. >3) I'm not sure if user interface is really a valid point to bring up >in the document. I think the system and its implementation(s) are >totally separate issues (unless you were stating it for the purpose of >showing that they are irrelevant?). they aren't separate issues for me, since i am considering info systems as self-sustaining entities, which means that the system's capabilities would be reflective and that the implementation is a necessary part of explaining what the system is (should be) useful for. >4) I'm wondering what would be the drawback of implementing the whole >system as a text-based storage system in which arrows could be >represented by textual statements rather than binary operators, making >the data files more readable, and giving the "alpha" system a static >textual command-driven interface. You could implement the whole thing >in a C'ish language and be able to start creating the data right away >(including the model for the real system).
In essence, I'm saying that >maybe there is a way to implement a kludge system that could be used to >create and manipulate the arrow frames (in whatever format you choose), >so that you could get on to making and manipulating arrow frames rather >than worrying about the chicken and egg implementation problem. the simplest system would declare CONS cells for arrows (and chained CONS's for multi-arrows). of course, nesting is a nice syntactic convention, but in the arrow system it is an unnecessary (and undesired) restriction of the potential name-space for arrows. so, the expression syntax would be "A = (B C)", and since the system is reflective, the "=-application" is actually also available as an arrow just as the CONS cell is for the application of a function to an argument. this concept is enough to model as much of the system as can be finitely described. basically, restricting CONS cells to only point to other CONS cells, as well as casting all the elements of an arrow textual specification as CONS cells, is sufficient for now to encode arrow information. the user/coder should always keep in mind the current ontology that they desire to build. the system of course will eventually be capable of analyzing such a development at a fine-scale, able to describe the intermediate states of ontologies (as they are built) as other ontologies. all that is required of an ontology is that it's elements providing meaning can be grouped together, which is relative to other ontologies (say, requiring ontologies to be consistent systems of predicates within a logic). one thing to add: arrows are epistemic constructs and ontologies are built from them. this is the philosophical view on the system's conceptual strategy. >5) I imagine in a full scale implementation of the system, there would >be 3 distinct parts: The Arrow Knowlege Base, the Operating >Environment, and the binaries that run. I assume this is the logical >breakdown of how this system must work. The operating environment, >ideally, was modeled in Arrows, and is (truly) portable across all >systems that use the same basic interfaces. here i assume that you refer to people gathering together and agreeing on standards for encoding arrow information, except that instead of explicitly declaring each arrow, the declarations assume some ontology. this of course is good, but should be fluid and dynamic, to allow these interfaces to adapt and evolve to new uses, etc. >6) Big Assumption #2: You could model and do logical calculations on >problems that aren't conciveable using normal operators found within >normal systems. Like infinite recursion as something useful... actually >the whole concept of infinity I guess. well, the real benefit is not just about infinities, because ordinary logic can do a lot of that. the real advantage is that the arrow system can talk about such things in arbitrary ways (i.e. not limited to standard kinds of predicate logic, etc.). but then, this is a great improvement over computer languages, because they restrict expressions to those which are algorithmic in nature. in arrow, you can encapsulate ideas which are not algorithmic, but which may be calculable when described from another perspective. it's this framework that is inherently independent of concerns for calculability which allows the user to study relationships that other systems would ignore, though they are useful (even applicable to computational systems). 
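A minimal reading of the CONS-cell encoding sketched above in the reply to point 4 could look like the following; the function names are invented, and this is just one way to spell the idea, not Arrow's actual representation:

  ;; An arrow is a CONS cell; multi-arrows would chain such cells.
  (defun arrow (head tail) (cons head tail))

  (defvar *declarations* '()
    "Every \"A = (B C)\" declaration, reified as an arrow itself.")

  (defun declare-arrow (name head tail)
    (let* ((a (arrow head tail))
           (decl (arrow name a)))   ; the =-application, itself an arrow
      (push decl *declarations*)
      a))

  ;; (declare-arrow 'A 'B 'C) builds the arrow for A and also records the
  ;; act of declaring it in the graph, so the system can reflect on it.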
>7) Here's another idea that might have financial merit: The Arrows >Knowledge base is not local to the machines, but only accessible through >the internet, and essentially software publishers model their software >and provide the essential data to create the binaries. However the >entire thing must draw from this internet Arrow-base. In essence you >keep the whole of the knowledge to yourself and charge a fee for access >(a one-time fee, hopefully). Actually I'm not sure about this one >anymore, but I'll leave it here for you to see anyways. well, i don't want to encourage centralization because it is such a natural (read: addictive) tendency of social groups, and can go overboard. however, your ideas are similar to the tunes metaprogramming concepts, and of course arrow supports this in a certain way (which i intend to show is much more general and much more potentially useful). i also intend this system to promote information freedom in a way similar to the bazaar model. my intent with this system is to provide unity for the space of information that people create, in order that ventures farther away from the status quo would not be seen as dangerous. my hopes are that this process will actually promote a unified diversity of human interests as well as promoting utility with respect to that scheme. >8) The idea of modeled graphics and sound, along with algorithms, is >great. I envisioned a Tetris-type game where the basic logic model >existed, and all the graphics for the game were done in something like a >POV-Ray-type language and the sounds something similar, and when >installed it fits itself to the environment. Truly scalable programs... >what a concept... For example if you're running a PDA or a >Game Boy-type system, the graphics would be rendered (only when installed) in a >greyscale format, in a very small block size, and the sound would >be low in size and Hz. However if the user had a screen capable of >1800x1600x32 bits, and had a sound card that did 7000 simultaneous voices >at 48,000 Hz in AC-5 format, then the game would scale to fit that type >of system. Of course the sounds would have to be simple (or if predone, >have to be shipped in very good quality and scaled from there), and >the same goes for the graphics: they'd have to be vector (2d or 3d) or, >if bitmapped, in high-res to scale down from. Anyways, the point >is, modeled games = good, static games = good also because they can be >converted as in point #1. most of your comments fall under the notion of meta-programming, but i believe that you intend more (as i do). the ontology concept allows high(or whatever)-level modelling of a system, and potentially the transformation of those models into other models. with tunes, the high-level description is "meta-programming", which suggests an implicit context of programming, a problematic domain for information-sharing. in other words, the ontologies for a meta-programming system would all address one paradigm: the programming process. instead, i propose that this implicit multiplexing of concepts through the programming paradigm is too restrictive, in that it forces all declared constructs to be computable (processed by the machine alone). the alternative is to make such multiplexing explicit (placing it within a larger framework of information transitions). >9) Multi-headed/tailed arrows... I know these are necessary, however >I'm not sure how this is going to affect garbage collection... or if >garbage collection should even be done.
Imagine a program written for a >specific ontonology, but then due to garbage collection, the ontonology >gets canned because it's represented in another form somewhere else... >cleans up the database but messes up the model. Is this even possible? >10) Practical Application #2: Language barriers. This system could be >used as a universal translator for human language, even from a voice >sample. Geeze, can't see any practical application for that...hehe shhh... :) (of course it will still take a lot of thought to put into a framework for langauges, but then i've been researching linguistics all along. so, yes, i do have plans in that direction) >11) Another thought: There is no argument that this system as just >another system that "re-invents the wheel". It's not re-inventing, it's >analysing the wheel, then representing it in a different format, and >then mass producing wheels of all conciveable varieties. hehe... not a bad analogy. however, i'm not sure if it could be used to describe the use of the epistemology vs ontology idea and the notion of relativism as it applies. >Hmm... that's it for tonight I guess, sorry it's so mangled I was tired >and a bit excited when I wrote it all, I'll have more later, or when you >reply. > >Pat Wendorf >beholder@unios.dhs.org thanks for the feedback From martelli@iie.cnam.fr 25 Aug 1999 13:38:34 +0200 Date: 25 Aug 1999 13:38:34 +0200 From: Laurent Martelli martelli@iie.cnam.fr Subject: What is an aspect ? >>>>> "TMF" == Thomas M Farrelly writes: TMF> I try again: TMF> Which of the following does not fit in a microkernel: I/O TMF> processes bootcode ADT's GUI the stack the heap security memory TMF> management ( GC ) networking minesweeper I think dividing things into kernel/user-space is wrong. This is not a good way of seeing things. I'd rather talk about aspects. "Minesweeper" would belong to the functional aspect, where you *describe* the services available to the user of the system. Security would be another aspect, describing which services are available to which users. You don't need to know about security to describe the Minesweeper service. And you shouldn't have to think about a particular GUI neither. The GUI could be a "click-and-play" one, or it could be a one line text screen. Memory management, networking and others are just runtime optimisation issues. If you execute the minesweeper on a single machine, you don't have to care about network. And obviously, the minesweeper should not be network aware. But it should work in a network environment if needed. To achieve this, only the interpretor running the minesweeper need to be network aware. That's why I think the question is not "What is a kernel", but rather what is an aspect. -- Laurent Martelli martelli@iie.cnam.fr From alaric@alaric-williams.com Wed, 25 Aug 1999 14:31:49 +0100 (BST) Date: Wed, 25 Aug 1999 14:31:49 +0100 (BST) From: Alaric Williams alaric@alaric-williams.com Subject: "Job" Opportunities On Fri, 20 Aug 1999, Tril wrote: > Haven't seen any replies to this, hcf just reminded me about it. > My opinion is you made the jobs too broad, and nobody has time to commit > to any of them. We should write many more, smaller, easier tasks that > don't require an ongoing commitment. Then more people will volunteer. We could also turn it the other way around and see what we have to work with: I could spare a few hours a week, but not on any particular day of the week etc. since my schedule shifts around a lot. 
I know Unix, PHP3, CVS, Apache, network security, and Web development in general (it's my job). I have my own server machine to play with (which is the current UK tunes.org mirror :-), so I have some useful resources. I like pizza. I get on well with people. I am not photogenic. ABW ---==========================[ http://www.alaric-williams.com/ ]============--- Almost three thousand years ago, I was on the committee who designed Christianity... but it took us nine hundred years to get it past the risk assessment subcommittee :-( ---=========[ alaric@alaric-williams.com ]==================================--- From dufrp@oricom.ca Wed, 25 Aug 1999 14:04:45 -0400 Date: Wed, 25 Aug 1999 14:04:45 -0400 From: Paul Dufresne dufrp@oricom.ca Subject: Tunes core existed in 1991 He he he, I admit this subject is meant to gather attention. But that's not really a lie either. Reading "A Lisp Through the Looking Glass" at http://www.cb1.com/~john/thesis/thesis.html I really have the feeling of someone explaining his Tunes implementation. Actually I just finished the Introduction (Chapter 2 of 15), so I can't give many details on the how. But it's about writing a meta-evaluator for a reflective tower of evaluators, where each evaluator in the tower evaluates the program under it (an evaluator for the program under it). Well, I hope I am right, but it's not a problem if I am wrong, because you will read it for yourself. :-) Up to now in my reading, I can say it explains quite well the concepts of reification and reflection, both of which we often just call reflection. That alone would make it good reading. Now, an interpreter is the combination of a language description and an evaluator. I have read between the lines that, by having reifier operators parametrized by a language description, and by using this reflective tower, he has been able to do an implementation (Platypus) that lets you mix languages. Which is pretty much what I understood to be the core of Tunes. Now the concept of a reflective tower goes back to Smith in 1982, according to this. So I guess it is not much new in itself. As far as I know, the Metakit also has such a reflective tower. (It would be cool if someone with a Mac tried this Metakit and made some comments about it.) But I guess this thesis goes very much along the lines of what Tunes is looking for, that is, a reflective environment to evaluate many programming languages. Maybe there is other stuff since 1991 that is even closer to what Tunes wants, but somehow I doubt it. So, the question I will probably ask is: why not just go ahead and use this work? Paul Dufresne From gonz@ratloop.com Wed, 25 Aug 1999 14:27:17 -0400 (EDT) Date: Wed, 25 Aug 1999 14:27:17 -0400 (EDT) From: Pete Gonzalez gonz@ratloop.com Subject: Tunes core existed in 1991 On Wed, 25 Aug 1999, Paul Dufresne wrote: > Reading "A Lisp Through the Looking Glass" at > http://www.cb1.com/~john/thesis/thesis.html I really have the feeling > of someone explaining his Tunes implementation. BTW there's a postscript version here: http://www.cb1.com/~john/thesis/thesis.ps.gz -Pete From jmarsh@serv.net Wed, 25 Aug 1999 12:51:26 -0700 Date: Wed, 25 Aug 1999 12:51:26 -0700 From: Jason Marshall jmarsh@serv.net Subject: What is an aspect ? Laurent Martelli wrote: > I think dividing things into kernel/user-space is wrong. This is not > a good way of seeing things. I'd rather talk about > aspects. "Minesweeper" would belong to the functional aspect, where > you *describe* the services available to the user of the > system.
Security would be another aspect, describing which services > are available to which users. You don't need to know about security to > describe the Minesweeper service. And you shouldn't have to think > about a particular GUI neither. The GUI could be a "click-and-play" > one, or it could be a one line text screen. Memory management, > networking and others are just runtime optimisation issues. Bravo. This pretty well expresses my opinion on the subject of how a reflective 'safe' environment should operate. Each piece of software may (and likely, will) see a different facade that represents 'everything else' currently accessible to it. That facade essentially amounts to a list of services (and depending how you frame the picture, clients) that it can interact with. What one entity sees as the networking services, or persistant storage services, may be drastically different from what another entity within the system sees. > execute the minesweeper on a single machine, you don't have to care > about network. And obviously, the minesweeper should not be network > aware. But it should work in a network environment if needed. To > achieve this, only the interpretor running the minesweeper need to be > network aware. Exactly. If one were to load a more computationally intensive game, like Chess, it doesn't need to see the network (unless it's a multi-player game), but the runtime may decide to farm off some speculative move calculations (looking ahead at the next move while the opponent is considering his own move) to the cluster of SGI's running in the next room. The chess program doesn't even know this is going on, because the runtime is acting as a trusted intermediary; it knows how to access the net without giving undue priveledge to the program on whose behalf it is acting. If the calculation isn't conducive to this sort of parallelism, the runtime may still be able to ask trusted machines to either perform optimizations for it (assuming you can totally trust native code that comes to you over a NIC), or ask them to test out highly speculative optimizations for you, and report on their success or failure (a bit of a brute-force method of optimization). > That's why I think the question is not "What is a kernel", but rather > what is an aspect. Well, there still exists in this notion of a 'runtime' some qualities that could be arguably attributed to a kernel, could they not? Regards, Jason Marshall From gonz@ratloop.com Thu, 26 Aug 1999 01:54:02 -0400 (EDT) Date: Thu, 26 Aug 1999 01:54:02 -0400 (EDT) From: Pete Gonzalez gonz@ratloop.com Subject: Tunes core existed in 1991 On Wed, 25 Aug 1999, Paul Dufresne wrote: > Actually I just finished the Introduction (Chapter 2 of 15) and so I > can't give much details on the how. But it's about writing a > meta-evaluator for a reflective tower of evaluator, where each > evaluator in the tower, evaluate the program under it (an evaluator > for the program under it). Well, I hope I am right but it's not > a problem if I am wrong because you will read it by yourself. :-) Hmm... judging from his web page, it appears he hasn't done any further work on the project since 1991, which isn't a great sign. Also, like you I found that by Chapter 2 he still hadn't actually given anything except vague formalisms. Hopefully this is a consequence of it being a thesis (i.e. the political need to look like a lot of work =) ), but really if it's the Right Thing it should be possible to clearly explain the gist of the design in just a few pages. 
(This is historically true of any major language innovation -- COM, OOP, exception handling, functional programming -- it takes years to discover, but only a short explanation and an example to clearly present.) The closest "gist" I could find was this: http://www.cb1.com/~john/research/PhD/PhD.html Maybe I'm just still to new to all of this, but I had a lot of trouble seeing anything revolutionary there. -Pete From dufrp@oricom.ca Thu, 26 Aug 1999 07:28:57 -0400 Date: Thu, 26 Aug 1999 07:28:57 -0400 From: Paul Dufresne dufrp@oricom.ca Subject: Tunes core existed in 1991 On Thu, Aug 26, 1999 at 01:54:02AM -0400, Pete Gonzalez wrote: > On Wed, 25 Aug 1999, Paul Dufresne wrote: > Hmm... judging from his web page, it appears he hasn't done any further > work on the project since 1991, which isn't a great sign. he he he, but there is much more recent things going on with the same kind of approach, read further. >Also, like you > I found that by Chapter 2 he still hadn't actually given anything except > vague formalisms. Hopefully this is a consequence of it being a thesis > (i.e. the political need to look like a lot of work =)) I like that because he explain things for a newbie like me. I think he explain with Common Lisp code quite clearly the how on following chapters, it's just that I am not sure I am bright enough to follow. >but really if > it's the Right Thing it should be possible to clearly explain the gist of > the design in just a few pages. I guess so, chapter 3 describes how it includes in the closure of a procedure (or function), the interpreter with which to evaluate the closure. > Maybe I'm just still to new to all of this, but I had a lot of trouble > seeing anything revolutionary there. > I think that's me that find this good because I am new to this, and you find this unrevolutionnary because you are already aware of other stuff like this. Now doing on http://fermivista.math.jussieu.fr a research with "+reflect* +tower +interpreter" I found 21 documents speaking of that kind of stuff. Like Refci language. I guess Fare was already aware of these, but they need to make their place on the links of our Reflection page. -Paul From alexis.read@ucl.ac.uk Thu, 26 Aug 1999 13:51:24 +0000 Date: Thu, 26 Aug 1999 13:51:24 +0000 From: Alexis Read alexis.read@ucl.ac.uk Subject: Tunes core existed in 1991 >Now doing on http://fermivista.math.jussieu.fr a research >with "+reflect* +tower +interpreter" I found 21 documents speaking >of that kind of stuff. Like Refci language. > >I guess Fare was already aware of these, but they need to make their >place on the links of our Reflection page. Another good language for reflection is maude, but I've heard no-one discussing it, either for or against (any comments Fare?). You can find it, along with tutorials and reflection papers at: http://maude.csl.sri.com I've also done a summary at: http://www.ucl.ac.uk/~zccap74/lang.htm (just click on maude) Alexis Read From dufrp@oricom.ca Thu, 26 Aug 1999 08:59:25 -0400 Date: Thu, 26 Aug 1999 08:59:25 -0400 From: Paul Dufresne dufrp@oricom.ca Subject: Maude: a tower-reflective meta-language Now understanding better about reflection, thanks to "A Lisp, Looking Through the Looking Glass", I came back to Maude manual to discover that Maude too is of the kind reflective-tower, not just flat-reflective. 
And according to the Manual, "...the most interesting application of
this module are metalanguage applications, in which Maude is used to
define the syntax, parse, execute, and pretty-print the execution
results of a given object language or tool."

So it looks to me like very recent languages like Maude already have
most of what we'd like to implement in a Scheme-like language, except
that it is based not on the lambda calculus but on rewriting logic. It
has tower reflection, GC (I think), and persistence.

Now I need to get an implementation running on my computer and begin to
play with it. It is not an easy language, though: I have trouble with
the evaluation strategy, and I never seem to guess the results of the
manual's examples correctly.

I also found that part of the manual is a good prerequisite for reading
"A crash course of Arrow Logic", which is itself a prerequisite for
Brian's Arrow System. :-)

-Paul

From m.dentico@teseo.it Thu, 26 Aug 1999 19:56:32 +0200
Date: Thu, 26 Aug 1999 19:56:32 +0200
From: Massimo Dentico m.dentico@teseo.it
Subject: SLK: The Safe Language (No-)Kernel Project

Below is a quote (emphasis is mine) on a no-kernel OS from the CS
department of Cornell University. Sorry, but I'm unable to review it for
lack of time (I need a lot of time to write decently in English).

---------------------------------------------------------------

SLK: The Safe Language Kernel Project
(http://www.cs.cornell.edu/slk/)

[..]

SLK relies on the properties of type-safe languages in order to enforce
protection boundaries between applications and the OS itself which means
that all code can run in a single address space and at a single hardware
privilege level. The first version of SLK is heavily Java based but a
significant part of our research effort lies in understanding how to
host multiple languages. For example, we plan to integrate ML into the
family of languages supported by SLK.

The most fundamental difference between the Secure Language Kernel (SLK)
and a traditional operating system is the fact that the entire system
runs in a single address space and at a single hardware protection
level. There is no memory management hardware that prevents one
application from accessing another's memory and there is no hardware
privilege mode differentiating instructions executed in the kernel from
those executed in an application. Instead, all protection is enforced by
the language system. Languages used under SLK must be type safe and the
compiler must provide enough information to the run-time system to allow
protection boundaries to be enforced.

The motivation for relying on software for protection is threefold:
light weight, seamless extensibility, and flexibility in the form of
fine grain sharing. Under the assumption that the language system can
enforce protection, it is natural to propagate this new property through
the system and eliminate redundant functionality in an attempt to reduce
complexity and improve efficiency. This is also the primary technical
motivation behind Sun's upcoming JavaOS [Mad], but unlike JavaOS, SLK
focuses on servers and fine-grain sharing of data and code across
protection boundaries. In this sense it continues a decade-long trend in
OS design in moving functionality into user-level and generally blurring
the user-kernel boundary. **SLK removes the user-kernel boundary
ENTIRELY.**

[..]
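[A sketch of the structural idea behind the language-enforced protection
described above. This is not SLK code and every name in it is invented;
SLK relies on Java/ML type safety and compiler-supplied information,
whereas the sketch only shows the shape of the argument: a program is
handed references to exactly the service facades it is granted, and the
language gives it no way to forge a reference to anything else, so no
memory-management hardware is needed to separate applications.]

class StorageService:
    """A facade restricting an application to one region of a shared store."""
    def __init__(self, store, prefix):
        self._store = store     # shared system-wide dictionary
        self._prefix = prefix   # the region this application may touch
    def get(self, key):
        if not key.startswith(self._prefix):
            raise PermissionError(f"no access to {key!r}")
        return self._store.get(key)
    def put(self, key, value):
        if not key.startswith(self._prefix):
            raise PermissionError(f"no access to {key!r}")
        self._store[key] = value

def minesweeper(storage):
    # The game only ever sees the facade it was handed; it has no idea whether
    # the store is local, remote, or shared with other programs.
    best = storage.get("minesweeper/highscore") or 0
    storage.put("minesweeper/highscore", max(best, 42))

system_store = {"minesweeper/highscore": 17, "mail/inbox": "..."}
minesweeper(StorageService(system_store, "minesweeper/"))
print(system_store["minesweeper/highscore"])   # -> 42; "mail/inbox" was unreachable

[Python only enforces the underscore convention socially, of course; the
SLK argument is that a type-safe language plus compiler-checked
interfaces can make this kind of boundary as strong as a hardware one.]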
-- Massimo Dentico

From m.dentico@teseo.it Sun, 29 Aug 1999 12:36:39 +0200
Date: Sun, 29 Aug 1999 12:36:39 +0200
From: Massimo Dentico m.dentico@teseo.it
Subject: Wegner OOPSLA'95, Interaction vs Turing Machines (was: so-called Turing-Equivalence)

François-René Rideau wrote:
>
> >: Tim Bradshaw on comp.lang.lisp
> > Another formalism of equivalent power to a UTM is the lambda
> > calculus
> I'm _sick_ of hearing such a meaningless statement of "being of
> equivalent power to a UTM" repeated over and over again as a bad
> excuse for bad programming language design.
>
> I question the meaningfulness of your term "Turing-equivalence".
> What definition of it do you use, if any?
> Equivalent _up to what transformation_?
> Do these transformation correspond to anything _remotely_
> meaningful *in presence of interactions with the external world
> (including users)*?

I think that Peter Wegner's paper is illuminating on this subject.
A quotation of the abstract and of section 2.3 follows, because the
original text is quite long (67 pages); this way you can get an idea of
the content for yourselves and decide whether you want to read it
entirely. However, I hope I don't annoy anyone with this long citation.
Sorry for my horrible English; any correction is welcome.

==================================================================

OOPSLA Tutorial Notes, October 1995
Tutorial Notes: Models and Paradigms of Interaction
Peter Wegner, Brown University, September 1995
(http://www.cs.brown.edu/people/pw/papers/oot1.ps)

Abstract: The development of a conceptual framework and formal
theoretical foundation for object-based programming has proved so
elusive because the observable behavior of objects cannot be modeled by
algorithms. The irreducibility of object behavior to that of algorithms
has radical consequences for both the theory and the practice of
computing.

Interaction machines, defined by extending Turing machines with input
actions (read statements), are shown to be more expressive than
computable functions, providing a counterexample to the hypothesis of
Church and Turing that the intuitive notion of computation corresponds
to formal computability by Turing machines. The negative result that
interaction cannot be modeled by algorithms leads to positive principles
of interactive modeling by interface constraints that support partial
descriptions of interactive systems whose complete behavior is
inherently unspecifiable. The unspecifiability of complete behavior for
interactive systems is a computational analog of Gödel incompleteness
for the integers.

Incompleteness is a key to expressing richer behavior shared by
empirical models of physics and the natural sciences. Interaction
machines have the behavioral power of empirical systems, providing a
precise characterization of empirical computer science. They also
provide a precise framework for object-based software engineering and
agent-oriented AI models that is more expressive than algorithmic
models.

Fortunately the complete behavior of interaction machines is not needed
to harness their behavior for practical purposes. Interface descriptions
are the primary mechanism used by software designers and application
programmers for partially describing systems for the purpose of
designing, controlling, predicting, and understanding them. Interface
descriptions are an example of "harness constraints" that constrain
interactive systems so their behavior can be harnessed for useful
purposes.
We examine both system constraints like transaction correctness and
interface constraints for software design and applications.

------------------------------------------------------------------

[...]

2.3 Robustness of interactive models

The robustness of Turing machines in expressing the behavior of
algorithmic, functional, and logic languages has provided a basis for
the development of a theory of computation as an extension of
mathematics. Interaction machines are an equally robust model of
interactive computation. Each concept on the left-hand side of figure 24
has greater expressive power than the corresponding concept on the
right-hand side. Moreover, it is conjectured that left-hand-side
concepts have equivalent expressive power captured by a universal
interaction machine, just as right-hand-side concepts have equivalent
expressive power of a Turing machine (universal algorithm machine).

   ...                              has richer behavior than    ...
   universal interaction machine        universal algorithm machine
   interactive problem solving          algorithmic problem solving
   open system                          closed computing system
   programming in the large             programming in the small
   object-based programming             procedure-oriented programming
   distributed AI                       logic and search in AI
   scientific modeling paradigm         mathematical reasoning paradigm
   philosophical empiricism             philosophical rationalism
   robust equivalence of behavior

   Figure 24: Parallel Robustness of Interaction and Turing Machines

Interaction machines provide a common modeling framework for
left-hand-side concepts just as Turing machines provide a common
framework for right-hand-side concepts. The equivalence of the first
five right-hand-side entries is the basis for the robustness of Turing
machines. The corresponding left-hand-side entries are less familiar,
and there has not until recently been a crisp notion of expressive power
for interactive problem solving, open systems, programming in the large,
object-oriented programming, or distributed artificial intelligence.
Interaction machines provide a crisp model for these concepts and allow
the equivalence of behavior and of interactive problem-solving power to
be expressed as computability by a suitably-defined universal
interaction machine.

The somewhat fuzzy notion of programming in the large can be precisely
defined as interactive programming. Large entirely algorithmic programs
of one million instructions do not qualify, while modest interactive
systems with a few thousand instructions do. The identification of
programming in the large with interactive programming implies that
programming in the large is inherently nonalgorithmic, supporting the
intuition of Fred Brooks that programming in the large is inherently
complex. Interaction machines provide a precise way of characterizing
fuzzy concepts like programming in the large and empirical computer
science and elevate the study of models for objects and software systems
to a first-class status independent of the study of algorithms.

The role of interaction machines in expressing the behavior of
scientific models and empiricism is less direct than their role in
expressing open systems, object-based programming, and distributed
artificial intelligence. The identification of interactive problem
solving with the scientific (empirical) modeling paradigm follows from
the correspondence between interaction and observability (interaction
from the point of view of an agent or server corresponds to
observability by an external client).
Interaction machines express processes of observation and capture the
intuitive notion of empiricism. Moreover, the logical incompleteness of
interaction machines corresponds to descriptive incompleteness of
empirical models. Modeling by partial description of interface behaviors
is normal in the physical sciences.

The incompleteness of physical models is forcefully described by Plato
in his parable of the cave, which asserts that humans are like dwellers
in a cave that can observe only the shadows of reality on the walls of
their cave but not the actual objects in the outside world. Plato's
pessimistic picture of empirical observation caused him to deny the
validity of physical models and was responsible for the eclipse of
empiricism for 2000 years. Modern empirical science is based on the
realization that partial descriptions (shadows) are sufficient for
controlling, predicting, and understanding the objects that shadows
represent.

The representation of physical phenomena by differential equations
allows us to control, predict, and even understand the phenomena
represented without requiring a more complete description of the
phenomena. Similarly, computing systems can be designed, controlled, and
understood by interfaces that specify their desired behavior without
completely accounting for or describing their inner structure or all
possible behavior. Turing machine models of computers correspond to
Platonic ideals in focusing on mathematical tractability at the expense
of modeling accuracy. To realize logical completeness, they sacrifice
the ability to model external interaction and real time. The extension
from Turing to interaction machines, and of procedure-oriented to
object-based programming, is the computational analog of the liberation
of the natural sciences from the Platonic worldview and the development
of empirical science.

The correspondence of closed systems and philosophical rationalism
follows from Descartes' characterization of rationalism by "Cogito ergo
sum", which asserts that noninteractive thinking is the basis for
existence and knowledge of the world. Interaction corresponds precisely
to allowing internal computations (thinking processes) of agents (human
or otherwise) to be influenced by observations of an external
environment. The correspondence between rationalism and empiricism and
algorithmic and interactive computation is thus quite direct. The
demonstration that interaction machines have richer behavior than Turing
machines implies that empirical models are richer than rationalist
models. Fuzzy questions about the relation between rationalism and
empiricism can be crisply formulated and settled by expressing them in
terms of computational models.

The equivalent expressive power of imperative (Turing machine) and
declarative (first-order logic) models lends legitimacy to computer
science as a robust body of phenomena with many equivalent models,
including not only Turing machines but also the predicate and lambda
calculus. Church's thesis expresses the prevailing view of 20th century
formalists that the intuitive notion of computing is coextensive with
the formal notion of functions computable by Turing machines. However,
Church's thesis is a rationalist illusion, since Turing machines are
closed systems that shut out the world during the process of computation
and can very simply be extended to a richer intuitive notion of open,
interactive computation that more accurately expresses the interactive
behavior of actual computers.
Declarative and imperative systems compute results from axioms,
arguments or initial inputs by rules of inference, reduction rules, or
instructions. The interactive paradigm extends the declarative and
imperative paradigms by allowing initial conditions distributed over
time. This extension is analogous to extending differential equation
techniques from one-point to distributed boundary conditions. In a
distributed interactive system we further extend initial conditions to
distribution over both space and time. Distribution of initial
conditions over space is familiar from multihead Turing machines and
does not by itself increase expressive power. However, distribution over
time increases expressive power:

   declarative paradigm: initial conditions (axioms + theorem)
                         + rules of inference -> (yes + proof) or no

   imperative paradigm:  initial value (precondition) and program
                         yields result (postcondition)

   interactive paradigm: initial conditions distributed over space and
                         time, imperative or declarative rules

This analysis suggests classifying paradigms by the distribution of
external interactions over space and time. Differences among inner rules
of computation have no effect on expressive power, while extension from
one-point to distributed interaction over time increases expressive
power. Interaction machines derive their power from their ability to
harness the interactive power of the environment. Their expressive power
is comparable to that of managers who need not have inner ability to
solve a problem since they can harness the problem-solving power of
their employees.

[...]

-- Massimo Dentico

From m.dentico@teseo.it Sun, 29 Aug 1999 12:39:02 +0200
Date: Sun, 29 Aug 1999 12:39:02 +0200
From: Massimo Dentico m.dentico@teseo.it
Subject: Wegner OOPSLA'95, "[...] formal correctness proofs have a limited role as evidence for the correctness of interactive systems."

Another citation from Wegner's paper (emphasis is mine, of course),
followed by some of my questions.

==================================================================

OOPSLA Tutorial Notes, October 1995
Tutorial Notes: Models and Paradigms of Interaction
Peter Wegner, Brown University, September 1995
(http://www.cs.brown.edu/people/pw/papers/oot1.ps)

[...]

3.1 Irreducibility and incompleteness

Gödel incompleteness for the integers reflects the incompleteness of
many other domains whose sets of true assertions are not recursively
enumerable, including that of interaction machines. It strikes a blow
against reductionism, and by implication against philosophical
rationalism. The irreducibility of semantics to syntax, which is
considered a fault from the viewpoint of formalizability, becomes a
feature of empirical models in permitting empirical semantics to
transcend the limitations of notation.

Plato's despairing metaphor that our view of the real world consists of
mere reflections of reality on the walls of a cave was turned around by
the development of empirical models that predict and control such
partially-perceived reflections of reality. Empirical models accept
inherent incompleteness and irreducibility, developing methods of
prediction, control, and understanding for inherently partial knowledge.
Empirical computer science should, like physics, focus on prediction and
control in partially specified interactive modeled worlds, since this
provides greater modeling power than completely formalizable algorithmic
models.
In showing that the integers were not formalizable by first-order logic,
Gödel showed that Russell and Hilbert's attempts to formalize
mathematics could not succeed. These insights also open the door to
showing the limitations of formalization for computing. The idea of
incompleteness, introduced by Gödel to show that the natural numbers are
not reducible to first-order logic, may be used also to show the
irreducibility of interaction machines and more generally of empirical
systems whose validation depends on interaction with or observation of
autonomous environments. Turing machines lose their status as the
natural, intuitive, most powerful computing mechanism but retain their
status as the most powerful mechanism having a sound and complete
behavior specification.

**These limitations of logic imply that FORMAL CORRECTNESS PROOFS HAVE A
LIMITED ROLE AS EVIDENCE FOR THE CORRECTNESS OF INTERACTIVE SYSTEMS**.
Hobbes' assertion that "reasoning is but reckoning" remains true since
logic is a form of computing, but the converse assertion that "reckoning
is but reasoning" is false since not all computing can be reduced to
logic.

**PROOFS OF CORRECTNESS FOR ALL POSSIBLE INTERACTIONS ARE not merely
hard but IMPOSSIBLE FOR SOFTWARE SYSTEMS BECAUSE THE SET OF ALL POSSIBLE
INTERACTIONS CANNOT BE SPECIFIED. Testing provides the primary evidence
for correctness, and other evidence like result checking is needed to
check that the result actually produced is adequate. Even partial result
checking, like parity checking, can greatly increase confidence in a
result. Formal incorporation of on-line result checking into critical
computing processes reinforces off-line evidence of testing with
post-execution evidence that the actual answer is correct. The nature of
evidence for the adequacy of computations, the relation among different
kinds of evidence, and THE LIMITATIONS OF FORMAL METHODS AND PROOF in
providing such evidence ARE IMPORTANT FUNDAMENTAL QUESTIONS.**

==================================================================

1) Are these statements acceptable ("true")? (I don't believe I have an
adequate mathematical preparation to be able to object, but the
reasoning seems correct to me.)

2) If the answer is "yes", what will be the role of correctness proofs
in Tunes? Will it be limited to the purely algorithmic part? And how
important will the non-interactive part (algorithms) be? (Wegner defends
the central role of interactive systems in computer science.)

-- Massimo Dentico
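[An illustration, not taken from Wegner's paper and not TUNES code: one
concrete way to picture the algorithm / interaction-machine distinction
quoted above is the difference between a function applied once to an
initial input and a transducer that keeps accepting inputs supplied only
after computation has begun. The names below are made up for this
sketch.]

def algorithm(initial_input):
    """Closed computation: everything it will ever know is present at the start."""
    return sum(initial_input)

def interaction_machine():
    """Open computation: a coroutine whose next output depends on inputs
    (here, values from an environment or user) that arrive only over time."""
    total = 0
    move = yield              # input action: wait for the environment
    while move is not None:
        total += move
        move = yield total    # emit an observable response, then wait again
    return total

print(algorithm([3, 4]))      # -> 7, from a fixed initial input

m = interaction_machine()
next(m)                       # start the coroutine
print(m.send(3))              # -> 3
print(m.send(4))              # -> 7; the environment chose 4 after seeing 3

[The environment's later choices can depend on the machine's earlier
answers, which is exactly the kind of history a one-shot function of its
initial input does not model.]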