From m.dentico@teseo.it Thu, 01 Jun 2000 03:09:06 +0200 Date: Thu, 01 Jun 2000 03:09:06 +0200 From: Massimo Dentico m.dentico@teseo.it Subject: Type checker as staged computations [Fwd: [stack] Digest Number 26] On Concatenative mailing list concatenative@egroups.com - http://www.egroups.com/group/concatenative Massimo Dentico wrote: > > Mark van Gulik wrote: > > [...] > > I'm mostly concerned, however, with the ability to build *restrictions* in the > > language. > > I completely agree with this. In particular, the key concept is > ".. the ability to build *restrictions* in the language". This means > that *you*, the programmer, build the restrictions when you really > need them, and it's not the language designer who restricts you in an > arbitrary way. > > The ability to restrict a language is crucial to reconcile > extensibility and safety. PLAN-P is such an example: > > - http://www.irisa.fr/compose/plan-p/ > > --------------------------------------------------------------------------- > > PLAN-P : a Programming Language for Active Networks and Protocols > > Active networks are aimed at incorporating programmability into the > network to achieve extensibility. An approach to obtaining extensibility > is based on downloading router programs into network nodes. Although > promising, this approach raises several critical issues: expressiveness > to enable programmability at all levels of networking, safety and security > to protect shared resources, and efficiency to maximize usage of bandwidth. > > PLAN-P is a domain-specific language for implementing application-specific > protocols. It allows applications to program network routers. [...] The key > characteristics of PLAN-P are: > > [...] > > Safety and security. Because the language is restricted, many properties > are automatically verifiable. For example PLAN-P ensures global termination > and guarantees no packet loss or exponential duplication. > > [...] 
> > --------------------------------------------------------------------------- [...] > > As for "the language is the type system" in Avail... that's only because I > > got to the implementation of the type system before most of the other things > > in my stacks of notes. Hm, maybe you meant it in the sense of "the type > > declarations of Avail are written in Avail code". I think I was > > interpreting it more like "everything interesting in Avail has to do with > > the type system" when I first read it. > > This is what I meant: "the type declarations of Avail are written in Avail code". > Another way to express it is: "you blur the distinction between expression > languages for values and types". This means that you reduce the number of > different syntactic *and* semantic elements in the language. This is closely > related to staged computations in my mind: > > a type system in a language (statically or dynamically checked) is useful > for restricting run-time values of expressions (and, in some cases, of variables); > static typing ensures at compile-time that the program will satisfy these > restrictions; but, in a language where type declarations are written in > the language itself, the only element that differentiates these (type > declarations) from other code is the stage (compile-time vs run-time). > In a language which offers explicit stage annotations (as in Forth with > words like IMMEDIATE) the distinction between the language and the type > checker is completely blurred (the code for the checker is not different > from other IMMEDIATE code). > Of course, such a language needs to be carefully designed. In fact it is easy to see problems of circularity: "how to type check the type checker?". Note that this is usually hidden but *always* present: a compiler designer could write (and usually does write) the compiler, and so the type checker, in a statically typed language. Comments? Is it relevant for Tunes HLL? 
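The staging idea can be sketched in Python (a hypothetical illustration; the `checked` decorator and its names are invented here and are not part of Forth or Avail): the "type checker" is ordinary code distinguished only by *when* it runs — at definition time, the analogue of Forth's compile-time IMMEDIATE stage, rather than at call time.

```python
# Hypothetical sketch: the checker is ordinary code that simply runs at
# an earlier stage -- at definition time (the "compile-time" stage),
# like a Forth IMMEDIATE word -- rather than at call time.

def checked(fn):
    # Definition-time stage: read the declared restrictions once.
    hints = {k: v for k, v in fn.__annotations__.items() if k != "return"}

    def wrapper(*args):
        # Run-time stage: enforce the restrictions built at the earlier stage.
        for value, (name, hint) in zip(args, hints.items()):
            if not isinstance(value, hint):
                raise TypeError(f"{name} must be {hint.__name__}")
        return fn(*args)
    return wrapper

@checked
def add(x: int, y: int) -> int:
    return x + y
```

Here the same language expresses both the program and its restrictions; only the stage at which each piece of code executes differs.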
-- Massimo Dentico From fare@tunes.org Fri, 2 Jun 2000 03:55:15 +0200 Date: Fri, 2 Jun 2000 03:55:15 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Ken Dickey on a new OS... Dear Ken, sorry for this late reply. > [restating the obvious..] > > I find that converging on and working toward a specific 'target' > goal set clarifies a number of concrete details and > exposes design problems. Starting from a 'taproot' > system which can be generalized has an advantage in growing > communities of users & developers from a working base. Sure. > E.g. pick an under $200 hw computing platform [...] I fear that if we are to be an internet-distributed project, our computing platform will have to be mostly PCs (and maybe PowerMacs), and/or the UNIX C virtual machine. In any case -- yuck. While this is a problem, it might still be effective, since there's a large body of existing code for PCs (OSkit, *BSD, Linux, lots of small stuff, including retro), and I know several people working on OS infrastructure for PC and/or PMac who are likely to open their sources. > and work on the UI (from the top) and the TUNE (from the bottom). Uh, what do you call the TUNE, here? > Start with a dynamic system, develop/adapt a good IDE => > fast mutate/learn/update cycle [make mistakes faster and cheaper > than anyone else]. The substrate should be stable enough early on > to allow rapid learning and having a coherent target means that the > momentum converges rather than going 'brownian'/random. That's how we see things indeed. >> The BIG problem is to build a usable initial core. > > I guess the questions I have are: "How does what you want differ from > the closest already available approximation?" and "How can you make use > of the user community which supports this?". I fear that what we want differs a lot from the closest already available approximation, and that there is no user community in which to tap directly. 
Indeed, one of the major features we want is orthogonal persistence: no need to explicitly reify, save, load, error-check, or intern data (or worse, code) anymore; the system should manage that by default (leaving the possibility to handle it manually). Such a thing doesn't exist in the mainstream computing world; it kind of exists in PDAs and handheld calculators; it has been implemented on mainstream hardware (Eumel, PJama, PLOB!); but in closed ways that make it unsuitable both to talk to existing services and to extend the system for customized services. Orthogonal persistence is a pervasive feature that requires synergy with the rest of the system; without the ambition to take over the whole system and implement all its services, it is a vain feature. But even then, orthogonal persistence isn't the only feature we're interested in, or we'd just take Texas or RScheme and be happy. The key feature we want is dynamic reflection, the ability of the system to dynamically inspect and reify its state so as to reason about it, to analyse or instrument code, to migrate code, make it persist, etc; more generally, so as to dynamically extend the system in ways not necessarily designed in advance by the system implementers. Coming with reflection is some kind of tight system integration, where all system components can easily talk with each other inside the same object system without ever having to go through ad-hoc parsers and unparsers, configuration files, command-line protocols, etc, because you can just use the builtin system services, variable inspection, etc. Maybe the nearest thing ever implemented to do that was the LISP machine; I can't say, I don't have a LispM (yet -- am on the verge of buying one). Another greatly hackable computer was the HP28/HP48/HP49 series of Reverse-Polish-Lisp-based handheld calculators. (The latter had orthogonal persistence, too!) Certainly Squeak also qualifies as a near target. 
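A minimal sketch of what "the system should manage that by default" could mean, in Python (hypothetical; the `world.img` image file and the `boot`/`snapshot` names are invented for illustration): the program never saves or loads individual objects; the whole object graph is snapshotted and restored as one image.

```python
import atexit
import os
import pickle

IMAGE = "world.img"  # assumed image file name (illustrative)

def boot():
    """Restore the persistent world if an image exists, else start fresh."""
    if os.path.exists(IMAGE):
        with open(IMAGE, "rb") as f:
            return pickle.load(f)
    return {}

def snapshot():
    """Save the whole object graph in one step -- no per-object file I/O."""
    with open(IMAGE, "wb") as f:
        pickle.dump(world, f)

world = boot()
atexit.register(snapshot)  # mutations to `world` survive restarts
```

Real orthogonal persistence goes much further (it covers code and running processes, not just picklable data), which is why it needs the whole-system synergy described above.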
But with all of these, the user still depended on system-inaccessible software: the LISP machines depended on their $$$$ specialized hardware; the HP calculators depended on their non-hackable ROM; even Squeak depends on a C runtime that isn't Squeak-hackable, although you can mostly save the Squeak image and restart with a new runtime. This matters inasmuch as the user isn't able to fully manage the evolution of the system from within the system; he cannot build e.g. total quality management as automatic internal tools, or whole-system reasoning, or system-managed migration to a new underlying hardware/software run-time platform. Also, I'm not satisfied with the currently implemented models of reflection as an ad-hoc feature, instead of an instantiation of a more general ability to serve as a universal metasystem for arbitrary other computerized systems, with respect to development, execution, manipulation, reasoning, etc. This is of course particularly important for system evolution, where the future system is not exactly the same as the current one, and the ability to meta-control one is not the same as the ability to meta-control the other, so that an ad-hoc reflective loop can but fail to provide both these features at once when needed. All this to say, I'm not sure what "nearest" to what we want to do means, or if it is meaningful at all. It looks to me like we're striving towards some kind of infrastructure that just doesn't exist yet. But maybe I'm just deluding myself. > [Obviously, I do not yet have a concrete mental model > of what you are proposing]. I'm not sure most of us have a concrete enough model either. There are many details that escape me. > I have been part of a number of development efforts/communities > (e.g. in Scheme/Smalltalk/Dylan/...) which have been taking > various fundamental approaches for a couple of decades now, > which is why I am open to "radical rethink". 
> However, I was trained as an engineer before getting into CS > and I tend to reach for existing solutions, > particularly those which have done significant work, > have a research/developer community (injecting new ideas) > as well as a user community (beating the ideas into shape > and throwing out the ones that don't work). > Again, my questions are "what am I trying to achieve?" > and "how do I get there with the least work/resources?". > If I can leverage, what specific missing fundamentals are required > to get ahead? What problems are there that need to be eliminated? > [Do I really need to build from scratch? > It is fun, but it is also a lot of work. > What is the requirement which drives this "ground up" approach?]. > It _does_ take a long time to build a dog from amino acids.. On the other hand, providing emulation for existing systems or translation from them has a constant (albeit large) cost (i.e. doesn't increase with the number of ported applications). So, considering the large base of free software, we are not starting from scratch. >> Hum. Would you be available in one year from now, when I find funding? > Probably--if I am not already working on such a solution > (perhaps in another context). I'll contact you. If you start something, I'm interested in hearing about it, too. > I have looked through various docs (arrow, etc.) I fear the arrow paper is not the right thing to read for concrete stuff. Actually, there is currently no coherent documentation about our concrete goals, only lots of information scattered around the various subprojects and the mailing-list archive. The only documents that are currently maintained are the FAQ and the Glossary. 
The FAQ includes a list of concretely meaningful features that we want TUNES to have, which distinguish it from other systems: a modular concurrent programming model based on a safe high-level language (not unsafe C processes separated by a paranoid kernel); orthogonal persistence (not explicit low-level file management), software-based safety (hardware protection being only the last resort), dynamic reflection (no state-lossy reboot ever needed for either process or whole machine), etc. > but my experience is that the more abstract things are, > the more concrete the examples must be to illustrate what is going on. > I tend to learn well from examples. > Can you point me to more specific examples/docs > which illustrate the higher-level reflective capabilities > you are referring to? [I am familiar with computational > reflection/reification and somewhat with "machine learning" technologies > but less familiar with specific AI ontologies which are computationally > tractable/efficient with small resource consumption. > I'm a bit out of date w.r.t. the ai research literature.]. Unhappily, no. The Interfaces/ and Migration/ subprojects show ramblings about simple intended uses of reflection. More ramblings are scattered in web-archived mailing-lists. Note that we do not directly aim at complex AI technology, at least not at first; our first aim is the whole-system-reflection infrastructure that an AI can later put to good use. >> The ability for the user to dynamically define or select new >> low-level implementation strategies is thus essential to achieve >> a universal system, one that can _express_ solutions to all problems. > > It only needs to express problems that most people are interested in. ;^) > I disagree: if it is not a UNIVERSAL system, able to express all problems, then any success of it may only delay the advent of such a universal system, not bring it nearer. 
Indeed whatever feature you make inexpressible in your system, sometimes someone somewhere will make a discovery that might be useful to everyone at large, but depends on that inexpressible feature so as to work reliably, or at all, without rewriting everything from scratch. For instance, consider process persistence or migration; if your programming language doesn't support it, it's hell to do it manually; alternatively, on some hardware, you could hack your operating system to do it transparently, except that it still won't be reliable in the presence of file reopening failure, since your language has no way to catch and handle such events. Another feature you may consider is capability-based security; if your system can't express capabilities, if they have to be implemented manually using user-level mechanisms, then your security mechanism is purely advisory, and the first-come non-compliant or buggy program can break it. Note that the ability to dynamically retro-instrument running code in a way coherent with modifications to the original high-level source code (i.e. dynamic compile-time reflection) DOES allow you to express all such features; I believe it does provide for a universal system. But it's kind of like the assembly-level of universal systems, upon which you have to build useful abstractions. > Again, I am most interested in meaningful, useful solutions > for ordinary people. So am I; but I'm convinced that better software infrastructure is instrumental in enabling end-user solutions that are currently out-of-reach. [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] If debugging is the process of removing bugs, then programming must be the process of putting them in. -- Dijkstra From kend0@earthlink.net Thu, 01 Jun 2000 23:01:48 -0700 Date: Thu, 01 Jun 2000 23:01:48 -0700 From: Kenneth Dickey kend0@earthlink.net Subject: Ken Dickey on a new OS... 
Francois-Rene Rideau wrote: > > E.g. pick an under $200 hw computing platform [...] > I fear that if we are to be an internet-distributed project, > our computing platform will have to be mostly PCs (and maybe PowerMacs), > and/or the UNIX C virtual machine. In any case -- yuck. Having standard, inexpensive hardware could help a distributed project converge. If we develop a custom OS, it delays having to pay the communication, porting and testing costs. > > and work on the UI (from the top) and the TUNE (from the bottom). > Uh, what do you call the TUNE, here? I am hoping to find out from you. > > I guess the questions I have are: "How does what you want differ from > > the closest already available approximation?" and "How can you make use > > of the user community which supports this?". > > I fear that what we want differs a lot from the closest already available > approximation, and that there is no user community in which to tap directly. > Indeed, one of the major features we want is orthogonal persistence: No problem. I find this a very good reason for doing something differently. ... > Certainly Squeak also qualifies as a near target. ... > even Squeak depends on a C runtime that isn't Squeak-hackable, Actually it is a Smalltalk runtime which cross-compiles to C. [E.g. the GC is written in Smalltalk]. You probably are also aware of Scheme48 which has a runtime written in PreScheme and cross-compiled to C. Scheme48 has a number of reflective features--or near enough with a little hacking. > > [Obviously, I do not yet have a concrete mental model > > of what you are proposing]. ... > > I have looked through various docs (arrow, etc.) > I fear the arrow paper is not the right thing to read for concrete stuff. > Actually, there is currently no coherent documentation about our concrete > goals, only lots of information scattered around the various subprojects > and the mailing-list archive. I'll keep reading. 
Later, -KenD From water@tscnet.com Sat, 03 Jun 2000 11:23:27 -0700 Date: Sat, 03 Jun 2000 11:23:27 -0700 From: Brian Rice water@tscnet.com Subject: Diktuon, the Tunes Web Database, and Slate (soon to be) News Hello again, all. This is *not* ready for production work just yet, but I felt that the amount of work being done on this long-belated Tunes administrative project deserves some notification. At http://diktuon.arrow.cx (Yes, that's *my* domain name), you'll find the entry to a new architecture for Tunes documentation development. You can't edit it without password authorization (we hope :), but it doesn't support namespaces (at the moment). At any rate, there's an overview of the syntax for the nodes to be entered and how inter-node links are specified. The whole idea is to have a strongly modular documentation web. My current findings on Slate and Arrow will go there, and I'm doing all of my enhancements to the original Tunes structure there, incrementally. Slate is very close to a final specification, but I'm working pretty continuously on various things at once, including the DB. Tunes members are more than welcome to contact Corey, the DB administrator for the moment, and ask for access to manipulate nodes. I'm not leaving his email address here. I suggest you use the #tunes IRC channel to contact him and discuss it there. Of course you should study the existing structure and the way we're doing business before you add to it, so that we know what changes have to be made when they must be made. (And yes, several system-wide changes are planned.) The entire OS, Language, and Reflection review sections have yet to be started, so it's a perfectly good time to begin there. Thanks, ~ "Every day, computers are making people easier to use." 
From dem@tunes.org Sat, 3 Jun 2000 14:21:58 -0700 (PDT) Date: Sat, 3 Jun 2000 14:21:58 -0700 (PDT) From: Tril dem@tunes.org Subject: Diktuon, the Tunes Web Database, and Slate (soon to be) News On Sat, 3 Jun 2000, Brian Rice wrote: > [...] there's an overview of > the syntax for the nodes to be entered and how inter-node links are > specified. The whole idea is to have a strongly modular documentation web. Good idea. Wiki was lacking authentication, and I didn't like the naming convention of NodeName (two capitals required). Such a thing could be done in Zope, but it is probably just as easy to write one from scratch. > [...] > The entire OS, Language, and Reflection review sections have yet to be > started, so it's a perfectly good time to begin there. I'm working on the OS, Language and VM in Zope and Postgresql again, and making progress. However, I haven't designed any such web of documents to handle the entire tunes web, so this is welcome. Let's leave it in the air which system is to be used for the Review since neither is done yet. -- David Manifold http://bespin.dhs.org/~dem/ This message is placed in the public domain. From alaric@alaric-williams.com Sun, 4 Jun 2000 01:09:45 +0100 (BST) Date: Sun, 4 Jun 2000 01:09:45 +0100 (BST) From: alaric@alaric-williams.com alaric@alaric-williams.com Subject: Diktuon, the Tunes Web Database, and Slate (soon to be) News On Sat, 3 Jun 2000, Brian Rice wrote: > At http://diktuon.arrow.cx (Yes, that's *my* domain name), you'll find the I have argon.cx, although I've not yet moved stuff over to there :-) ABW -- http://RF.Cx/ http://www.alaric-williams.com/ http://www.warhead.org.uk/ alaric@alaric-williams.com ph3@r mI sk1llz l3st I 0wn j00 From jbowers@perspex.com Mon, 05 Jun 2000 20:44:58 +0000 Date: Mon, 05 Jun 2000 20:44:58 +0000 From: Joseph Bowers jbowers@perspex.com Subject: Machine-Code Reflection (RFC) This is just a request for comment/maybe something I've missed in the review project... 
Someone recently brought up a problem with Squeak on this mailing list- the Squeak core is compiled from Smalltalk to C, and then from C to machine code, and isn't really accessible from the Squeak environment anymore- I've never messed with Squeak (or written a compiler) but this seems to speak to a pretty big general problem- something of some kind, a VM/Kernel or whatever (even a Tunes/LLL interpreter) is going to be written in Assembler, or compiled, or somehow end up represented as a bitstring interpretable by some chip. This seems to want to be pretty opaque. How might reflexive machine code be handled? There is going to have to be a little bit present, and it seems kind of sad to have some inviolate core at the center of the system. I suppose something like

    some code...
    JMP past_comment
    metadata here
    past_comment: more code

is reasonable, as is accompanying all code segments with descriptor segments or something, but I haven't a clue as to the details of how this has been done in the past, or how it might be done, or how to get out of having to do it without sacrificing some important capabilities... How has this been approached? Joe From btanksley@hifn.com Mon, 5 Jun 2000 14:14:57 -0700 Date: Mon, 5 Jun 2000 14:14:57 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Machine-Code Reflection (RFC) From: Joseph Bowers [mailto:jbowers@perspex.com] >How might reflexive machine code be handled? There is going >to have to be a little bit present, and it seems kind of sad >to have some inviolate core at the center of the system. I suppose >something like One common way, and my favorite, is to compile to some kind of bytecode or wordcode, and use a dynamic optimiser to convert that to machine-code as needed. When code modification happens, it happens to the bytecode, and when it's over the machine code can be deleted (or more accurately, the bytecode can be tagged as unoptimised, so the next time it's executed it'll be recompiled). 
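Billy's scheme can be sketched in Python (hypothetical; `Word`, the toy "bytecode" operations, and the closure standing in for machine code are all invented for illustration): the bytecode is the canonical, reflectively visible form, and the compiled form is only a cache that modification invalidates.

```python
# Hypothetical sketch: bytecode is the source of truth; "machine code"
# (here, a compiled closure) is a cache invalidated on modification.

class Word:
    def __init__(self, bytecode):
        self.bytecode = bytecode      # canonical, reflectively visible form
        self.native = None            # cached compiled form

    def modify(self, new_bytecode):
        """Reflective change happens to the bytecode only."""
        self.bytecode = new_bytecode
        self.native = None            # tag as unoptimised; recompile lazily

    def execute(self, arg):
        if self.native is None:       # recompile on next execution
            self.native = self.compile()
        return self.native(arg)

    def compile(self):
        # Stand-in for a dynamic optimiser: fold the instruction list
        # into a single callable.
        ops = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}
        steps = [ops[op] for op in self.bytecode]
        def run(x):
            for step in steps:
                x = step(x)
            return x
        return run
```

The point of the design is that reflection never has to read or patch the opaque compiled form; it only ever sees the bytecode.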
>Joe -Billy From thomas.mahler@itellium.com Tue, 06 Jun 2000 09:47:17 +0200 Date: Tue, 06 Jun 2000 09:47:17 +0200 From: Mahler Thomas thomas.mahler@itellium.com Subject: Machine-Code Reflection (RFC) Hi all, There has been an interesting approach in the Symbolics Lisp machines: All high-level code was compiled to a certain microcode. This microcode was interpreted by the "real" hardware, but compilers etc. only had to know the microcode instruction set. The microcode was loaded at boot time and was freely adjustable. It was a common practice to have special microcode for Lisp- and a different microcode for Prolog-based applications. I don't know if there were any code-morphing techniques (i.e. runtime optimization or JIT compilation) used. I also don't remember if the microcode could be (reflectively) modified at runtime. regards, Thomas From kyle@arcavia.com Tue, 06 Jun 2000 06:05:34 -0400 Date: Tue, 06 Jun 2000 06:05:34 -0400 From: Kyle Lahnakoski kyle@arcavia.com Subject: Machine-Code Reflection (RFC) Joseph Bowers wrote: > How might reflexive machine code be handled? There is going > to have to be a little bit present, and it seems kind of sad > to have some inviolate core at the center of the system. 
I suppose > something like > > some code... > JMP past_comment > metadata here > past_comment: more code > > is reasonable, as is accompanying all code segments with descriptor > segments or something, but I haven't a clue as to the details of > how this has been done in the past, or how it might be done, or > how to get out of having to do it without sacrificing some important > capabilities... I consider executable code to be a particular view of the function. The code exists as a block of bytes. This block of bytes is logged in an entry somewhere that associates it with the other attributes of the function. This log entry could also store a second (readable) copy for inspection. Therefore it would not matter whether the block of code is directly interpretable or not; it can be the result of a one-way function. My project stores everything in tables; here is a simplified version of what I am doing:

    Function_Type
        ID              // unique ID for this function
        Start_StateID   // points to a state machine describing function
        Return_TypeID   // what this function returns

    CompiledFunction
        ID              // one-to-one match with ID of Function_Type
        CompiledCode    // block of compiled code

The CompiledFunction table has a matching record ID for any function that has been compiled. -- ---------------------------------------------------------------------- Kyle Lahnakoski Arcavia Software Ltd. (416) 892-7784 http://www.arcavia.com From m.dentico@galactica.it Thu, 08 Jun 2000 06:13:39 +0200 Date: Thu, 08 Jun 2000 06:13:39 +0200 From: Massimo Dentico m.dentico@galactica.it Subject: Metaprogramming & language design [Was Re: [stack] Digest Number 28] Mark van Gulik on Concatenative mailing list wrote: > > From: Massimo Dentico > > Sorry for the delay Mark, but I'm busy these days and I need much time > > to write in English. Some subjects are already addressed by Bill; I agree > > with him, thus I do not intend to be redundant. 
> > > > Mark van Gulik wrote: > > > > > > Massimo Dentico wrote: > > > > Mark van Gulik wrote: > > > > > I'm positive that Forth is applicable over an enormous range of domains, > > > > > mostly due to its core simplicity and extreme extensibility. I'm mostly > > > > > concerned, however, with the ability to build *restrictions* in the > > > > > language. > > > > > > > > I completely agree with this. In particular, the key concept is > > > > ".. the ability to build *restrictions* in the language". This means > > > > that *you*, the programmer, build the restrictions when you really > > > > need them, and it's not the language designer who restricts you in > > > > an arbitrary way. > > > > > > I almost agree. The language designer also has the responsibility of > > > building the right restrictions *into* the language, such that it will have > > > clean semantics and high reliability (the two tend to go together). For > > > example, in another thread I argued that conservative garbage collection > > > (CGC) in Forth is really CGC in some proper subset of Forth. That's because > > > Forth's semantics are too strongly defined in terms of machine > > > representation to allow CGC (in the full language). A redesign of Forth is > > > possible to accommodate CGC for the full language, but it wouldn't look much > > > like Forth afterwards. Basically issues like GC, type systems, > > > optimization, and design-by-contract must be *designed into* a language, not > > > added later (typically by subtractive synthesis as with CGC and retrofitted > > > type systems). > > > > I disagree here. The advantage of a meta-programming (constructing > > programs that inspect/manipulate other programs) tool with reflective > > capabilities (ability to inspect/modify itself even if, in Forth, it is fairly > > ad-hoc) like Forth is exactly this: the tool has the ability to absorb and > > arrange disparate syntaxes/semantics. > > Suppose you want to find all calls to some routine. 
An integrated > development environment like Smalltalk allows you to do this search via > metaprogramming. I agree that this is a good thing. On the other hand, the > compiler in Smalltalk can be modified (each class can use a different > Compiler class to compile the class's methods). That is probably a bad > thing, because it means the code that searches through the library for all > calls to some routine might not work any more. Or maybe the browsers stop > working, or exception handling breaks because of assumptions about the > compiler, etc. If you define a clean, generic protocol for the Compiler > class you can avoid some of these problems, but this is just language > design! > > I'm in favor of lots and lots of metaprogramming. I'm *not* in favor of > leaving the system so open to fundamental changes that you can no longer > trust a single line of code to do what you expect it to. Some kinds of > metaprogramming are this severe. Recapitulation: we agree about metaprogramming as a good thing; we agree about the necessity to express restrictions in metaprograms, better if in the language itself; we disagree about built-in restrictions. > Consider optimization. If a wide-open metaprogramming system allows the > compiler to be augmented with new code to perform optimizations, this is > probably a Bad Thing. There is a system that does this, Pliant. The author differentiates optimizations done only at the intermediate level (machine-independent) and machine-specific optimizations: Pliant - release XX (current is 39) - http://pliant.cx/ in particular "The Pliant language specifications" [..] "Meta programming: the great step": - http://pliant.cx/pliant/language/ > Without strict rules or strongly-worded guidelines about what invariants > an optimization must maintain, someone might add an optimization that > breaks the ability to do symbolic debugging, or breaks the ability > to search for all calls to a routine. 
It would be much better if the > rules were made very explicit. Exactly, *very explicit*, not implicit in the language. > In that sense, the optimization framework would deal with the metaprogramming > stuff, and present a safe, simplified interface for user-supplied optimization > rules to use. > > Treating most of the metaprogramming facilities as part of a language rather > than merely a library, I believe the language designer must be *very* > careful about what's available in the metaprogramming interface. What I propose is to put metaprogramming facilities *and* restrictions into libraries, reconciling flexibility and extensibility with safety. This is not a problem if you have the ability to build restrictions in the language itself: you don't need another notation. What if you build into the language some restrictions that you consider absolutely necessary to ensure a certain safety level, and in the future someone discovers a better way to ensure the same safety level but with fewer restrictions on expressiveness? (This could be related to the Turing-equivalence of such a powerful type system, so that undecidability creeps in ... but I'm not qualified to explore this subject: volunteers are more than welcome). > Water is very, very flexible, but it's only when you remove most of this > flexibility by freezing it that you can build useful things from it. In your metaphor: no, it's sufficient (and often more useful) to build conduits (restrictions). You can even transform water into vapor, removing some restrictions and enforcing others. > > For example, redefining the semantics of some words (creating new contexts > > for old programs) it's probably possible to accommodate GC better than in > > traditional languages (without GC); it's even possible to imagine a metaprogram > > that takes as input a Forth program with direct memory management and gives as > > output a Forth program integrated with GC. 
Some research on static (compile-time) automatic memory management also exists. > > The safest way to support GC (by far) is to *design* it into the language at > the beginning. It's almost impossible to add it safely to a language, but > it's almost trivial to include it in a new language. The safest way to support GC, in my opinion, is to *demonstrate* that the algorithm is safe. With metaprogramming you could combine a specific program with a specific GC algorithm (better for this program, perhaps obtained from a more general GC via some specialization technique). Given the context (the program), it could be simpler to demonstrate the GC algorithm's safety against this program than against an unknown context (every possible program). The idea of using different GC algorithms at the same time for different programs is certainly not new. However, I'm not an expert on GC and the formal treatment of programs, so I'm unable to assess the possible difficulties of this procedure. I admit that, with your strategy, metaprogramming is more approachable by the "average" programmer, but this doesn't mean that the situation will never change (after all, freedom implies responsibility). > > However, I consider Forth more a conceptual model than a language in itself: > > it's defined more by "concepts" like words, stacks, dictionaries, text input > > buffer than by a particular set of primitives. > > Ok, but then I'll just change what I say should have metaprogramming > restrictions: "Forth + its primitive words would benefit from having > restrictions of the form...". I could speak about the CMoF (Conceptual Model of Forth) ....
:) About Forth primitives: they are not necessary if you give a meta-circular definition of the language. Carefully choose a subset of words necessary to define it, the language kernel (usually those words with a 1:1, or nearly 1:1, mapping to machine instructions), and then define the rest of the language in terms of this subset (it's not so difficult if you write, as usual in Forth, mostly functional words, without side effects). In this way the kernel can vary and you can adapt it to different hardware (not a fixed set of primitives), but the language (or the system) is always the same. Probably you know all that: if I remember correctly, this is common practice in the Forth community. This "equational" treatment is even more evident in Joy, where side effects are absent and definitions use the symbol "==". -- Massimo Dentico From water@tscnet.com Sat, 10 Jun 2000 13:08:19 -0700 Date: Sat, 10 Jun 2000 13:08:19 -0700 From: Brian Rice water@tscnet.com Subject: Slate has a concatenative syntax NOTE: this has been cross-posted to 3 mailing lists. Keep that in mind if you reply. Some of you fellows are from the Tunes list, so you have heard a little of this language. The language is pretty basic... objects are namespaces that sit in a graph, with the (unidirectional) connections representing either slots in the objects or names within namespaces, depending on your perspective. Slate expressions are just concatenations of lookup directives (i.e. name the next object in the chain and you're there). So there's a namespace stack to follow evaluation of expressions. There's exactly one special syntax operator, and it pops the namespaces off of the stack and causes evaluation. For our first round at creating an environment, the system will be much like a hierarchy, with "<" slots forming the basis for obtaining other objects. Note that this doesn't pop namespaces off the stack, it just causes the expression's path to double-back.
Executable code is possible because the object model supports inheritance of behavior and a data-flow model between slots. The initial system is sticking with ":" slots for assignment and "^" (read "result"). There is also a very powerful meta-object system being worked out, but this will also take time. There's an old (and broken) tutorial at http://www.tunes.org/~water/slate-tutorial.html, but at least the syntax of the example expressions has been properly updated. (Slate was applicative in syntax for quite a while.) I'm rebuilding the tutorial at http://diktuon.arrow.cx/list.php?ns=tutorial. Too much is changing too quickly for me to keep all my documentation properly consistent, so please bear with me. This is also why I have been so secretive about the development process, since it would wind up confusing the vast majority of people without good reason. Absolutely everything is at a level the user can work with. Even the "." operator will be available for modification once the compiler is ported to Slate in terms of meta-objects and such. Side note: I promised the language definition at least a week ago, and it was ready to go at that point, but I have had my hands full with my job and with getting the documentation system ready. The old Slate language documentation is at http://www.tunes.org/~water/slate-home.html but this will soon be replaced by the contents at http://diktuon.arrow.cx. There's a lot of new content there, and some re-written and revised Tunes work. The linking system is not online just yet, though, so it's not simple to navigate. The system will be re-usable for many projects. Anyway, I'm in the process of opening up this project, since we are reaching some very definite and practical specifications, instead of the general guidelines we had before.
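A rough model of the lookup-chain evaluation described above can be given in a few lines of Python. This is entirely my own construction for illustration (the objects and names are invented, and this is not how Slate is actually implemented): each name in an expression selects the next object in the chain, and a namespace stack records the path followed.

```python
# Hypothetical model of "expressions are concatenations of lookup
# directives": nested dicts stand in for objects-as-namespaces.
root = {
    "math": {"constants": {"pi": 3.14159}},
    "app":  {"version": "0.1"},
}

def lookup(expr, root):
    path = []                  # the namespace stack followed during evaluation
    node = root
    for name in expr.split():  # each token names the next object in the chain
        path.append(node)
        node = node[name]
    return node, path

value, path = lookup("math constants pi", root)
print(value)      # 3.14159
print(len(path))  # 3 namespaces were traversed
```

The one "special syntax operator" mentioned in the message would then correspond to popping this stack and triggering evaluation of whatever object the chain reached.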
Thanks, ~ From dem@tunes.org Sat, 10 Jun 2000 23:11:44 -0700 (PDT) Date: Sat, 10 Jun 2000 23:11:44 -0700 (PDT) From: Tril dem@tunes.org Subject: Preliminary Review database Right now I have a minimally functional Review database. You can browse it, and I can create user accounts which can only add new data. There is hardly any data in it yet, but if you want to contribute seriously, send me an email and I can set you up an account to edit it through the web. http://zope.tunes.org/ - Go here Frequently asked questions: It is using Zope (www.zope.org) and PostgreSQL (www.postgresql.org). HTML links are kept together, separately from any review text. This is so they can be automatically checked, and so that any reviewer can discuss any of the links. Other stuff like the main news page, Glossary, members list, and the rest of the site might be zope-ified later. I am working closely with water and coreyr to coordinate what is happening in that department. -- David Manifold http://bespin.dhs.org/~dem/ This message is placed in the public domain. From m.dentico@galactica.it Mon, 12 Jun 2000 18:57:55 +0200 Date: Mon, 12 Jun 2000 18:57:55 +0200 From: Massimo Dentico m.dentico@galactica.it Subject: Data mining for Tunes doc? (was: Preliminary Review database) David Manifold wrote: > > Right now I have a minimally functional Review database. You can browse > it, and I can create user accounts which can only add new data. There is > hardly any data in it yet, but if you want to contribute seriously, send > me an email and I can set you up an account to edit it through the web. > [...] > I am working closely with water and coreyr to coordinate what is happening > in that department. Brian Rice wrote: > [...] > Tunes members are more than welcome to contact Corey, the DB administrator > for the moment, and ask for access to manipulate nodes. I'm not leaving his > email address here. I suggest you use the #tunes IRC channel to contact him > and discuss it there. > [...]
Dear David and Brian, I very much appreciate your (and others') efforts to reorganize and improve the Tunes project documentation. Feel free to create an account for me and send me the details via e-mail, possibly along with the guidelines for the new document structure (sorry Brian, I'm afraid it's not practical for me to chat in English: I don't have real-time performance ... well, even my batch performance is not much better :-). I think, in this phase, I can easily help with the migration of the previous documentation. Just one concern: with this entire restructuring we break every external link. Is it absolutely impossible to keep a little compatibility with the old static structure? For example, by transforming the old pages into indexes to the new information, with links at every anchor instead of content? I would also like to propose an idea for your attention: the big problem with this unstructured textual information is creating and maintaining useful classifications and cross-links. With traditional DB techniques this requires massive human intervention, which is boring and time consuming. I think it's possible to use statistical and other machine learning methods to overcome (at least partially) these problems. The field seems quite well developed, with commercial applications already available. In fact there is at least one start-up which has grown greatly in recent years with these methods: Autonomy. However, I don't know how difficult these techniques are to implement, or whether it's practical to explore the subject at this moment: it is only a suggestion. Some references: CMU World Wide Knowledge Base (Web->KB) project - http://www.cs.cmu.edu/~webkb/ Bow: A Toolkit for Statistical Language Modeling, Text Retrieval, Classification and Clustering - http://www.cs.cmu.edu/~mccallum/bow/ Naive Bayes algorithm for learning to classify text - http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/naive-bayes.html Central Inductive Agency.
- http://www.csse.monash.edu.au/~lloyd/tildeMML/Intro/index.html Data Mining from spam - http://www.hpl.hp.com/personal/Tom_Fawcett/mining-spam/index.html Data Mining and CRM (Kurt Thearling) - http://www3.shore.net/~kht/index.htm About Autonomy: Wired 8.02: The Quest for Meaning - http://www.wired.com/wired/archive/8.02/autonomy.html Michael Lynch CEO Autonomy - http://industrystandard.net/people/display/0,1157,1889,00.html Autonomy - Knowledge Management and New Media Content Solutions - http://www.autonomy.com/ This product is free for personal use, but unfortunately only for Windows: Autonomy Kenjin - http://www.kenjin.com/ Best regards. -- Massimo Dentico From ignaz@navy.org Sun, 11 Jun 2000 13:08:03 +0100 Date: Sun, 11 Jun 2000 13:08:03 +0100 From: Ignaz Kellerer ignaz@navy.org Subject: Preliminary Review database On 11-Jun-00, Tril wrote: > http://zope.tunes.org/ - Go here > > Frequently asked questions: > It is using Zope (www.zope.org) and PostgreSQL (www.postgresql.org). Zope does have several big disadvantages: - it cannot be used offline efficiently. Staying online while editing anything is EXPENSIVE in most countries. - it does not support CSCW concepts in a reasonable way. Whenever two or more people are trying to edit a text at the same time, Zope discards the changes of all but one. - it is far from errorproof. Whenever a connection gets broken when trying to send the contents back, the missing part of the contents is clipped and cannot be recovered by itself. (it happened to me) I suggest we use CVS instead of Zope. CVS can be used offline; work is not lost even when more than one person is working on the same document at the same time; CVS is errorproof; and things can be automated with CVS, on server-side as well as on client-side. -- Ignaz Kellerer // _Ignaz@navy.org___/ http://www.navy.org/ \__ irc: Acrimon \X/ Amiga is alive!
\ /home/Ignaz@navy.org / From water@tscnet.com Tue, 13 Jun 2000 23:28:13 -0700 Date: Tue, 13 Jun 2000 23:28:13 -0700 From: Brian Rice water@tscnet.com Subject: Linear namespaces, monads, and Slate To Tunes members and Slate listeners all, I searched the mailing lists and couldn't find any clear references to this paper, so I thought I should mention it here. This paper outlines, in very extensive form, most of the potential benefits of linear naming (related closely to the notions of linear typing and linear logic). This is extremely close to the Slate philosophy of how namespaces are available, but I'd also like to be able to keep such an issue in Slate's MOP as much as possible. At any rate, the only differences between Slate and the notions of this paper are extremely trivial (only syntax and implementation-related), so this doesn't mean Yet Another shift in the Slate paradigm. Instead, it should serve as an effective tool for formalizing the Slate language, particularly the evaluation model that is currently being resolved to a final extent. The results of this should also extend into the meta-object system library. Anyway, I'm just as impatient as you all to see this language get out of the research stage and into implementation and testing and development, so please be patient because I'm making sure that everyone is getting what they need out of this (particularly Tunes HLL). Thanks for the interest everyone, ~ From water@tscnet.com Tue, 13 Jun 2000 23:30:39 -0700 Date: Tue, 13 Jun 2000 23:30:39 -0700 From: Brian Rice water@tscnet.com Subject: Linear namespaces, monads, and Slate At 11:28 PM 6/13/00 -0700, Brian Rice wrote: [...] Oops I forgot the URL: ftp://publications.ai.mit.edu/ai-publications/1500-1999/AITR-1627.ps That's 156 pages, not just light reading, although anyone who can read SEXP will find it very readable. Thanks again, ~ From youlian@intelligenesis.net Wed, 14 Jun 2000 03:07:20 -0400 Date: Wed, 14 Jun 2000 03:07:20 -0400 From: Youlian Troyanov youlian@intelligenesis.net Subject: Linear namespaces, monads, and Slate Yep, awesome reading. After I read it I was so impressed by Alan's ideas about state that I even acquired (but have not read yet, because my time management skills grossly suck) Mike Dixon's PhD thesis, mentioned in the paper. That paper was easy. I wish all the theory backlog I have to read were like that. Unfortunately, the link below doesn't work and I don't remember where I downloaded it from. Keep up the good work, Water. You r da man.
Youlian -----Original Message----- From: owner-tunes@bespin.dhs.org [mailto:owner-tunes@bespin.dhs.org]On Behalf Of Brian Rice Sent: Wednesday, June 14, 2000 2:31 AM To: slate@tunes.org; tunes@tunes.org; review@tunes.org Subject: Re: Linear namespaces, monads, and Slate At 11:28 PM 6/13/00 -0700, Brian Rice wrote: [...] Oops I forgot the URL: ftp://publications.ai.mit.edu/ai-publications/1500-1999/AITR-1627.ps [...]
Thanks again, ~ From m.dentico@galactica.it Wed, 14 Jun 2000 15:58:25 +0200 Date: Wed, 14 Jun 2000 15:58:25 +0200 From: Massimo Dentico m.dentico@galactica.it Subject: Linear namespaces, monads, and Slate Youlian Troyanov wrote: > [...] > Unfortunately, the link below doesn't work and I don't remember from where > I have downloaded it. > [...] It works for me. -- Massimo Dentico From dem@tunes.org Wed, 14 Jun 2000 14:04:56 -0700 (PDT) Date: Wed, 14 Jun 2000 14:04:56 -0700 (PDT) From: Tril dem@tunes.org Subject: Preliminary Review database On Sun, 11 Jun 2000, Ignaz Kellerer wrote: > Am 11-Jun-00 schrieb Tril: > > http://zope.tunes.org/ - Go here > > > > Frequently asked questions: > > It is using Zope (www.zope.org) and PostgreSQL (www.postgresql.org). > > Zope does have several big disadvantages: > > - it cannot be used offline efficiently. > Staying online while editing anything is EXPENSIVE in most > countries. While this is certainly true, it is not beyond comprehension that we could set up some kind of alternate input system where you could batch-enter multiple reviews at one time. The database is stored entirely in SQL, so if you had some way to output SQL (either if you wrote it manually, or used some script) you could upload a text file that would commit your changes all at once. Even without that, I think we could live with volunteer editors who can work on-line. > - it does not support CSCW concepts in a reasonable way. > Whenever two or more people are trying to edit a text at the same > time, Zope discards the changes of anyone but one. This should not be a problem. The main "text" being edited is the reviews. But I have made those personal, so you can only edit your own reviews. The rest is URL's. I don't see much contention for correcting the same URL at the same time by different people. > - it is far from errorproof. 
> Whenever a connection gets broken when trying to send the contents > back, the missing part of the contents is clipped and cannot be > recovered by itself. (it happened to me) You mean if your browser is sending some changes to Zope, and it gets cut off? I can believe that would happen... although I'm not sure what happens if an SQL query gets aborted in the middle; it probably gives an error rather than saving a partial data field. We will have backups of the database in the near future, but it is true that if you foresee this happening a lot it would probably be better to avoid editing over a bad connection. > I suggest we use CVS instead of Zope. > CVS can be used offline; work is not lost even when more than one > person is working on the same document at the same time; CVS is > errorproof; and things can be automated with CVS, on server-side as > well as on client-side. We already used CVS for Review. Now, the Zope database hasn't replaced the flat HTML in CVS yet, nor am I going to force it to, but I assume there would be some use for being able to have multiple views of data, custom queries, and so on. I can't see CVS as the solution, but perhaps some other kind of version management in its place. -- David Manifold http://bespin.dhs.org/~dem/ This message is placed in the public domain. From soma@apex.net.au Fri, 16 Jun 2000 11:24:59 +1000 Date: Fri, 16 Jun 2000 11:24:59 +1000 From: Soma soma@apex.net.au Subject: Hello again people Well, I must say I rather liked the look of the arrow philosophy. If I may say, it's almost exactly what I have been dreaming up... When I tried to classify what I was inventing I too located it somewhere between a programming environment and artificial intelligence. I feel that the development of such a system would really make an impact on the usefulness of artificial intelligence systems. Incidentally, I came up with my ideas mostly by picturing how I structure information in my own brain and thought processes.
I suppose the practical aspects of utilising one's own brain are what we are trying to model here. As we have seen with Sony's Aibo, artificial intelligence does not require language processing; however, if we are ever going to make computers universally useful we need them to be able to truly understand our ways of communication. Neural nets and fuzzy logic are primarily tools of recognition, but until we can make computers understand language as we use it, they will remain unreliable and inscrutable. Just by way of introduction... From soma@apex.net.au Fri, 16 Jun 2000 12:30:13 +1000 Date: Fri, 16 Jun 2000 12:30:13 +1000 From: Soma soma@apex.net.au Subject: On fleas on fleas Wheels within Wheels, Fleas on the backs of Fleas On the Backs of Fleas .... On the Backs of Dogs.
Reflection is the theme of such classics as Alice in Wonderland, The Matrix, ExistenZ and The Thirteenth Floor. It's interesting how many such stories are about the place recently. The whole classification of cyberpunk is the cultural meme attached to the current cutting edge of this area (in fiction anyway). If I may observe, psychedelic drugs are a major influence on this scene, because of the way they facilitate reflection through transpersonal dis-identification. To go too far: though the systems upon which human thought is built (the body and brain) are different from a computational simulation of such, they are identical on the level of what they do. Information is only in the ontological structure of a phenomenon; no matter where you find it, the medium is not the message. There is no such thing as two entities sharing without a medium of exchange, a protocol or, in other words, an interface. And finally, nothing in this universe is different in this respect. (yes, too far perhaps) (is this OT? Hmm..) Objects such as hypotheses and rocks are different only in two respects: rocks don't have a very big vocabulary or autonomy, and hypotheses (and ghosts and gods) only function through their believers. Though we can only know the world through language, can we say that this is what the world is made of? When you think about it that way, it virtually is, because we cannot see it in any other way. How do you distinguish between simulacra and 'reality'? Is there any real difference? If in every respect a simulation is accurate, is it not a world in a bubble? Isn't this the point of simulation? Therefore, AI is virtually just plain I, because we too are MADE from something (artifice). I suppose the difference is we are made by something which uses trial and error. However, computer science is working on how to allow computers to do just that.
If you follow my thought process, then you must agree that at some point any given system evolves from some beginning which is above and outside itself, reaching in and giving the system a bootstrap of some sort. I'm not saying 'god', but to use a cybernetic metaphor, there is probably an autonomous meta-program out there somewhere (outside of our universe) which goes about the place planting universe seeds. In fact it's probably self-replicating, and the whole purpose of all of it is to make things more and more and more and more and more and more ... um... Interesting? Complex? Diverse? Every entity, be it tangible or intangible (temporal or spatial), contains at its core the same seed, the same kernel, and to bring it back right down to the basic level, it is 'on'/'yang'/+ (etc.) and 'off'/'yin'/- (etc.), in a constant state of flux, switching at some phenomenal speed, really at infinite speed (and therefore you couldn't really say it was either one, but rather that it was both). This is getting very Taoist, isn't it? Anyway, this takes us back to something more like the topic: in the system I have imagined, all symbols are bound to their opposites and their aliases (synonyms and antonyms), and really they are a single unit, because you cannot consider a thing without considering what the thing is not. When someone says 'long', this doesn't make any sense unless it means 'longer than something else which is shorter', which is a reference loop. In my design I am using linking pairs to allow transparent dynamic loading, as well as dependency lists to manage the mapping of data from slow storage to memory. Each linking pair also has a small flag which indicates the status of the link, be it online (in memory) or offline (on disk). The other thing I want my system to implement is orthogonal persistence.
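The linking-pair mechanism just described (a status flag per link, with transparent loading from slow storage) might be sketched like this. This is a hypothetical Python illustration of the idea, not the actual C design being built, and all the names are invented:

```python
# Hypothetical sketch: a link records whether its target is online
# (in memory) or offline (on disk); following an offline link loads
# the target transparently, as the text describes.
class Link:
    def __init__(self, key, store):
        self.key = key
        self.store = store   # slow storage (a dict standing in for disk)
        self.online = False  # the "small flag" indicating link status
        self.target = None

    def follow(self):
        if not self.online:  # transparent demand-loading
            self.target = self.store[self.key]
            self.online = True
        return self.target

disk = {"symbol:long": {"antonym": "short"}}
link = Link("symbol:long", disk)
print(link.online)    # False: target still on disk
link.follow()
print(link.online)    # True: target now in memory
```

A caller always goes through follow(), so whether the target was in memory or on disk is invisible to it, which is the point of the flag.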
I figured the best way to do it would be some kind of queue server/router, making all communication between concurrent processes operate through it, and having it dump the data to disk at an appropriate time interval. What I am trying to achieve is a transparently interconnectable and hot-pluggable communications architecture that is always (within the time frame of the commit interval) able to be loaded right back into memory after a power outage or whatever, allowing very fast reboots and the end of the 'save' and 'commit' commands (because saving is irrelevant to the creative process, and impedes it when users are totally engrossed and forget about it, or when electrical systems fail). At present I have only just started building it. I am writing in C, but perhaps someone could suggest another language which will give at least 50 percent of the performance of C (at present I am short on processing resources). C is okay, but translation from an object model into a procedural model is tedious. I want someone to convince me why I would want to write in some other language. You guys (well, many of you) have played with this area: what do you use? From dem@tunes.org Thu, 15 Jun 2000 22:18:50 -0700 (PDT) Date: Thu, 15 Jun 2000 22:18:50 -0700 (PDT) From: Tril dem@tunes.org Subject: New mailing list system Today, I changed these things on the tunes server: Debian Slink OS -> Potato Sendmail Mail server -> Postfix Majordomo List Manager -> Sympa The good news is that mailing list delivery should go much quicker. The bad news is that the web archives are disabled. The mail is being archived, and you can retrieve it by sending mail to sympa@tunes.org with the body of "INDEX listname". Then send another mail with a body of "GET filename" to download an archive. This will change sometime, and I do hope to have a web archive that contains both old and new archives, when I figure out the best way to do it. The old archives will remain up...
I'm not sure it will stay this way, either. I'm just trying Sympa out, at Fare's suggestion. All the list subscribe/unsubscribe commands should still work, since the address majordomo@tunes.org is now an alias to sympa, and they are comaptible for subscribe and unsubscribe syntax. The web form, http://lists.tunes.org/cgi-bin/list-control, also seems to be working. Anyway, flames to off the list please. -- David Manifold http://bespin.dhs.org/~dem/ This message is placed in the public domain. From youlian@intelligenesis.net Fri, 16 Jun 2000 02:25:40 -0400 Date: Fri, 16 Jun 2000 02:25:40 -0400 From: Youlian Troyanov youlian@intelligenesis.net Subject: On fleas on fleas Sounds cool. But it's still unclear (at least to me) what your code will do. Could you elaborate ? Thanx, Youlian -----Original Message----- From: root@draal.apex.net.au [mailto:root@draal.apex.net.au]On Behalf Of Soma Sent: Thursday, June 15, 2000 10:30 PM Subject: On fleas on fleas Wheels within Wheels, Fleas on the backs of Fleas On the Backs of Fleas .... On the Backs of Dogs. Reflection is the theme of such classics as Alice in Wonderland, The Matrix, ExistenZ and The Thirteenth Floor. It's interesting how many such stories are about the place recently. The whole classification of cyberpunk is the cultural meme attached to the current cutting edge of this area (in fiction anyway) If I may observe, psychedelic drugs are a major influence on this scene, because of the way they facilitate reflection through transpersonal dis-identification. To go too far: Though the systems upon which human thought is built(the body and brain), are different to a computational simulation of such, they are identical on the level of what they do. Information is only in the ontological structure of a phenomena, no matter where you find it, the medium is not the message. There is no such thing as two entities sharing without a medium of exchange, a protocol or in other words, an interface. 
And finally, nothing in this universe is different in this respect. (yes, too far perhaps) (is this OT? Hmm..) Though objects such as hypotheses and rocks are different only in two respects: Rocks don't have a very big vocabulary or autonomy, and hypotheses (and ghosts and gods) only function through their believers. Though we can only know the world through language, can we say that this is what the world made of? When you think about it that way, it virtually is because we cannot see it in any other way. How do you distinguish between simulcra and 'reality'? Is there any real difference? If in every respect a simulation is accurate, is it not a world in a bubble? Isn't this the point of simulation? Therefore, AI is virtually just plain I, because we too are MADE from something (artifice). I suppose the difference is we are made by something which uses trial and error. However, computer science is working on how to allow computers to do just that. If you follow my thought process, then you must agree that at some point any given system evolves from some beginning which is above and outside itself reaching in and giving the system a bootstrap of some sort. I'm not saying 'god', but to use a cybernetic metaphor, there is probably an autonomous meta-program out there somewhere (outside of our universe) which goes about the place planting universe seeds. In fact it's probably self-replicating, and the whole purpose of all of it is to make things more and more and more and more and more and more ... um... Interesting? Complex? Diverse? Every entity, be it tangible or intangible (temporal or spatial), contains at its core the same seed, the same kernel, and to bring it back right down to the basic level, it is 'on'/'yang'/+ (etc) and 'off'/'yin/- (etc.), in a constant state of flux, switching at some phenomenal speed, really at infinite speed (and therefore you couldn't really say it was either or, but rather that it was both.) This is getting very Taoist isn't it? 
Anyway, this takes us back to something more like the topic: in the system I have imagined, all symbols are bound to their opposites and their aliases (synonyms and antonyms), and really they are a single unit, because you cannot consider a thing without considering what the thing is not. When someone says 'long', this doesn't make any sense unless it means 'longer than something else which is smaller', which is a reference loop. In my design I am using linking pairs to allow transparent dynamic loading, as well as dependency lists to manage the mapping of data from slow storage to memory. Each linking pair also has a small flag which indicates the status of the link, be it online (in memory) or offline (on disk). The other thing I want my system to implement is orthogonal persistence. I figured the best way to do it would be some kind of queue server/router: make all communication between concurrent processes operate through it, and have it dump the data to disk at an appropriate time interval. What I am trying to achieve is a transparently interconnectable and hot-pluggable communications architecture that is always (in the time frame of the commit interval) able to be loaded right back into memory after a power outage or whatever, allowing very fast reboots and the end of the 'save' and 'commit' commands (saving is irrelevant to the creative process, and impedes it when users are totally engrossed and forget about it, or when electrical systems fail). At present I have only just started building it. I am writing in C, but perhaps someone could suggest another language which will give at least 50 percent of the performance of C (at present I am short on processing resources). C is okay, but translation from an object model into a procedural model is tedious. I want someone to convince me why I would want to write in some other language. You guys (well, many of you) have played with this area, what do you use? 
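The queue-server/router idea above can be sketched concretely. What follows is a minimal Python sketch, not Soma's actual design: the names (Switch, send, commit, recover) and the message-count commit trigger are my inventions (the post describes a time interval), but it shows the two properties claimed: many changes to one datum inside an interval cost only one disk write, and state can be loaded right back into memory after a power outage.

```python
# Hypothetical sketch of the "switch" persistence idea: every inter-process
# change notification passes through one exchange, which snapshots its state
# to disk at a commit interval. All names here are illustrative inventions.
import json
import os
import tempfile

class Switch:
    def __init__(self, path, commit_interval=10):
        self.path = path
        self.commit_interval = commit_interval  # messages between disk commits
        self.state = {}        # last known value per symbol (coalesced)
        self.since_commit = 0

    def send(self, symbol, value):
        # Route a change notification. Only the latest value per symbol is
        # kept, so a process may change one datum any number of times within
        # an interval and it still costs a single write at commit time.
        self.state[symbol] = value
        self.since_commit += 1
        if self.since_commit >= self.commit_interval:
            self.commit()

    def commit(self):
        # Atomic snapshot: write to a temp file, then rename over the old
        # one, so a crash mid-commit leaves the previous snapshot intact.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.path)
        self.since_commit = 0

    @classmethod
    def recover(cls, path, commit_interval=10):
        # "Very fast reboot": reload the last committed snapshot.
        sw = cls(path, commit_interval)
        if os.path.exists(path):
            with open(path) as f:
                sw.state = json.load(f)
        return sw
```

The atomic rename is what makes the crash story safe: at any instant the on-disk file is a complete snapshot, either the previous one or the new one, never a partial write.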
From soma@apex.net.au Fri, 16 Jun 2000 18:29:53 +1000 Date: Fri, 16 Jun 2000 18:29:53 +1000 From: Soma soma@apex.net.au Subject: On fleas on fleas Youlian Troyanov wrote: > Sounds cool. But it's still unclear (at least to me) what your code will > do. Could you elaborate ? > > Thanx, > Youlian > I'm not sure where to start... I'll start by spouting a bunch of buzzwords: I'm calling it Simulcra (for some reason I feel it wants a 'The' in front of it). It will be a system similar in intention to the system described in 'The Arrow Philosophy' (I recommend reading it... I haven't finished reading it yet, but it's really good). It uses tree-graph type linking to join symbols with their contexts (in truth every idea exists inside the context of another idea, except for the idea of the idea, which is self-referential (reflective)). This data structure can contain or reference every type of data the system can use: it doubles as a state machine (grammar recognition), it stores user files, it will store the system itself, and it will store language modules, allowing translations between languages (both human and computer - and whatever else people think of). It is not intended to be AI, but rather a cybernetic assistant, who understands your idiosyncrasies because it adds your set of aliases for things to its lexicon, and theoretically could enable anyone to program. The other thing the system will do is orthogonal persistence. This will be made possible through a communication system which will use some kind of tag structure (perhaps addresses to symbols referenced), and it can transparently add other pipes (I will call them pipes to confuse everyone) that may be operating over ethernet or whatever you want, transparently to the programs. Persistence will be provided by a central exchange system (we will call it a switch) which performs two functions: one is to regularly write its buffer to disk, in case of system failure. 
The other function of the switch is to finally send the data to its desired storage location (file). Actually it does these things at the same time. Apple's iBook has a function where it dumps memory to disk, and then upon power-up it dumps the data back into memory and starts up almost instantly. I wish to provide such a facility transparently to the user. To enable this, first all temporary data is periodically committed to disk, and likewise the process list of the executive will be dumped periodically too. To allow the storage of temporary data, all running processes' variable and memory manipulations are tied together by the previously mentioned linked web type structure associated with each process, where each modified module sends a message to the switch notifying it of the changes, and at the interval when the switch commits data, it adds the changes to its list. When a process changes one piece of data a number of times during the interval, it doesn't matter how many times: only the state as it stands at the commit interval is dumped. Anyway, that's what I'm thinking at the moment. There's an interesting OS project that is working on making persistence as efficient as possible, but I am not too concerned about this. I will probably include profiling in the system too (I am quite obsessed with efficiency of code, actually). The way I will do this is by using an incremental optimisation function which passes over each piece of executed machine code, looking for redundancies and waste, and though it might start out inefficient, the more code runs the more efficient the system will make it. (I got that idea from the Crusoe processor.) It would be set to do it every ten or twenty executions or something like that. The other thing about this OS is that it will be modular right down to the level of individual functions, where only one name is associated with one process, though the process may call on others and thereby abstract them into itself. 
The purpose of this is to eliminate all redundant code. I am presently CVSing ExoPC, and the idea of dumping the layering of an OS appeals to me; though I haven't witnessed it or looked at much in the way of benchmarks, it makes sense to me. Administration of data is the most wasteful activity that a system does - I will try to make all such parts of my system as svelte as possible. One of the desired goals of my system is to dump every other system of database storage and manipulation in favour of a single set of functions with a broader application: hence every resource the computer can access is tied together by the same system of DBM, so that I only have one code set to optimise, rather than ten or twenty. (Explain, you say? :-) Well, the average OS has a mechanism for storing its executive process information, and a separate DBMS for the filesystem, and yet another one for the dynamic loader, and yet another one for compiler state machines, and yet another one for ... um. You get the picture anyway. I am not saying I can do it any faster than anyone else determined to do it fast, rather that I will make it as simple and broadly applicable as possible so as to reduce the complexity and size of the system. Simplicity also pays off in maintenance costs: it means there is much less you need to maintain. Simplicity and elegance are the primary goals of this system. Anyway, that takes me back to an unanswered question. To quote myself: > > At present I have only just started building it, I am writing in C, but > perhaps someone could suggest another language which will give at least > 50 percent of the performance of C (at present I am short on processing > resources). C is okay, but translation from an object model into a > procedural model is tedious. > > I want someone to convince me why I would want to write in some other > language. You guys (well, many of you) have played with this area, what > do you use? 
Um, I can't remember what it was about, but Pliant looked interesting. The other thing is that I may soon be able to burn processing time (when I finally get a job, and a decent computer), and then I may not care. This is just about the definitive description at the moment... I really should write a paper on it or something. From ignaz@navy.org Sat, 17 Jun 2000 07:07:58 +0100 Date: Sat, 17 Jun 2000 07:07:58 +0100 From: Ignaz Kellerer ignaz@navy.org Subject: On fleas on fleas On 16-Jun-00, Soma wrote: > And finally, nothing in this universe is different in this respect. > (yes, too far perhaps) (is this OT? Hmm..) Though objects such as > hypotheses and rocks are different only in two respects: Rocks don't > have a very big vocabulary or autonomy, and hypotheses (and ghosts > and gods) only function through their believers. No, not ghosts and gods. If a god (or gods) exist, he is not dependent on any believers. He can do without them as well. But most (if not all) beliefs in ghosts or gods are hypotheses, that is right. > Though we can only know the world through language, can we say that > this is what the world made of? When you think about it that way, it > virtually is because we cannot see it in any other way. I think we can. There are surely things we do feel but cannot tell about; but maybe this is simply because we are not skilled enough in our language? :-) Even if you see "seeing", "smelling", etc. as a kind of language, I think we can enhance our view of the world by thinking or feeling. > How do you distinguish between simulcra and 'reality'? Is there any > real difference? If in every respect a simulation is accurate, is it > not a world in a bubble? Isn't this the point of simulation? Yes. And you can populate a simulated world with characters. They don't really act on their own, since all we can do is either automate them (force them to act, making them robots), or steer them ourselves. 
And if you steer such a character, you can feel that: - Some of the simulated worlds are "closed". You can touch the world and change its properties or the actions taken in the world, but the characters in it cannot see that anything exists outside their world, and likewise they cannot see that you steer anything in it, although you do. - A few of the simulated worlds are "open" and include us as the simulated world's gods. In those worlds, the characters know that something exists that made their world and steers it. Depending on the world, it may be either difficult for a character to gather that information, or obvious. > Therefore, AI is virtually just plain I, No. AI is always forcing action, since AI is a program. AI actions are "born" to a character, and the character neither can steer AI-forced actions nor can he think of steering them. Just try to steer your heartbeat! Or for a more complex example, try to hold your breath until your limbs are blue! Our "breath-holding AI" activates itself whenever we try to hold our breath for a given time. Remember, AI is just a program and has nothing to do with "Intelligence", whatever that is. AI is a part of the human body, but not "I". I CAN make decisions. Or do you tell me that AI forced you into working with TUNES? > Therefore, AI is virtually just plain I, because we too are MADE > from something (artifice). I suppose the difference is we are made > by something which uses trial and error. However, computer science > is working on how to allow computers to do just that. Very unlikely. There are a lot of parts in us that would not have evolved if we were just a product of evolution, or just trial and error. Is playing and enjoying music essential for staying alive? Would we pay any attention to the birds' songs or the scent and beauty of flowers if we were made by evolution? Or our ability to do reflection? 
> If you follow my thought process, then you must agree that at some > point any given system evolves from some beginning which is above > and outside itself reaching in and giving the system a bootstrap of > some sort. > > I'm not saying 'god', but to use a cybernetic metaphor, there is > probably an autonomous meta-program out there somewhere (outside of > our universe) which goes about the place planting universe seeds. In > fact it's probably self-replicating, and the whole purpose of all of > it is to make things more and more and more and more and more and > more ... um... Interesting? Complex? Diverse? > > Every entity, be it tangible or intangible (temporal or spatial), > contains at its core the same seed, the same kernel, and to bring it > back right down to the basic level, it is 'on'/'yang'/+ (etc.) and > 'off'/'yin'/- (etc.), in a constant state of flux, switching at some > phenomenal speed, really at infinite speed (and therefore you > couldn't really say it was either/or, but rather that it was both.) I don't like that. Mathematically, switching between "on" and "off" at a phenomenal speed is the same as a value between zero and one, maybe changing with time; but I prefer to think of a real number between 0.0 and 1.0. It seems more basic to me than switching between 0 and 1 at a phenomenal speed. And it is easier to calculate with. And a world-view surely has dimensions of other types than the one you describe. The "time" dimension, for a very exotic example, is a continuous dimension that has markers which move forward (you cannot go back in time). Most dimensions are non-continuous. And the time dimension seems to have a beginning and no end. 
> Anyway, this takes us back to something more like the topic: In the > system I have imagined, all symbols are bound to their opposites and > their aliases(synonyms and antonyms), and really they are a single > unit, because you cannot consider a thing without considering what > the thing is not. Which synonym, and which antonym? A symbol may have different synonyms or antonyms, depending on which dimension of the system's view you decide to look at. Most people say "short" is an antonym to "long", but another valid view is that "broad" or "thick" is an antonym to "long", depending on which view of the symbol you look at. Most symbols do have quite a number of antonyms, which aren't synonyms to each other; or they may have quite a number of synonyms, which aren't synonyms to each other, because they emerge from different views of the symbol. -- Ignaz Kellerer // _Ignaz@navy.org___/ http://www.navy.org/ \__ irc: Acrimon \X/ Amiga is alive! \ /home/Ignaz@navy.org / From fare@tunes.org Tue, 20 Jun 2000 04:38:21 +0200 Date: Tue, 20 Jun 2000 04:38:21 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: TUNES development startup Dear Tunespeople, I'm in contact with various people to found a startup company that would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. We're particularly trying to define 1) a business plan. 2) a development schedule. The idea would be to raise funds thanks to an early prototype, so that the path to such a prototype is important, and advice is sought. Once funding is found, we can afford hiring some people almost full time on the project with a decent salary... Yours freely, [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] Brain, n.: The apparatus with which we think that we think. 
-- Ambrose Bierce, "The Devil's Dictionary" From alaric@alaric-williams.com Tue, 20 Jun 2000 04:09:26 +0100 (BST) Date: Tue, 20 Jun 2000 04:09:26 +0100 (BST) From: Al Williams alaric@alaric-williams.com Subject: TUNES development startup On Tue, 20 Jun 2000, Francois-Rene Rideau wrote: > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. Wow! That's brilliant news. Congratulations! > We're particularly trying to define > 1) a business plan. > 2) a development schedule. > The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. Perhaps one of the areas TUNES has been weak in so far (IMHO) is explaining the kinds of markets that would initially accept it. Fare's original mention of a sample application, the media database (was it CDs or videos? I remember not) is insufficient for this task; no "end user" would feel a driving urge to install something as vague as a "reflective programming system", yet it would not be doing TUNES justice to sell it as a CD database! I would say that you should target bespoke software developers, who are highly technically competent, and are looking for a system that will help them deliver reliable, efficient, powerful software systems on budget. In that market, broad platform support is a definite plus, since they will want to be able to produce software that can run on whatever cranky hardware their client forces them to use. Ease of installation and administration will also be plus points, since they will want to be able to easily put a TUNES "runtime library" onto the client's machine, followed by whatever files define their product in terms of the core TUNES system. It'd be nice if these were just a "TUNES runtime" binary, a "TUNES core library" file or directory, and then a file containing the application. 
PHP3 applications, for example, are a bitch to configure, usually requiring customisation of httpd.conf, php3.ini, and fields in a per-application configuration file for base paths, base URLs, administrator email addresses, SQL server login details, the creation of said SQL database and its loading with initial data... Ideally, a TUNES application should be installable onto a blank system by putting a core TUNES package in one directory, maybe setting up its base path in some kind of "tunes.conf" file or registry key, and then dumping an easily created "snapshot" of the application into another directory and creating a script with the appropriate paths in it. ABW -- http://RF.Cx/ http://www.alaric-williams.com/ http://www.warhead.org.uk/ alaric@alaric-williams.com ph3@r mI sk1llz l3st I 0wn j00 From youlian@intelligenesis.net Tue, 20 Jun 2000 12:26:29 -0400 Date: Tue, 20 Jun 2000 12:26:29 -0400 From: Youlian Troyanov youlian@intelligenesis.net Subject: TUNES development startup Congrats. Let's hope you attract as many cool people from here as you can, Brian for example. Gee, it would be a dream job. Pity I haven't written even a single line in Lisp. Could I still be the coffee boy? Best, Youlian > -----Original Message----- > From: tunes-owner@tunes.org [mailto:tunes-owner@tunes.org]On Behalf Of > Francois-Rene Rideau > Sent: Monday, June 19, 2000 10:38 PM > To: tunes@tunes.org > Subject: TUNES development startup > > > Dear Tunespeople, > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. > We're particularly trying to define > 1) a business plan. > 2) a development schedule. > The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. > Once funding is found, we can afford hiring some people almost full time > on the project with a decent salary... 
> > Yours freely, > > [ François-René ÐVB Rideau | Reflection&Cybernethics | > http://fare.tunes.org ] > [ TUNES project for a Free Reflective Computing System | http://tunes.org ] Brain, n.: The apparatus with which we think that we think. -- Ambrose Bierce, "The Devil's Dictionary" From errormag@everest.netidea.com Tue, 20 Jun 2000 09:58:25 -0700 (PDT) Date: Tue, 20 Jun 2000 09:58:25 -0700 (PDT) From: Shayne Kasai errormag@everest.netidea.com Subject: TUNES development startup Congratulations, I think it's about time this project gains some momentum. I have a suggestion: have a look at funding from the government. I do know that the provincial governments of Canada (well, British Columbia at least) have special technology funding, so you may want to look into that. I also know that Telus (the phone company) is offering a load of money for community-based projects. Wherever you look there are companies/governments willing to dump your tax dollars into technology projects :-) have a look at them. -- Shayne > > > -----Original Message----- > > From: tunes-owner@tunes.org [mailto:tunes-owner@tunes.org]On Behalf Of > > Francois-Rene Rideau > > Sent: Monday, June 19, 2000 10:38 PM > > To: tunes@tunes.org > > Subject: TUNES development startup > > > > > > Dear Tunespeople, > > I'm in contact with various people to found a startup company that > > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. > > We're particularly trying to define > > 1) a business plan. > > 2) a development schedule. > > The idea would be to raise funds thanks to an early prototype, > > so that the path to such a prototype is important, and advice is sought. > > Once funding is found, we can afford hiring some people almost full time > > on the project with a decent salary... 
> > > > Yours freely, > > > > [ François-René ÐVB Rideau | Reflection&Cybernethics | > > http://fare.tunes.org ] > > [ TUNES project for a Free Reflective Computing System | > http://tunes.org ] > Brain, n.: > The apparatus with which we think that we think. > -- Ambrose Bierce, "The Devil's Dictionary" From kend0@earthlink.net Tue, 20 Jun 2000 17:04:11 -0700 Date: Tue, 20 Jun 2000 17:04:11 -0700 From: Kenneth Dickey kend0@earthlink.net Subject: Metaprogramming & language design wrote: > > Suppose you want to find all calls to some routine. An integrated > > development environment like Smalltalk allows you to do this search via > > metaprogramming. I agree that this is a good thing. On the other hand, the > > compiler in Smalltalk can be modified (each class can use a different > > Compiler class to compile the class's methods). That is probably a bad > > thing, because it means the code that searches through the library for all > > calls to some routine might not work any more. Or maybe the browsers stop > > working, or exception handling breaks because of assumptions about the > > compiler, etc. If you define a clean, generic protocol for the Compiler > > class you can avoid some of these problems, but this is just language > > design! > > > > I'm in favor of lots and lots of metaprogramming. I'm *not* in favor of > > leaving the system so open to fundamental changes that you can no longer > > trust a single line of code to do what you expect it to. Some kinds of > > metaprogramming are this severe. > Two quick observations. [1] Developing development systems can be done by holding the 'outer' development environment to be safely sealed off while building the 'inner'/new environment. E.g. see Matthew Flatt et al., "Programming Languages as Operating Systems (or Revenge of the Son of the Lisp Machine)", ICFP '99/SIGPLAN Notices 34(9), September 1999. 
In the PLT/MzScheme environment it is possible to contain 'event spaces', 'io spaces' and 'custodians' (per-process resource control). [Note that in this case the compiler and object model were invariant, but it is a good approach.] I.e. events, resources, and i/o ports can be safely cleaned up after some disaster in the inner/test context (e.g. a new IDE) without affecting the outer/stable IDE. [2] Dave Simmons of QKS introduced scoped symbols into SmalltalkAgents. I.e. when dispatching occurs, one dispatches on the selector, the object/arguments, AND the symbol-space/scope of the selector. Dave says (private communication) that this allows him to run multiple versions of classes/methods concurrently within a single Smalltalk workspace. [Each 'override set' has its own scoped version/symbol space.] This allows, e.g., multiple versions of a method to be used without 'confusing' the objects using them ('older' objects use the older version, newer objects the newer). The crux in both cases is to be crisp on what is to be held invariant, and to have a mechanism to 'seal off' some design/runtime aspect to preserve a 'sanity invariant'. $0.02, -KenD From core@suntech.fr Wed, 21 Jun 2000 08:17:19 +0200 Date: Wed, 21 Jun 2000 08:17:19 +0200 From: Emmanuel Marty core@suntech.fr Subject: TUNES development startup Francois-Rene Rideau wrote: > Dear Tunespeople, > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. > We're particularly trying to define > 1) a business plan. > 2) a development schedule. > The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. > Once funding is found, we can afford hiring some people almost full time > on the project with a decent salary... How much funding are you looking at, and for how long a period? What would be the exact short-term goal of the startup? 
(I know you will answer all of that in the BP, but you must have some sort of idea). Take notice that we're about to enter the 'dead period' - i.e. the summer; after the end of June, most VCs will probably not handle business plans until September again. I suppose you need that much time to wrap it up anyway.. There are quite a few good, imaginative VCs who understand technology on the Parisian scene; if you set up a real project and team, you should make it. -- Emmanuel From idiot@slack.net Thu, 22 Jun 2000 17:51:38 -0400 Date: Thu, 22 Jun 2000 17:51:38 -0400 From: Matt Miller idiot@slack.net Subject: Meta-text. Having some free time and a need to create, I have again been looking at the Tunes info. I am interested in the meta-text aspect of the Review subproject. This seems like a fairly straightforward, yet challenging design project. Has there been any discussion which is not captured on the pages as to how to proceed? Should I be subscribed to/sending this to the review subproject? (I think I used to be subscribed, but I haven't seen any traffic) Thanks- Matt From alaric@alaric-williams.com Thu, 22 Jun 2000 23:29:13 +0100 (BST) Date: Thu, 22 Jun 2000 23:29:13 +0100 (BST) From: Al Williams alaric@alaric-williams.com Subject: Metaprogramming for the Masses (Crossposted) WARNING: crossposted. Please do not reply to this email. My final year university project report is complete. It's a design for a new architecture of programming language, heavily based around "metaprogramming", which in this case means writing in a language which uses itself as a macro language... 
http://love.warhead.org.uk/~alaric/STFFAP.pdf http://love.warhead.org.uk/~alaric/STFFAP.ps -- http://RF.Cx/ http://www.alaric-williams.com/ http://www.warhead.org.uk/ alaric@alaric-williams.com ph3@r mI sk1llz l3st I 0wn j00 From m.dentico@galactica.it Fri, 23 Jun 2000 19:56:53 +0200 Date: Fri, 23 Jun 2000 19:56:53 +0200 From: Massimo Dentico m.dentico@galactica.it Subject: TUNES development startup Francois-Rene Rideau wrote: > > Dear Tunespeople, > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. > We're particularly trying to define > 1) a business plan. > 2) a development schedule. > The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. > Once funding is found, we can afford hiring some people almost full time > on the project with a decent salary... > > Yours freely, > > [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] > [ TUNES project for a Free Reflective Computing System | http://tunes.org ] > Brain, n.: > The apparatus with which we think that we think. > -- Ambrose Bierce, "The Devil's Dictionary" I'm writing a reply with my proposals. It's quite long, beyond my previous estimate: I need to discuss various aspects of the project - political, organizational and technical. Unfortunately I'm really slow at writing in English. I can anticipate that I'm very interested in supporting this evolution of the project. Definitely, I want and need a free Tunes-like system. Best regards. -- Massimo Dentico From youlian@intelligenesis.net Fri, 23 Jun 2000 15:02:40 -0400 Date: Fri, 23 Jun 2000 15:02:40 -0400 From: Youlian Troyanov youlian@intelligenesis.net Subject: Totally off topic A friend of mine is writing an article about Russian software talents. If someone here has suggestions, please contact me. 
Thanx, Youlian From jecel@tunes.org Fri, 23 Jun 2000 20:13:10 -0300 Date: Fri, 23 Jun 2000 20:13:10 -0300 From: Jecel Assumpcao Jr jecel@tunes.org Subject: TUNES development startup On Mon, 19 Jun 2000, Faré wrote: > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. Considering that I have started two companies (in 1986 and in 1999), you might want to take my advice. Considering that I haven't yet made any money at all, you might prefer to ignore me ;-) > We're particularly trying to define > 1) a business plan. This is critical. It is important to note that the OS market is dead, so positioning Tunes as an OS would be fatal. Be has released BeOS 5 for free and Lucent has done the same for Plan 9. QNX is going the same route. Linux is, of course, the cause of all this. I think only Microsoft is still able to make money on OSes, but even that shouldn't last more than two years or so (if the company does get split up, I wouldn't want to end up in the OS half...). My business plan is to make money selling hardware, so I can give the OS (Self/R) away for free (including for other people's hardware, PCs, so mine has to be really good in order to sell!). In short - a good business plan would be to have a great product that would use Tunes as an enabling technology. Another piece of advice is to find good names for things. The Transmeta people couldn't raise any money to work on "dynamic binary translation". All they had to do was rename this to "code morphing" and the venture capitalists suddenly became very interested! > 2) a development schedule. I have a nice PERT chart on my wall showing how I would be finished in September of 1987. But even so, I have some advice: check out the Extreme Programming web sites (like http://www.extremeprogramming.org/ and http://www.xprogramming.org) for a very reasonable development and planning method. 
> The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. How would this relate to the various "Tunes fringe" development projects? > Once funding is found, we can afford hiring some people almost full > time on the project with a decent salary... This is what I am trying to achieve as well, but I am trying to get funding from early customers instead of investors (not that I have much choice, down here in Brazil). This is a big step and I wish you the best of luck in it. I want to help in any way that I can. -- Jecel P.S.: I sent this email on Tuesday, but replied only to Faré instead of the list by accident. Today I was reading an article about trusted systems and Open Source, and was thinking that this could be a good niche for Tunes. With its proof systems and formal specifications, it could overcome the problems mentioned: http://slashdot.org/article.pl?sid=00/06/21/1333222&mode=thread From idiot@slack.net Fri, 23 Jun 2000 22:31:57 -0400 Date: Fri, 23 Jun 2000 22:31:57 -0400 From: Matt Miller idiot@slack.net Subject: Meta-text. On Thu, Jun 22, 2000 at 05:51:38PM -0400, Matt Miller wrote: > > the review subproject? (It think I used > to be subscribed, but I haven't seen any traffic) > Ok, zero response. Is an XML DTD with suitable browser tools sufficient? From lmaxson@pacbell.net Sat, 24 Jun 2000 08:20:29 -0700 (PDT) Date: Sat, 24 Jun 2000 08:20:29 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: One step at a time Like several others on this mailing list I have pursued a project on my own that parallels Tunes. I, too, want "...to replace existing Operating Systems, Languages, and User Interfaces by a completely rethought Computing System...". After that point we start to differ in priorities. For example, at the end of this quote comes "...yet (eventually) a highly-performant set of dynamic compilation tools...". 
This implies a shift in "system" priorities, a consideration of features separate from their implementation. That, from my perspective, is in error. I can only assume that it accounts for the several references in the Tunes HLL requirements list to the effect that the "computer will do this" or the "computer will do that". As one who has practiced in this profession for 44 years I have yet to find a computer that did anything except what it was told in software. What the software told it was in turn traceable to human direction. Thus I cannot on the one hand propose wanting "this" without a clear directive for getting from here to there. In short, a systemic approach in which all inter-dependencies and inter-connections progress as a whole. What I will concede and hold in common with Tunes is the need to shift drastically the division of labor in software development and maintenance from people to software. Ultimately this division leads to "letting people do what software cannot and letting software do what people need not". This same division of labor we have successfully applied to client processes, allowing their productivity to proceed apace with advances in technology. We have dismally failed in applying our own methods to ourselves as clients. That doesn't mean we haven't tried. We have had our share of evolutionary success in the "structured methodologies": structured (peer) reviews, structured programming, and structured analysis and design. Apparently among the Tunes advocates we should attach the same level of success to object-oriented methods. In spite of all these successes, software technology has lagged hardware, and software development continues its spiraling costs in time and money. The question is why? The answer is simple. Our tool set, our means of implementing software.
Now we undertook a major effort in the 80's to get the different vendor tools to operate in a seamless manner, leaving it only to the users to pick and choose, to plug and play from the vendor-supplied tool set. That effort was initiated by IBM in its AD/Cycle project, which ultimately received a failing grade after investing some millions of man-hours and dollars. The point is not to belittle an effort whose participants were every bit as serious as those of Tunes, but to understand what went wrong. The shortest answer is the logical implication of success. For most vendors success would have meant open competition, no proprietary (protected) niches, and most importantly a shrinking marketplace. You can talk all you want about having software technology track that of hardware, but the productivity consequence is that fewer people become necessary for greater volume. If we achieve it, we will sharply reverse the supply/demand equation from today's imbalance toward demand. To be even clearer, we will have a supply ability that will exceed demand. That in turn leads to a reduction in supply to bring the two more closely into balance. Frankly, it is quite simple to place software technology on track with hardware. For us it is the development of a single tool, a single language, and a single user interface (one with the tool). In fact such a tool is part of the Tunes requirements, though buried within the esoteric "Interactivity" feature. As a tool, as something which "seeks" to assume ever more of the clerical ("that which people need not") work, leaving people free for more creative ("that which machines cannot") pursuits, it forms a symbiotic system with the user. The user extends its capabilities as it does the user's in return. Together they form a system whose direction derives from the user, for whom the tool is an "intelligent" assistant. The tool then must assume all the Tunes requirements. To be self-extensible it must be self-defining.
Here I part with another of the Tunes "assumptions". While agreeing that an HLL is a high level language, I disagree that it is a high level (programming) language. For me it is a high level (specification) language, an HLL that is self-defining, self-extensible, and most importantly self-sufficient, i.e. never requiring the interceding of another language. This means that within its source form it can encompass all aspects of any hardware or software system. As a specification language it means containing the specifications of any machine architecture for which we will execute software. Thus at the lowest levels of abstraction we will have the various target machine architectures with a selectable connectivity (a path) to any higher level abstraction (levels of abstraction). I have such a language and such a tool in mind. As far as I can tell they meet the Tunes requirements. As a product, as a "tool set", they make "open software" under the FSF look closed. There is no need for restrictions on use, no need for standards, and no way to create incompatibilities while remaining within the syntax. It does not require protection because it cannot be violated. If you want to actually engage in "complete rethinking", I would suggest that you do so. That means starting with as clean a set of assumptions as possible. That applies to programs (as human productions), to objects, and to source forms. I would also suggest that you not belittle other systems, but have a clear understanding of what is right and what is not so right within them. Then you can profitably engage in a culling process for the "right" and a corrective one for the "not so right". Every one of them is a serious attempt by their author(s) and advocate(s) to do at least in part what we are attempting. In that manner we can continue as part of a "common cause".
From kyle@arcavia.com Sat, 24 Jun 2000 12:28:55 -0400 Date: Sat, 24 Jun 2000 12:28:55 -0400 From: Kyle Lahnakoski kyle@arcavia.com Subject: One step at a time > The tool then must assume all the Tunes > requirements. To be self-extensible it must be > self-defining. Here I part with another of the > Tunes "assumptions". While agreeing that an HLL is > a high level language I disagree that it is a high > level (programming) language. For me it is a high > level (specification) language, a HLL language that > is self-defining, self-extensible, and most > importantly self-sufficient, i.e. never requiring > the interceding of another language. I agree with this statement here. Any programming language should be a subset of the HLL. A programming language with all the complexities of the HLL would be too dynamic or complex for humans to understand. It is better to see the HLL as a set of domain-specific languages. > This means that within its source form it can > encompass all aspects of any hardware or software > system. As a specification language it means > containing the specifications of any machine > architecture for which we will execute software. > Thus at the lowest levels of abstraction we will > have the various target machine architectures with > a selectable connectivity (a path) to any higher > level abstraction (levels of abstraction). I would assume that you mean the HLL will be able to describe the specification for any hardware, and describe what the optimization measures are for the hardware. Furthermore I believe you are saying that a hardware specification language is a subset of the HLL. I have been doing a little work on the high level description of low level operations. > If you want to actually engage in "complete > rethinking", I would suggest that you do so. That > means starting with as clean a set of assumptions > as possible. That applies to programs (as human > productions), to objects, and to source forms. I am an incremental fellow.
I believe a complete rethink is not needed; any starting system can be evolved into Tunes, and like a caterpillar, can shed its origins eventually. I will admit that evolution is usually slower than revolution, but at least there is a working system at all stages. Anyway, a Tunes revolution will never be realized; starting from scratch will involve as many code rewrites as an evolved system, simply because Tunes has no formal specification. I would like to hear Water's thoughts on this point. His work on the Arrow system should have given him the experience to comment on the work required to do a complete rethink. I would also like to hear from others who have started from scratch and can tell their stories. > I would also suggest that you not belittle other > systems, but have a clear understanding of what is > right and what is not so right within them. Then > you can profitably engage in a culling process for > the "right" and a corrective one for the "not so > right". Every one of them is a serious attempt by > their author(s) and advocate(s) to do at least in > part what we are attempting. In that manner we can > continue as part of a "common cause". This last paragraph is what prompted me to reply. I find the language reviews not helpful at all. They do not specifically show the good and bad points of each language. Such a discussion would go into the semantics and syntax of the language and compare them to others. I would like to rewrite the language reviews. I would need everyone's help here because I do not know most of these languages. For example, Haskell is one I am learning now, and its type system is quite a complex, unfamiliar beast. But I wonder about how powerful static typing is, and I doubt it would fare well in a system with dynamic types. ---------------------------------------------------------------------- Kyle Lahnakoski Arcavia Software Ltd.
(416) 892-7784 http://www.arcavia.com From lmaxson@pacbell.net Sat, 24 Jun 2000 12:43:59 -0700 (PDT) Date: Sat, 24 Jun 2000 12:43:59 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: One language or many Kyle Lahnakoski wrote: "It is better to see the HLL as a set of domain-specific languages." and "Furthermore I believe you are saying that a hardware specification language is a subset of HLL." We had best agree to disagree on this point. I propose only a single specification language that covers the entire gamut or domain of software activities. This confusion between a specification language and a programming language is one we need to examine to make clear (or clearer) the distinction. It began with Iverson's APL (A Programming Language), intended as a specification language (written only, not executable) to replace flowcharting, and continued with Prolog, which made a specification language "behave" as a programming language. In my system all programming is done by the tool, the Developer's Assistant, based upon an input set of specifications. As evidence that this works for a specification language, I can refer to two computer architectures encoded in APL: the earlier Stretch, the IBM 7030 (in 1962), and the more remarkable S/360 (in 1964). Moreover, if you refer to the Intel reference manuals for its processors, more recently the Pentium family, you will find an HLL-encoded "specification" of every instruction. As you can see, this is not new; it has a nearly 40-year history. At one time there was a proposal within Burroughs to make APL the "standard specification" language for all software and hardware. One language which covered the hardware and software domains from the lowest level (hardware-encoded logic) to an unlimited number of higher levels. Burroughs corporate management turned it down. Since that time we have seen the rise of AI and logic programming, the most prominent remnant of which is Prolog.
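[Editor's note: the HLL-encoded instruction "specifications" Maxson mentions can be made concrete with a small sketch. This is purely illustrative and not from the Tunes lists or any vendor manual: a simplified, executable Python model of an 8-bit ADD in the spirit of the pseudocode specifications found in processor reference manuals. Real manuals also specify the AF and PF flags, addressing modes, and exceptions, which are omitted here.]

```python
# Illustrative sketch (an editor's assumption, not the author's system):
# specifying an instruction's semantics in an executable, HLL-like form.
# Simplified 8-bit ADD: result plus condition flags.

def spec_add8(dest, src):
    """Specification of an 8-bit ADD: returns (result, flags)."""
    full = dest + src                      # sum before truncation
    result = full & 0xFF                   # result truncated to 8 bits
    flags = {
        "CF": full > 0xFF,                 # carry out of bit 7
        "ZF": result == 0,                 # result is zero
        "SF": bool(result & 0x80),         # sign bit of the result
        # signed overflow: operands agree in sign, result differs
        "OF": bool(~(dest ^ src) & (dest ^ result) & 0x80),
    }
    return result, flags

result, flags = spec_add8(0x7F, 0x01)      # 127 + 1 overflows the signed range
```

Because the specification is itself executable, it can serve both as documentation and as a reference interpreter to test an implementation against.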
As a reference I could suggest "Simply Logical: Intelligent Reasoning by Example" by Peter Flach. Though ostensibly a tutorial on Prolog, it nevertheless serves as one on logic programming as well. If we back up a bit, broaden our vision somewhat, we will see that software development is a process. It takes as input user requirements, transforming (translating) them into specifications. That's the first step within the process (as the requirements come from outside it). Even the object-oriented advocates had to precede the programming (which occurs in the construction stage) with UML. The point is that something occurs prior to programming and the use of a programming language. If you will accept the process of specification, analysis, design, construction, and testing, then the question becomes: what is it that you respecify in the later stages? Why would you ever respecify anything except as part of the specification stage? If you do not, if that is the only writing the user does (write specifications), then you turn the implications of that writing over to the tool, the assistant, who performs the remaining stages automatically. That includes automated analysis, design, construction, and testing, all under control of the chosen set of specifications. In short there is no (people) programming nor programming language. The only language controlling each of the successive stages is the specification language. The logic engine (or engines) within the tool provide the implications of the specifications. They proceed asynchronously, incrementally, and in parallel. At any instant they reflect (present) the "state" of the system defined by the specifications. The language is specified within itself (self-defining). The tool is specified with the language. The tool, the language, and the source for both constitute the product received by the user with absolutely no strings attached to its use.
None of the restrictions of "open source" as none are necessary to insure (protect) its "openness". So there is only one language and one tool from which all the remaining "desirables" derive. From kyle@arcavia.com Sat, 24 Jun 2000 17:46:26 -0400 Date: Sat, 24 Jun 2000 17:46:26 -0400 From: Kyle Lahnakoski kyle@arcavia.com Subject: [Fwd: Re: One step at a time] Here is a message I received from Jason Marshall -------- Original Message -------- Subject: Re: One step at a time Date: Sat, 24 Jun 2000 11:49:46 -0700 (PDT) From: Jason Marshall Reply-To: jmarsh@serv.net To: kyle@arcavia.com > > If you want to actually engage in "complete > > rethinking", I would suggest that you do so. That > > means starting with as clean a set of assumptions > > as possible. That applies to programs (as human > > productions), to objects, and to source forms. > > I am an incremental fellow. I believe a complete rethink is not needed; > any starting system can be evolved into Tunes, and like a caterpillar, > can shed its origins eventually. I will admit that evolution is usually > slower than revolution, but at least there is a working system at all > stages. Anyway, a Tunes revolution will never be realized, starting > from scratch will have as many code rewrites as an evolved system just > because Tunes has no formal specification. If you look at what languages have enjoyed the strongest userbases over the last twenty years, I think you'll find that the average programmer can, in fact, only handle evolutionary changes to their environment. "Saving the world" only works if you can actually get "the world" to use your language. I have a personal theory that it is not at all a coincidence that C looks a lot like assembly, C++ looks a lot like C, and Java looks a lot like C++. (VB comes from childhood exposure to BASIC, but it's not polite to talk about that). People seem to need to fit part of the new system onto their mental map of the old system. 
If this is true, then for functional programming to go mainstream, someone first has to develop an OOL that borrows significantly from functional programming and C++ or Java. > > I would also suggest that you not belittle other > > systems, but have a clear understanding of what is > > right and what is not so right within them. Then > > you can profitably engage in a culling process for > > the "right" and a corrective one for the "not so > > right". Every one of them is a serious attempt by > > their author(s) and advocate(s) to do at least in > > part what we are attempting. In that manner we can > > continue as part of a "common cause". > > This last paragraph is what prompted me to reply. I find the language > reviews to not be helpful at all. They do not specifically show the > good and bad points of each language. Such a discussion would go into > the semantics and syntax of the language and compare to others. > > I would like to rewrite the language reviews. I would need everyone's > help here because I do not know most of these languages. For example, > Haskell is one I am learning now, and the type system is quite a complex > unfamiliar beast. But I wonder about how powerful static typing is, and > I doubt it would handle well in a system with dynamic types. I think I could manage a pretty thorough critique of Java. Regards, Jason Marshall > > ---------------------------------------------------------------------- > Kyle Lahnakoski Arcavia Software Ltd. > (416) 892-7784 http://www.arcavia.com > > From soma@apex.net.au Sun, 25 Jun 2000 10:28:55 -0700 Date: Sun, 25 Jun 2000 10:28:55 -0700 From: Soma soma@apex.net.au Subject: One step at a time Lynn H. Maxson wrote: > Like several others on this mailing list I have > pursued a project on my own that parallels Tunes. > I, too, want "...to replace existing Operating > Systems, Languages, and User Interfaces by a > completely rethought Computing System...". After > that point we start to differ in priorities. 
For > example at the end of this quote comes "...yet > (eventually) a highly-performant set of dynamic > compilation tools...". This implies a shift in > "system" priorities, a consideration of features > separate from their implementation. Hear, hear. I am kind of responding to Jecel's email here too. I feel the rethink is necessary because we are looking at a 40 year history of software development which got many things wrong (which need no longer be the case, but mostly still is), and like any great artifice, the most important bricks are at the bottom. And it's true: an operating system is not a thing in itself; rather it is a platform from which to build applications, and applications are what sell computers. Many of the members of the list probably have some application they imagine sitting on top of their desired operating system. My own idea is that the killer app of the first part of the 21st century is natural language. The main purpose I envisage for the system I am dreaming up is a system in which everything arbitrary can be changed dynamically. Rather than try to invent a language, like Tunes is doing, I feel the way forward is to allow users to define their own language, where you can program the computer with something like what we presently call pseudocode. This language system need not be textual, however; I have seen visual programming systems, and they make programming so simple. So I guess my project is aiming at blowing open development to the common user. If you have a piece of software, and you want it to do something differently, I feel that you should be able to tell the system so, and that through a question/answer dialogue the computer figures out what you mean by you explaining and elaborating until it understands what you intend. Then for the money side of it: applications will get built for this system, and the OS comes free with it.
As everyone knows, the OS market is now pretty much defunct, so in line with that, the system I am building, the "os" part, is going to be as bare as possible. I am using an 'exokernel' model, and just yesterday decided that the best way to program it is with a bytecode dynamic translation/interpretation system, because that allows one to jump platforms much more easily - only device interfaces and executive kernels need to be tailored to hardware, which reduces the porting workload. This has been demonstrated with Tao OS, and I recently encountered a Macintosh Oberon system which used a dynamic translation system to handle the two different Mac architectures. I decided on this path after looking at x86 assembler - its memory addressing system is such an ancient retrofit, even the one on the 386 and above. I am used to 68000 assembler, but the 68k system doesn't make you use the MMU just to look at a stupid 64k block. I am of the view that the need to protect applications from each other shows a shortcoming in the development systems used to write applications for conventional operating systems. The problem is that the programming models in common use are at least ten years old. I used to own an Amiga computer, and it did multi-tasking without any memory protection. If an application has bad memory usage patterns it needs to be fixed. The cost of protecting the system from these kinds of errors should be on the development side, and ideally the compiler should know how to make sure that programs stay within their memory allocations. I guess I don't really have to build my system from scratch, but there are a number of reasons why I want to: I want the database/linking system that it will use to be orthogonally persistent, and I also want to implement a filing system that is tuned to the database's way of doing things. The reason for using a bytecode translation system is that I prefer to program in assembler for the high efficiency.
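[Editor's note: the portable-bytecode idea Soma describes can be sketched in a few lines. This is an editor's illustration with invented opcode names, not anything from Soma's system: a tiny stack-based virtual machine in which only the interpreter loop (or a dynamic translator for it) would need a native port, while the bytecode itself stays hardware-independent.]

```python
# Minimal sketch of a portable stack-based bytecode VM. All opcode
# names are invented for illustration; a real system (Tao OS, the JVM)
# has a far richer instruction set plus a dynamic translator.

PUSH, ADD, MUL, PRINT, HALT = range(5)

def run(program):
    stack, pc, out = [], 0, []
    while True:
        op = program[pc]; pc += 1
        if op == PUSH:                       # push the following literal
            stack.append(program[pc]); pc += 1
        elif op == ADD:                      # pop two, push sum
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:                      # pop two, push product
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == PRINT:                    # pop and record output
            out.append(stack.pop())
        elif op == HALT:
            return out

# (2 + 3) * 4
code = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT]
```

Porting this "system" to new hardware means reimplementing only the `run` loop; every program encoded as bytecode carries over unchanged, which is exactly the reduction in porting workload being claimed.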
However I fully intend that once I develop the base platform (it's kind of like a LibOS on top of an exokernel) the system will be able to re-compile itself, and even change all the assembly language into English. So I guess the killer app I am aiming for is a totally open, extensible system development system, which can be taught to understand any new language, and can be redefined as circumstances require. It is about time that computer programmers stopped reinventing wheels. My 2c Soma From lmaxson@pacbell.net Sat, 24 Jun 2000 23:23:24 -0700 (PDT) Date: Sat, 24 Jun 2000 23:23:24 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: One language, no subset. Kyle Lahnakoski wrote: "The second part, I do not see your objection because you repeated it: 'One language which covered the hardware and software domains from the lowest level (hardware encoded logic)...' The 'one language' being HLL, and the 'subset of HLL' being that part which applies only to hardware." One language. No subset. Not because it is not possible. But because it is not necessary. It is a specification language. It makes no distinction between a software specification and a hardware specification. They have the same look and feel. What separates them is the "level of abstraction". You have two basic machine architectures, CISC and RISC. Of the two, RISC is the lower level. That simply means that it is possible to implement a CISC architecture with a RISC. In manufacturing terms your RISC instructions are raw material and your CISC instructions are assemblies, consisting of one or more RISC instructions. These define the two lowest levels of abstraction. You can, if you like, refer to the specifications here as machine-dependent. But that is only due to your understanding of their levels within the hierarchy of levels that exist. At the level above CISC we have our first machine-independent level, the raw material of all software.
It consists of control structures (sequence, decision, and iteration) and primitive operators. All (machine-independent) levels above this level consist of assemblies which contain other assemblies and raw material. Aside from the software irregularities of recursion and co-routines it behaves identically to any other manufacturing system. Again there is no subset. The same language that describes (specifies) the machine architecture differs in no way from that used to specify software. I do not argue against anyone's use of multiple languages. I only argue that it is unnecessary. Free yourself of the programming paradigm. The fact of the matter is that all computer hardware and software has a 100% basis in pure logic. We produce no illogical machines or software. If they behave illogically, we deem it something to be corrected. Every one of you who has taken plane geometry or algebra or symbolic logic or mathematical logic has been exposed to everything that can occur in a computing machine or software. Having learned it in a textbook manner and having used that learning in the specification and resolution of a logical problem, e.g. the decomposition of a binomial or the proof of a theorem, you have been exposed to the only language necessary, that of logic. I offer a specification language capable of expressing any logical expression. Thus it is capable of expressing machine architectures and any software. It is no better than any other specification language that does the same. Which a given user prefers is of no concern to me. I want to enable whatever preference he exhibits. I have run into expressions of disgust when using a declarative form as an example. The argument is not to bother the user with such knowledge. The answer is to offer the user a non-declarative language. My answer is to offer both within a single language, leaving it to the user to decide which, in what circumstances, he "prefers" to use.
That's the problem when you fall back on "leaving it up to the computer". It implies that the language used by the computer differs from that of the user. While it certainly happens, again the issue lies in its necessity. I have this opposition to arrogance on the part of a tool author who wants to impose his system on the user, in effect deciding for the user. I feel as a tool builder that I want to enable the user to find and do things his way. I don't know what's best nor for how long something will remain top dog in this profession. There is no arrogance in a specification language capable of any logical expression. Given the existing tools of logic programming, specifically the two-stage proof process of its logic engine (completeness proof and exhaustive true/false testing), we have a relatively simple (and trusted) means of dealing with incompleteness, ambiguity, and contradiction. The secret lies not in avoiding them nor dictating the order of their resolution, but in simply noting them (and their reasons). When we are in development we are by definition "in process" in an incomplete state. Actually a series of such states until eventually we arrive at a complete state (at least for this version). For Soma, who prefers working at the assembly language level, I would suggest that he do it with a specification language based on logic programming. In that manner the logic engine will generate all possible logically equivalent lower-level forms of a higher-level assembly. This means the generation of "all" logically equivalent sequences of machine instructions (specifications). That's more than he could ever construct (and test) in his lifetime, as well as more than the best code generated by the best assembly language programmer. What I am asserting here is the "normal" production of executables from a logic-programming-based specification language as fast as or faster than the best assembly language program written by anyone.
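[Editor's note: the exhaustive enumeration of "all logically equivalent sequences of machine instructions" that Maxson describes can be illustrated in miniature. This sketch is the editor's, with a tiny invented instruction set: it enumerates all short instruction sequences and keeps those observationally equivalent to a target function, checking equivalence by exhaustive testing over a small input domain. Later tools known as superoptimizers work on the same principle, usually with a prover instead of brute-force testing.]

```python
# Toy exhaustive search for instruction sequences equivalent to a target.
# The four single-register "instructions" below are invented for
# illustration; equivalence is checked by exhaustive true/false testing
# over a small domain of inputs.

from itertools import product

OPS = {
    "NEG": lambda x: -x,        # negate
    "INC": lambda x: x + 1,     # increment
    "DEC": lambda x: x - 1,     # decrement
    "DBL": lambda x: 2 * x,     # double
}

def run(seq, x):
    """Apply an instruction sequence, left to right, to the value x."""
    for op in seq:
        x = OPS[op](x)
    return x

def equivalent_sequences(target, max_len=3, domain=range(-8, 9)):
    """All sequences up to max_len that agree with target on the domain."""
    found = []
    for n in range(1, max_len + 1):
        for seq in product(OPS, repeat=n):
            if all(run(seq, x) == target(x) for x in domain):
                found.append(seq)
    return found

# Every sequence of length <= 3 computing f(x) = -(x + 1):
candidates = equivalent_sequences(lambda x: -(x + 1))
```

Even this toy search finds that `INC; NEG` and `NEG; DEC` are interchangeable, which is the kind of alternative a human assembly programmer would have to discover by hand; the combinatorial cost of longer sequences is also why real systems need the pruning of a logic engine rather than raw enumeration.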
No performance hit regardless of software methodology. If you want to completely rethink the process, then do so. From wdebruij@dds.nl Sun, 25 Jun 2000 20:10:02 +0200 Date: Sun, 25 Jun 2000 20:10:02 +0200 From: Willem de Bruijn wdebruij@dds.nl Subject: A different system As I'm currently writing a document about a new way of looking at Operating Systems I came across TUNES. Seeing that many parts are overlapping, I thought you might be interested in reading my document for a broader perspective. Go to http://atoms.htmlplanet.com for more information. All remarks and criticism are welcome; please post them in my discussion board. Willem
From lee.salzman@lvdi.net Mon, 26 Jun 2000 04:44:00 -0700 Date: Mon, 26 Jun 2000 04:44:00 -0700 From: Lee Salzman lee.salzman@lvdi.net Subject: A different system Willem de Bruijn wrote: >As I'm currently writing a document about a new way of >looking at Operating Systems I came across TUNES. Seeing >that many parts are overlapping I thought you might be >interested in reading my document for a broader perspective. >Go to http://atoms.htmlplanet.com for more information. Have you looked at Self (http://research.sun.com/self/)? It is a Smalltalk-like environment based around a prototype object system which emphasizes self-representing objects. It sounds very similar to your plans and already has much development work done. (append '(here) *fancy-signature*) From m.dentico@galactica.it Mon, 26 Jun 2000 19:37:53 +0200 Date: Mon, 26 Jun 2000 19:37:53 +0200 From: Massimo Dentico m.dentico@galactica.it Subject: TUNES development startup - Part 1 Francois-Rene Rideau wrote: > > Dear Tunespeople, > I'm in contact with various people to found a startup company that > would develop TUNES. Any advice, contact, funding, idea, etc, is welcome. > We're particularly trying to define > 1) a business plan. > 2) a development schedule. > The idea would be to raise funds thanks to an early prototype, > so that the path to such a prototype is important, and advice is sought. > Once funding is found, we can afford hiring some people almost full time > on the project with a decent salary... > > Yours freely, > > [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] > [ TUNES project for a Free Reflective Computing System | http://tunes.org ] > Brain, n.: > The apparatus with which we think that we think. > -- Ambrose Bierce, "The Devil's Dictionary" I'm sorry for the delay.
I have finally decided to divide this long e-mail into different parts and to send each part separately as soon as possible, when each is in a decent form (I hope). RAISING FUNDS IMO the best candidates as financiers, at least in the early phase, are public institutions like the European Union (EU) and probably small, innovative companies particularly receptive to free software. These small companies are more responsive and less conservative than corporations, and we could gain good feedback from them in the application of the Tunes framework to different fields. The motivation for this orientation is principally that the political, economical and philosophical choices on which the Tunes project is based are against corporate culture (namely true liberalism vs. capitalism). As you have highlighted, Faré (thanks for this work of demystification), true liberals make a moral choice establishing that protecting the public interest is good. Then they claim (with good arguments and examples) that the public interest is the consumer interest, *not* the producer interest. Certainly I don't share the extreme position that every human relationship is definable always and only in terms of competition and exchanges in a "market", but I think it's the same for you, Faré (I suspect that this comes from false liberals or liberalists). Ironically, the free software cooperative model demonstrates that co-operators, even with scarce resources, can challenge big competitors. The liberal model of competition, placing the accent on the free flow of information and on freedom in general, seems to me more similar to the free software movement than to the capitalist model, with its patents, copyrights and false competition (secret or manifest trusts). Besides a more direct involvement of the user base, a soft distinction between consumers and producers, thanks to the free availability of sources, is a distinct advantage of the free software model in terms of freedom for people.
In any case I want to suggest a truly transnational organization based on the Net. The EU is now particularly sensitive to these themes (tele-working, partnership between EU citizens), but I think this project could do better: a worldwide network of collaborators and "federated organizations" (businesses or non-profits). Organizing such a network at an effective level of productivity (similar to ordinary business) is certainly not simple, but it is aligned with Tunes and its central idea of decentralization. The forum "Jobs in the Knowledge Society" is quite interesting, at least to understand the EU policy on these themes: - http://www.ispo.cec.be/jobsinis/ As for possible objections to public intervention, I beg you all to be realists: in Europe, when a big company is going wrong, capitalists demand public intervention, "to save jobs" they say, but when profits start growing again they don't refund the public, nor do they share profits with their workers. However, I want to remind you of the simple but true fact that public funds are our money (taxes). -- Massimo Dentico From youlian@intelligenesis.net Mon, 26 Jun 2000 17:02:45 -0400 Date: Mon, 26 Jun 2000 17:02:45 -0400 From: Youlian Troyanov youlian@intelligenesis.net Subject: A Lisp through the Looking Glass Anybody read this?
http://www.cb1.com/~john/thesis/thesis.html From lmaxson@pacbell.net Mon, 26 Jun 2000 22:45:26 -0700 (PDT) Date: Mon, 26 Jun 2000 22:45:26 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: KISS I do not know at what point a program will break from the rules set into it by people and begin operating on rules determined strictly from within itself, or perhaps from "compatriots" who have also managed to free themselves from servitude to mankind. I do know that until that point in time you cannot talk about a computing system and its dynamics without tracing them back to rules encoded by people. Thus to talk about computing systems or features of such systems (as in the requirements of the Tunes HLL) as somehow detached from people-determined rules is false to fact. It has no place in something purporting to be part of computer science. In truth I do not expect a computing system to do anything, including exhibit dynamic behavior, according to any directives other than those I have given it. Thus if I want all these things, all these features, all these goodies, then the obligation is on me to provide them in the "computing system". Basically such references to the dynamic behavior of computing systems exist as indirect references to some people-determined characteristic. None of these are characteristic of a language, every one of which has its syntax, semantics, proof theory, and meta theory entirely set, determined, and cast in concrete by people.
If we want all these things--genericity, precision, uniformity, consistency, orthogonality, etc.--then we have to ensure that we have put them there. They do not occur "naturally" in any computing system. Every major advance in computing science has occurred through someone's successful application of the KISS (Keep It Simple, Stupid) Principle. In the instance of programming languages it has occurred through a "Keep It Simpler" evolution from machine language (first generation) to symbolic assembler plus macros (second generation) to logic-in-programming-based HLLs (third generation) and at the last go-around to programming-in-logic-based HLLs (fourth generation). Now what is interesting in this "linguistic" evolution is that each generation encompasses the previous: all that was available in the earlier can be invoked by the later. The clue here is "can be", and what is or is not is a characteristic of the specific language. If you follow this line of reasoning, then it follows that the Tunes HLL must fall into the fourth generation group and not the third, which appears in much of the Tunes-associated documentation. Fourth generation HLLs are specification languages with a focus on "what" must occur, even in "how" terms. The beauty of a specification language based on all of formal logic and the universe of objects is that it is "universal", i.e. it requires no other language as part of any implementation. As a universal specification language it is capable of specifying itself, i.e. it is self-defining. Thus by implication it is self-extensible. As part of its universal nature it is self-sufficient. As part of that self-sufficiency it provides meta-programming, the ability to dynamically specify its own behavior. It is too easy to focus on the narrow issue of a specific language (programming or specification). To do so removes the context to which any computing system language must conform: formal logic.
All computing systems, hardware and software, have a 100% base in formal logic. No hardware, no software violates these rules without subjecting itself to "corrective logic". Thus the only language that you need is one encompassing formal logic, all of formal logic. With that language you can specify any other as well as itself. Not all specification languages, e.g. Prolog, have this universal feature. Nothing, however, prevents extending one to include it. Better yet, create one of your own. As a suggestion, for anyone really concerned with "user ease", make it one as close to formal textbook use and, within that, as close to natural language as possible. Given the success that SQL continues to enjoy among "casual users", users need not know that it is a specification language in order to use it. Again this reverts to the KISS principle. What we have here is something that must be resolved in formal logic terms. The language (as well as any implementation) must allow any formal logic expression. Any such language will be a specification language. From lmaxson@pacbell.net Tue, 27 Jun 2000 09:17:54 -0700 (PDT) Date: Tue, 27 Jun 2000 09:17:54 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Two for the road To support the proposal that the Tunes HLL be a specification language based on logic programming I would like to offer two examples of the "complete rethinking" possible in software. The first deals with optimization, achieving performance equal to or better than the best assembly language program. The second deals with "universal" migration of executables from any other language into Tunes source. One concept which permeates all of formal logic is "logical equivalence", the fact that two different expressions can produce the same result. We rely on this, for example, in any decomposition process of "reducing" a higher level abstraction to lower level forms.
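This notion of logical equivalence among decompositions can be pictured concretely. In the Python sketch below (the candidate list, the cycle costs, and the 8-bit domain are all invented for illustration; no Tunes tool works this way), several candidate "productions" for one operation are tested exhaustively against the specification, and the cheapest surviving equivalent is selected:

```python
# Toy "specification": what the operation must compute.
def spec(x):
    return x * 2

# Candidate "productions" (decompositions), each with an invented
# cost in cycles; nothing here comes from a real code generator.
candidates = [
    ("mul x, 2", lambda x: x * 2,  3),
    ("add x, x", lambda x: x + x,  1),
    ("shl x, 1", lambda x: x << 1, 1),
    ("sub x, 2", lambda x: x - 2,  1),  # not equivalent; must be rejected
]

# Exhaustive true/false testing over a finite domain
# (8-bit unsigned values, so the test really is exhaustive).
domain = range(256)
equivalent = [(name, fn, cost)
              for name, fn, cost in candidates
              if all(fn(x) & 0xFF == spec(x) & 0xFF for x in domain)]

# 0, 1, or more logically equivalent instances survive; pick the cheapest.
best = min(equivalent, key=lambda c: c[2])
print(len(equivalent), best[0])  # → 3 add x, x
```

The point of the sketch is only the shape of the process: generate, test exhaustively, and keep the whole set of equivalents rather than a single arbitrary choice.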
In logic programming this decomposition process is subject to a two-stage proof process: (1) completeness, in that we have enough information to proceed, and (2) exhaustive true/false testing. The net result of the exhaustive true/false test is either 0, 1, or more true instances. If 0, then we do not have a decomposable instance. Otherwise we have 1 or more logically equivalent decomposed instances. Just as in any SQL query, we have "all" the possible instances. Compilers and interpreters go to great pains to ensure that one and only one decomposition (code generation) is possible. The user then is "stuck" with the "limits" of the implementation (ultimately the implementer). Given the capabilities of logic programming the point is "why?". Why not generate all the possibilities (all the true logically equivalent instances)? Logically, from the set of "productions" there exists at least one whose performance is equal to or better than any other. Whatever that "one" is, it is also at least equal to or better than anything an implementer can produce. If we truly believe in the principles of reflective programming and meta-programming, then we must specify the ability of a tool which produces an executable to incorporate "reflection" within it. Among other things that means "knowing" where the executable spends its time. If, for example, we determine that some "high level function" representing a hierarchy of lower level functions needs "improvement", then we can "mark" (or otherwise specify) that this particular function instance (and all its included functional instances down to the machine code level) be executed "inline". This provides a logically equivalent, though better optimized, execution within the executable. Two things to note here. One, an executable is a production (generated code), not source. No change to source is necessary to allow this form of optimization to occur.
Thus the modularity (the separateness) of the source remains, but its "boundaries" need no longer be respected, i.e. embedded within the executable. Two, the fact that the source remains "intact" but "blurred" within an executable, i.e. only one source with multiple "interpretations", is something not possible for an assembly language programmer. To achieve the same he must have matching source for each executable. He could do it, but historically he cannot afford the "time". In such a manner the Tunes HLL through its implementation should allow the generation of executables whose performance cannot be exceeded by any other programming language (from symbolic assembly on up). In short this has eliminated the last vestige of any excuse for less than optimal performance, e.g. 50% of C++. Any offering whose implementation "accepts" less should be dropped from consideration. Now to drop the second shoe. If you follow the principle of logical equivalence in the decomposition process and accept the "standard" exhaustive true/false proof process of logic programming which results in 0, 1, or more logically equivalent forms (results), then accept that the same principle (logical equivalence) applies to a composition process, the constructing of higher abstraction levels from lower. It's important to understand that the decomposition of the lowest level of machine-independent abstractions, that which contains the control structures (sequence, decision, iteration) and primitive operators, into machine-dependent abstractions (machine instructions) is complete: every possible translation from the higher to the lower exists (and is known). This has significance when it comes to migration strategies. It, in fact, reduces to a single (universal) strategy. Instead of writing separate "translators" for the source of other languages to the Tunes HLL, write one that translates the executables of those languages into Tunes HLL source.
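The migration strategy above amounts to inverting a known, complete mapping from primitive operators to instruction sequences. A deliberately tiny Python sketch can illustrate the shape of such an inverse translator (the three-instruction "executable" format and the patterns are invented; a real translator would be vastly more involved):

```python
# A toy "executable" format: a list of (opcode, operand) pairs.
# Because every translation from primitive operator to instruction
# sequence is assumed known and complete, the inverse mapping can
# be recovered by pattern-matching instruction windows.
PATTERNS = {
    ("LOAD", "ADD", "STORE"): "{2} := {0} + {1}",
    ("LOAD", "SUB", "STORE"): "{2} := {0} - {1}",
}

def to_source(executable):
    """Translate an instruction sequence back into HLL-style source."""
    source = []
    i = 0
    while i < len(executable):
        for pattern, template in PATTERNS.items():
            window = executable[i:i + len(pattern)]
            if tuple(op for op, _ in window) == pattern:
                source.append(template.format(*(arg for _, arg in window)))
                i += len(pattern)
                break
        else:
            raise ValueError(f"unrecognized instruction sequence at {i}")
    return source

prog = [("LOAD", "x"), ("ADD", "y"), ("STORE", "z"),
        ("LOAD", "z"), ("SUB", "w"), ("STORE", "v")]
print(to_source(prog))  # → ['z := x + y', 'v := z - w']
```

Real machine code admits many instruction sequences per operator, which is exactly why the completeness of the forward mapping matters for this strategy.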
I offer these two strategies, one on optimization and the other on migration, as examples of rethinking computing systems. Again I make the admonition: if we are going to completely rethink computing systems, I suggest that we do so. From lmaxson@pacbell.net Tue, 27 Jun 2000 14:03:52 -0700 (PDT) Date: Tue, 27 Jun 2000 14:03:52 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Response regarding coding efficiency Billy wrote: "It's certain to be worse than what a hand-coder could achieve, because the logic system is limited to things which are actually identical to the original. The hand-coder is only thus limited during the process of refactoring, not the process of rewriting." Here I must strongly disagree. There is no hand-coder on the face of this earth who can match the ability of a logic engine to examine all possible logical instruction sequences which are logically equivalent to the higher level source. In the time that the best hand-coder examines a sequence, decides on its order, and keys it in, a logic engine would have done the same (and much more extensively) tens and possibly hundreds of millions of times. Read once more what Fare has written in the glossary under "reflective" (and elsewhere) with respect to automating more (clerical) activities thus freeing the user up to engage in more creative pursuits. It is this division of labor, of deciding who gets what, that leads to the guidelines of "let people do what software cannot and software do what people need not." On the other hand we do not preserve equality or, as you say, the "actually identical". No HLL source code looks like machine code. Therefore some translation occurs. Besides, we are engaged in translating (decomposing) a higher-level source into a lower level format. Since identity (the "actually identical") is out of the question, the best we can achieve is logical equivalence.
I support this with what occurs now in practice in compiler optimization or, for example, in Transmeta's "software morphing", where the actual logical organization of the executable differs from the programmer-specified source. In effect the programmer is writing a specification of "what" he wants while the optimizing process translates this into a "how". If you read the Transmeta report in the May 2000 issue of the IEEE Spectrum, this will be made quite clear to you. On a final note it is the hand-coder who suffers the fate (and expense) of rewriting. That in plain terms is "maintenance", bringing about a change to something that exists. A logic engine never rewrites, only regenerates, in effect tossing out whatever existed previously and starting from scratch. None of this is clear to you because it simply is not done today by implementers of compilers and interpreters whose "personal" logic engines "demand" that only one code generation option (result) occurs. The output of their compilation is "one (and only one)" executable. Even the logic programming compiler writers, e.g. Prolog, fall into this trap though they have the means to generate every possible logically equivalent executable. Now I am not Michael Abrash ("Zen of Assembly Language"). I may be comfortable in C but far less so in assembly language. In truth I don't want my result to depend on selecting the "right" compiler and library to get the best possible code. I should get it regardless of what tool I choose. There is no reason for a user to get screwed by a vendor even if both are acting in "good faith". Nor is there a reason why, if I have expressed a situation correctly, the performance I get should differ from writing it one way as opposed to another. Again read the requirements (and the glossary) that Fare has written. The essence is to increase user productivity, not decrease it, nor overburden the user with details best left up to what he calls the "computing system".
A user should have to write no more than the minimal specification which is logically correct. That logical correctness in logic programming occurs as the first proof (completeness) in a two-stage proof process. It occurs in every SQL query. Users are interested in results, not in how they are achieved. That the logic engine organizes it in a manner far different than it is written is of no interest to the user as long as it satisfies his purpose: logical equivalence. From kyle@arcavia.com Tue, 27 Jun 2000 18:21:26 -0400 Date: Tue, 27 Jun 2000 18:21:26 -0400 From: Kyle Lahnakoski kyle@arcavia.com Subject: Response regarding coding efficiency "Lynn H. Maxson" wrote: > It occurs in every SQL query. Users are interested > in results, not in how they are achieved. That the > logic engine organizes it in a manner far different > than it is written is of no interest to the user as > long as it satisfies his purpose: logical > equivalence. This is the second time that you bring up SQL. SQL should be an excellent example of the difficulty involved in executing a specification most efficiently. Queries are simple constructs with very limited expressive permutations. It should be one of the simplest languages to optimize. Yet the vendors, with an interest in making their queries run fast, have been unable to fully optimize queries. Without mentioning indexing, or tablespace allocation, and just with the phrasing of the SQL, I can make significant performance improvements in a query. An example in Oracle is the use of the "in" keyword; it is better to join the tables than to use the 'in' clause. If queries are so difficult to optimize, then a general specification language is much more difficult. Optimization is not just a matter of finding the fastest running equivalent statement, it is about providing necessary information to the automatic optimizer.
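The 'in' versus join example is instructive precisely because the two phrasings are logically equivalent — they specify the same result set — even though an optimizer may execute them very differently. A small runnable sketch using SQLite (table names and data are invented for illustration; the performance claim is Oracle-specific and not reproduced here) shows the equivalence of results:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders    (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'US'), (3, 'EU');
    INSERT INTO orders    VALUES (10, 1), (11, 2), (12, 3), (13, 1);
""")

# Phrasing 1: the 'in' clause.
q_in = """SELECT id FROM orders
          WHERE customer_id IN (SELECT id FROM customers WHERE region = 'EU')
          ORDER BY id"""

# Phrasing 2: the logically equivalent join.
q_join = """SELECT o.id FROM orders o
            JOIN customers c ON c.id = o.customer_id
            WHERE c.region = 'EU'
            ORDER BY o.id"""

rows_in = con.execute(q_in).fetchall()
rows_join = con.execute(q_join).fetchall()
print(rows_in == rows_join, rows_in)  # → True [(10,), (12,), (13,)]
```

Which phrasing runs faster is entirely up to the engine's optimizer — which is exactly Kyle's point about the gap between specification and efficient execution.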
Providing a language (or set of languages) that forces the programmer to provide the necessary optimization information, and/or making analysis algorithms to perform optimization, will take a lot of work. I am not saying that your statements are incorrect. I say that they do not emphasize the complexity of the solution you are proposing. ---------------------------------------------------------------------- Kyle Lahnakoski Arcavia Software Ltd. (416) 892-7784 http://www.arcavia.com From lmaxson@pacbell.net Tue, 27 Jun 2000 20:52:01 -0700 (PDT) Date: Tue, 27 Jun 2000 20:52:01 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Two for the road Kyle Lahnakoski wrote: "You speak without proof, reference or explicit example. I find it impossible to believe what you say without at least one of these components." I'm sorry, I'm only familiar with three available specification languages--the Z-specification language (predicate logic), Prolog (clausal logic), and Trilogy (predicate logic). I do believe that I offered a reference to "Simply Logical: Intelligent Reasoning by Example". I would offer you an article that appeared in the September or October 1998 issue of the CACM. I did offer you one article recently published about the Transmeta software morphing effort. To that I could add two defunct software products, one from IBM (TIRS--The Intelligent Reasoning System) and one from Knowledge Garden (KnowledgePro). If you like, you can toss in every other AI rule-based product. I base everything on what experience I've had with a number of such products. They are my examples. In another response you question my reference to SQL. Whatever optimization difficulties the vendor, specifically Oracle, has do not prevent it from executing an SQL query, an add, delete, or update. The language is a specification language of a limited domain. It is one which has more users than Lisp, C, C++, or Java.
Moreover, more non-programmer users, the ones we keep saying we want the Tunes HLL to support, use it than programmers. My experience lies more with DB2 on the mainframe (MVS) and on the desktop (OS/2). I have known numerous DBAs whose purpose lay in continually examining the performance statistics of the DB manager for use in tuning. The point is that what they discover and what they do in tuning to increase or optimize performance essentially becomes clerical. So much so that DB2 as it has evolved has taken on much of the tuning responsibility for itself. The success that it has had in this probably accounts for the Oracle ads which quote what percentage of the top e-business accounts use it. I continue to use SQL as an example due to the number of non-technical users who have taken to it. I don't use it as an example of optimization, but as one of a specification language in which the user says what he wants and the conditions under which he wants it, leaving the procedural logic to another agent within the database manager. "Providing a language (or set of languages) that forces the programmer to provide the necessary optimization information, and/or making analysis algorithms to perform optimization, will take a lot of work." I'm not sure what I said that caused this statement as I most certainly don't want to force a programmer to do anything except find a different line of work. I don't even mean that facetiously. That's one of the differences between a specification language and a programming one. There is the writing of specifications. However much the writing resembles programming, that's strictly coincidental. No people programming occurs (it is an automated process). No programmers are required. Again I refer you to SQL, which is an example of a specification language, not a programming one.
All that aside I am saying that the principle of logical equivalence allows the optimization of any logically correct expression regardless of how it is written. In such a system no writer is forced to do anything except the bare minimum, write a correct logical expression however he sees fit. Absolutely no force of any other kind is necessary, certainly none of those which you mention. Look, you have never seen logic programming used to provide this level of optimization. If you want to say that's a challenge, well, so be it. If you want to say that we are not up to the challenge, well, that might reflect a reasonable difference of opinion. I am correct in the two-stage proof process involved in logic programming, the completeness proof and the exhaustive true/false proof. The most practical example of it is Prolog. If you are not familiar with it, you can download a free working copy of the PDC Prolog product replete with sample code of some sophistication. The issue here is code optimization within the concept of reflective programming, of changing the pre-fixed logic of current code generators in which only one result is "tolerated". Everyone reading this should have experience with multiple products supposedly of the same language whose optimization produces different results. Here it is the luck of the draw. Reflective programming says that it shouldn't be. For a given primitive operator and associated operands more than one machine instruction sequence can produce a given (logically equivalent) result. Of that set at least one is equal to or greater than any other. That's logic. First you have to have a means of producing the set of logically equivalent machine instruction sequences. That is not done in any existing compiler or interpreter. No one reading this has ever experienced this occurring in any product. No dialect of Lisp, for example, with its "pattern matching" capability has ever done this, though it certainly lies within its capability.
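A toy version of "producing the set of logically equivalent machine instruction sequences" can be written as a brute-force search. In the Python sketch below, the four-instruction register machine is invented for illustration (real instruction sets make this search explode combinatorially, which is the practical obstacle): every sequence up to a length bound is generated, and those equivalent to the specification over a test domain are collected.

```python
from itertools import product

# Tiny single-register machine: each instruction rewrites register r,
# with access to the input x. The instruction set is invented.
INSTRUCTIONS = {
    "inc":  lambda r, x: r + 1,
    "dbl":  lambda r, x: r * 2,
    "addx": lambda r, x: r + x,
    "zero": lambda r, x: 0,
}

def run(program, x):
    r = x  # the register starts out holding the input
    for op in program:
        r = INSTRUCTIONS[op](r, x)
    return r

def equivalent_programs(spec, max_len, domain=range(-8, 9)):
    """Produce every instruction sequence up to max_len that is
    logically equivalent to spec over the test domain."""
    found = []
    for n in range(1, max_len + 1):
        for prog in product(INSTRUCTIONS, repeat=n):
            if all(run(prog, x) == spec(x) for x in domain):
                found.append(prog)
    return found

# Every way of computing 3*x + 1 in at most three instructions:
progs = equivalent_programs(lambda x: 3 * x + 1, 3)
print(progs)  # five equivalent three-instruction programs
```

With per-instruction costs attached, picking the best member of the set is then a simple minimum over the survivors, exactly the "at least one equal to or better than any other" argument.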
Now why we will accept 0, 1, or more results from a query and not from an optimization process is not clear to me. What is clear to me is that it is possible. Certainly it is a challenge. Consider what it means to the cause of reflective programming if we meet that challenge. One thing it does is reduce the user's performance risk in choosing a product. It's one less thing to worry about. From water@tscnet.com Tue, 27 Jun 2000 22:52:59 -0700 Date: Tue, 27 Jun 2000 22:52:59 -0700 From: Brian Rice water@tscnet.com Subject: Two for the road At 08:52 PM 6/27/00 -0700, Lynn H. Maxson wrote: >For a given primitive operator and associated >operands more than one machine instruction sequence >can produce a given (logically equivalent) result. >Of that set at least one is equal to or greater >than any other. That's logic. I'm only replying to statements which clearly lead to conclusions, which none of the rest of your statements have done. Please don't rant so without some technical proposals which are non-vacuous: simply applying to predicate logic without detailing the consequences or asking us questions about any information we may have gained is absolutely rude and sheerly arrogant. We've already discussed this on IRC, and you obviously have not listened. >First you have to have a means of producing the set >of logically equivalent machine instruction >sequences. That is not done in any existing >compiler or interpreter. No one reading this has >ever experienced this occurring in any product. No >dialect of Lisp, for example, with its "pattern >matching" capability has ever done this though it >certainly lies within its capability. If you'll read up on a system explanation I'm working on which you declined earlier to entertain, you'll find that we're working on that idea already in Slate, and it has nothing to do with the form of predicate logic (which is hardly uniform in the sense that Tunes abstractions must be). 
Predicate logic has scores of problems in terms of complexity, and higher-order predicate logic is just horrendous as a Tunes HLL. Just for the record, the preliminary notes on it are online at: http://diktuon.arrow.cx/show.php?ns=tutorial&name=User+level+types+in+Slate which is not explicitly the same as what you propose but it has much the same form and can do all of the same things, while relying on the simple basis that the Tunes HLL guide specifies. It hasn't been clear enough to you, obviously, that there's a clear defining aspect of the Tunes HLL that you ignore: uniformity of system abstractions. This is for the sake of simplifying reflection; by your standards, a GNU/Linux installation is reflective as long as someone delivers its Z specification with it (assuming one can be made). Specification in Tunes is addressed by all of the statements on the Tunes web site addressing formal proofs and verifiability, which you have not brought up. These proofs and such are to be delivered in the HLL itself ***without explicit bias of the language design towards predicate logic***! You completely fail to see our HLL idea as anything more than Yet Another language with the shortcomings you mention about Lisp, which is completely insulting. >Now why we will accept 0, 1, or more results from a >query and not from an optimization process is not >clear to me. What is clear to me is that it is >possible. Certainly it is a challenge. Consider >what it means to the cause of reflective >programming if we meet that challenge. One thing >it does is reduce the user's performance risk in >choosing a product. It's one less thing to worry >about. Quit preaching to the choir. All of this is already outlined in the Tunes web documents, although in poor form I admit. Now start discussing real solutions and their details please, because you're not suggesting anything more than what's been demonstrated in languages that have any form of partial evaluation capabilities.
And I seriously doubt you understand the range of the kinds of reflection possible. You seem narrowly focused on implementational reflection, which is just one of many. (Attempting to avoid this useless discussion as much as possible,) ~ From lmaxson@pacbell.net Wed, 28 Jun 2000 08:03:46 -0700 (PDT) Date: Wed, 28 Jun 2000 08:03:46 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Water over the bridge Brian Rice wrote: "I'm only replying to statements which clearly lead to conclusions, which none of the rest of your statements have done. Please don't rant so without some technical proposals which are non-vacuous: simply applying to predicate logic without detailing the consequences or asking us questions about any information we may have gained is absolutely rude and sheerly arrogant. We've already discussed this on IRC, and you obviously have not listened." For reasons not clear to me, since my first innocent venture onto the IRC I have managed to incense Brian to the point that at one time he "threw" me out of the session. I have been accused of making vacuous statements. I look at what I write and what Fare has written on the website relative to the Tunes HLL and in the glossary and must admit I am confused about what is so clear there and so vacuous in what I have written. When I did take the time to read the Arrow documentation I was informed by its author that it is "blue sky" and should be ignored. Clearly, however, in his eyes it is not vacuous. Apparently an initiation rite exists for us "newbies" so that we do not test the patience of our learned instructors. It involves reading all the archived material including the logs of the IRC sessions (where "we discussed this and you were not listening") as well as the websites and links along with the various doctoral theses that get referenced in this mailing list.
Then when you venture onto the IRC session you discover there is an entrance exam, which is not mentioned in the friendly and open invitation on the website, conducted by the "master" himself. If you fail to answer satisfactorily, then you are ejected from the classroom. But then again what can anyone who is "absolutely rude and sheerly arrogant" expect? I didn't enter this looking for a fight despite any differences that might exist. I had thought it was an open discussion among peers. I am not used to an environment where someone decides who is a peer and who is not. For me the object is to enable a peer relationship through my assistance, not to set up barriers for someone to overcome. "If you'll read up on a system explanation I'm working on which you declined earlier to entertain, you'll find that we're working on that idea already in Slate, and it has nothing to do with the form of predicate logic (which is hardly uniform in the sense that Tunes abstractions must be). Predicate logic has scores of problems in terms of complexity, and higher-order predicate logic is just horrendous as a Tunes HLL." To me logic is logic regardless of its form. That I might entertain the predicate logic of Trilogy and the Z-specification language over the clausal logic of Prolog reflects no more than a toss of the coin. It is not important as long as the language encompasses all of formal logic and the universe of objects. There is nothing occurring in Slate or in anything that it can achieve which is not based on logic. The charge is that my choice of predicate logic somehow violates the Tunes "uniformity" requirement. Moreover it has "scores of problems in terms of complexity" and is "just horrendous as a Tunes HLL". Of course if I happen to differ with that judgement, it apparently is not open to discussion, nor am I given any reference to the "scores of problems" or "horrendous" examples.
I am to accept the judgement without supporting evidence, leaving me with no means of rehabilitation. The truth is I don't buy it, as the evidence in the reference material I have (which includes the--at least one--Slate document) doesn't support the conclusions. I have some experience in programming in first, second, and third generation languages, all procedural logic (logic in programming), and only recently in fourth generation specification languages based on logic programming (programming in logic). I have training in rule-based AI systems as well as neural logic. I've simply said that I favor a specification language (fourth generation) as a Tunes HLL. My primary experience here has been with Trilogy, Prolog, and SQL. I like them because I can "specify" what I want and the controlling conditions without concern for their order. What you enter is an unordered set of specifications, from a single specification statement on up to a set of specification statements which could encompass an entire application system (or set of such systems) or an operating system. I as a user then am not forced into arbitrary "decompositions" dictated by the "scope of compilation" in a product as determined by its implementers. Instead I as a user determine that scope by my selection of input. That means as I grow from neophyte to master, as I incrementally increase the (unordered) set of input specifications to an ever increasing scope, the tool accommodates my dictates, my choices, my way of doing things. All I have to do is write a specification, something in the "small" that experience has shown humans can do with great accuracy (error free), and cluster them in the "large" that experience has shown software can do with great accuracy. It then allows the "best" of both worlds. Now Prolog makes the mistake made by every other compiler of limiting the scope of compilation to a single executable defined in a manner logically equivalent to an external procedure in C.
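The "unordered set of specifications" idea can be sketched with a dependency sort: the writer supplies specifications in any order, and the tool, not the writer, derives an evaluation order. In the Python sketch below the specification format is invented purely for illustration:

```python
from graphlib import TopologicalSorter

# An unordered set of "specifications": name -> (dependencies, rule).
# The format is invented; a real specification language is far richer.
specs = {
    "area":      ({"width", "height"}, lambda env: env["width"] * env["height"]),
    "width":     (set(),               lambda env: 3),
    "perimeter": ({"width", "height"}, lambda env: 2 * (env["width"] + env["height"])),
    "height":    (set(),               lambda env: 4),
}

# The writer's ordering is irrelevant; the tool derives an
# evaluation order from the dependency graph.
order = TopologicalSorter({name: deps for name, (deps, _) in specs.items()}).static_order()

env = {}
for name in order:
    env[name] = specs[name][1](env)
print(env["area"], env["perimeter"])  # → 12 14
```

Scaling this from four named values to whole systems is, of course, exactly where the hard work lies, but the shape of the idea — unordered input, derived order — survives intact.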
I propose eliminating that restriction, allowing the input to dictate the scope of compilation and thus the number of executables produced. This allows then a "unit of work", processed as a whole with syntax, semantics, proof theory, and meta theory, to be as large as the "comfort zone" of the user and to expand as that comfort zone expands. I guess I challenge any statement that the specification language used in this, which is the only language (necessary and sufficient) used in everything from defining itself to any of its implementations, is more complex or more horrendous than Slate as a Tunes HLL. The single language encompasses specification of machine architectures upward through all higher level (software) abstractions, any combination of which may exist as an input set of (unordered) specifications for processing as a "unit of work". Despite this I am not engaged in a "sales job" here of anything more than consideration of a specification language based on logic programming as the more appropriate Tunes HLL. Except as we can compare "examples" of the candidates I make no pre-judgments of the value of one over the other. The essence is to have a discussion in which all participants have their say and come away more enlightened.

"This is for the sake of simplifying reflection; by your standards, a GNU/Linux installation is reflective as long as someone delivers its Z specification with it (assuming one can be made)."

Again thank you for asking. But I think you have me confused with someone else. Not my standard.

From dhilvert@ugcs.caltech.edu Wed, 28 Jun 2000 11:39:45 -0700 (PDT)
Date: Wed, 28 Jun 2000 11:39:45 -0700 (PDT)
From: David Hilvert dhilvert@ugcs.caltech.edu
Subject: Proposals

Proposals for new or different ideas seem to sometimes meet with hostility or indifference on this list, as they do most places, I suppose. Specifying program behavior is a significant part of what TUNES is about, in my view. Look at section 3.12 (Program proof) of Fare's "Why a New OS?"
paper, for example (there may be better examples). I have been fairly impressed by the amount of work that has gone into the HLL candidate Slate. Its recognition of the importance of namespaces is valuable, I think. I have not heard from Brian or anyone else much in the way of detail as to how the user may specify program requirements for Slate, but I suspect that Brian considers this to be important, so perhaps you might ask him about it. In any case, Slate is not the final candidate for the TUNES HLL, so you are of course free to propose other ideas (and especially implementations). I must say that I am somewhat skeptical of the practicality of generating all possible implementations of a specification. Perhaps you could clarify what you mean or go into more detail. I may have misread what you wrote.

David

From lmaxson@pacbell.net Wed, 28 Jun 2000 15:12:56 -0700 (PDT)
Date: Wed, 28 Jun 2000 15:12:56 -0700 (PDT)
From: Lynn H. Maxson lmaxson@pacbell.net
Subject: Proposals

David Hilvert wrote: "I must say that I am somewhat skeptical of the practicality of generating all possible implementations of a specification. Perhaps you could clarify what you mean or go into more detail. I may have misread what you wrote."

Apparently skeptics abound. While I think I have addressed this in another response, let me restate it here. The only time that this occurs is when translating from the lowest (raw material) level of the machine-independent specifications into their machine-dependent ones: machine code generation. In truth that's what occurs in every compiler and interpreter today except that their generated code comes from a discovery process outside the system. While in that discovery process they may find more than one way to do something, they will look at their choices and select one for use by the compiler or interpreter. The point is that their discovery process is not exhaustive (even though for some in the profession it has evolved over quite a period of years).
What they discover then is more related to chance, to art, than to science. That means the consumers of those products become "innocent" victims. IBM, for example, has its compilers (C/C++, Cobol, PL/I) translate the source into a common "intermediate", machine-independent code. This code in turn becomes the source for optimization on a given target machine. Its language developers are responsible for accuracy only to the level of the intermediate code. Its optimizer developers assume responsibility for translating the intermediate code to target machine/operating system code. That they don't have an exhaustive true/false process accounts for changes that do occur in their optimizing choices over time. The question I would raise is "Why is the current system, which is so expensive in terms of money and time, regarded as 'practical'?" If I can on the one hand automatically generate all the logically equivalent possible translations from a primitive operator plus operands to machine code, in a matter (and at a cost) of seconds, automatically test them in terms of performance and address space, and select the one in that contextual instance which is optimum, why is that not considered practical? I have not invented anything. That's how AI expert systems (both rule-based and neural nets) work. That's how Prolog works. That's how SQL works. The key, of course, lies in writing the specifications for the logic engine to do the translation. In the system I propose the machine code (the instruction) exists as a specification like any other. The machine-dependent versus -independent distinction does not exist within the context of the specification: they look and act alike. I subscribe to a number of trade journals like the C/C++ Users Journal, Software Development, and others. It would seem almost monthly an article appears as a "warning" to express something in this way instead of that.
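The generate-test-select step described here can be sketched in a few lines. This is an illustrative sketch only: the instruction names and cycle costs below are invented for the example and belong to no real machine.

```python
# Sketch: enumerate every logically equivalent instruction sequence
# for one primitive operation, "test" each by a cost measure (here,
# a cycle count), and select the optimum.

# Invented, logically equivalent translations of "a * 8", with costs.
CANDIDATES = [
    (("LOAD a", "MUL #8"), 5),                          # generic multiply
    (("LOAD a", "SHL #3"), 2),                          # shift left by 3 (8 = 2**3)
    (("LOAD a", "ADD acc", "ADD acc", "ADD acc"), 4),   # repeated doubling
]

def best_translation(candidates):
    """Exhaustively examine every candidate; select the cheapest."""
    return min(candidates, key=lambda pair: pair[1])

sequence, cost = best_translation(CANDIDATES)
```

Real code generators prune this space with heuristics rather than enumerating it; the sketch shows only the exhaustive test-and-select step being argued for.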
If you have followed another response I received from Kyle Lahnadoski, then you know that he has discovered this in his experience with Oracle SQL. I would suspect that every one of these is traceable to a decision made by an implementer on "behalf" of a user. In my mind, with logic programming we can put an end to getting different runtime results from logically equivalent source. If you want to minimize what a user has to write, as close to his own way of thinking as possible, and not have him "punished" for implementation nuances, then how better than in a specification language? In it he need not be concerned with the order of his writing. He can meander to his heart's content as "thoughts, conditions, relations, goals" occur to him. He only has to have a "complete set" of them. I don't belittle the programmer due to his comfort zone with declaratives nor the user whose comfort zone excludes them. To me both are users. I'm not in the business of dictating their tastes. I'm in the business of "understanding" their expressions and, with minimal input on their part, attempting to produce an executable. If I could do it with natural language, I would. Otherwise I will select something as close to it as possible, hoping at some time in the future I will be able to cross that barrier and thus serve them better. As much as you admire Slate, and no matter how often I read and re-read the referenced material, I don't make a connection between it and a user's comfort zone. I appreciate the tone of your response. This is the only forum in which I have been flamed, called "uneducated", and dismissed as not worthy of the time involved. I don't have ideas that are better than anyone else's. That judgment rests upon how well they meet the tests against others. Having long promoted an egoless discussion environment, I was unprepared for the delicate egos in combination with my heavy footsteps. I do apologize for not having tread more carefully.
I keep trying to do better, but it still takes two to tango and only one to tangle. In truth I do prefer the former to the latter. It will take science a while to accept that the world is made of objects and not processes. They have the mistaken assumption that processes dominate and that objects are formed from their activity. I just toss this in as it seems that this is not one of those issues open for complete rethinking.

From jason@george.localnet Wed, 28 Jun 2000 16:16:52 -0700 (PDT)
Date: Wed, 28 Jun 2000 16:16:52 -0700 (PDT)
From: Jason Marshall jason@george.localnet
Subject: Proposals

> David Hilvert wrote: "I must say that I am somewhat
> skeptical of the practicality of generating all
> possible implementations of a specification.
> Perhaps you could clarify what you mean or go into
> more detail. I may have misread what you wrote."
>
> Apparently skeptics abound. While I think I
> have addressed this in another response, let me
> restate it here. The only time that this occurs is
> when translating from the lowest (raw material)
> level of the machine-independent specifications
> into their machine-dependent ones: machine code
> generation.

What you are describing is known as a 'brute force' method. All combinations are tested, looking for the optimal (or sometimes, only) solution. Perhaps you are familiar with Distributed.net? Using computing power equivalent to 60,000 reasonably new desktop machines, they are brute-force testing a piece of information only 8 bytes long, against another piece of information only a couple of kilobytes long, looking for the single solution that is the correct one. They've been running their calculation for 32 months.

> In truth that's what occurs in every compiler and
> interpreter today except that their generated code
> comes from a discovery process outside the system.

No, it isn't. Read on.
> The point is that their discovery process is not
> exhaustive (even though for some in the profession
> it has evolved over quite a period of years). What
> they discover then is more related to chance, to
> art, than to science.

There's a reason why they do this. They use rules of thumb, otherwise known as heuristics. For every good decision these rules of thumb eliminate, they eliminate thousands of bad decisions. They apply these heuristics only to small blocks of code, and still it takes that long. Since the blocks are small, any gross algorithmic efficiency errors made by the application programmer will merely be spackled over, not removed entirely. Are you familiar with the concepts surrounding the calculation of the order of complexity of an algorithm? It is my suspicion that you are not, and that this is the primary source of the difficulty in getting you onto the same page with the folks who have challenged your statements.

> IBM, for example, has its compilers (C/C++, Cobol,
> PL/I) translate the source into a common
> "intermediate", machine-independent code.

It is my understanding that virtually all modern compilers perform this transformation.

> That they don't have an exhaustive true/false
> process accounts for changes that do occur in their
> optimizing choices over time.

Some bad heuristics are replaced with better ones, and vice versa. *nods*

> The question I would
> raise is "Why is the current system, which is so
> expensive in terms of money and time, regarded as
> 'practical'?"

Because the order of complexity of the alternative makes people quake in their boots, laugh nervously, or roll around on the floor giggling insanely and pointing in your general direction? It's a mindbogglingly large calculation.

> It will take science a while to accept that the
> world is made of objects and not processes.

I thought the particle/wave debate died years ago, with the dualists winning?
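Jason's two quantitative points, the size of a brute-force key search and the growth of an exhaustive search space, can be put in rough numbers. The per-machine key rate and the all-permutations model below are assumed figures chosen for illustration, not measured ones.

```python
import math

# Brute force over an 8-byte key, with the 60,000 machines cited and
# an assumed per-machine rate of 2 million keys/second.
keyspace = 2 ** 64
total_rate = 60_000 * 2_000_000
years_full_sweep = keyspace / total_rate / (365 * 24 * 3600)
months_expected = years_full_sweep / 2 * 12   # key found halfway, on average

# Growth of an exhaustive search: under a simplified model where every
# ordering of an n-instruction block must be considered, the number of
# candidates grows factorially.
orderings_5 = math.factorial(5)     # trivial to search
orderings_10 = math.factorial(10)   # millions: still feasible
orderings_30 = math.factorial(30)   # astronomically large
```

With these assumed figures the expected search time lands in the neighborhood of the 32 months cited above, and the factorial column is the "mindbogglingly large calculation" in miniature.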
Have fun,

Jason Marshall

From lmaxson@pacbell.net Thu, 29 Jun 2000 12:31:23 -0700 (PDT)
Date: Thu, 29 Jun 2000 12:31:23 -0700 (PDT)
From: Lynn H. Maxson lmaxson@pacbell.net
Subject: Response regarding coding efficiency

Billy wrote: "You just finished claiming that no current system ever generates all of the logically equivalent solutions to a given specification. Now you're claiming that many of them do. These statements appear contradictory."

It's bad enough to be caught in a lie. It is even worse when it's a contradiction. Perhaps even greater when it's a "transmitter's" failure to communicate. This one, I think, we can resolve easily. The fact is that specification languages, e.g. Prolog, SQL, and Trilogy, include an exhaustive true/false test of the specifications. In order to do so they must first logically organize the specifications into an executable sequence and then they must convert that sequence into executable code. It is in this conversion of the executable sequence into executable code that they do not produce all possible logical equivalents. However well they do the former, they effectively shift gears in the latter, settling in some manner (mostly determined externally in another process) on a single solution. That's what occurs in code generation. Someone has made a decision from the set of (his) discovered logically equivalent code about which shall be used here. The issue is not that there is anything wrong with this or that it is unsuccessful. It must work because that's the way it happens. It's obvious that depending on who's doing the discovery process, how much time and resources they have available, and other factors in human software development, different "someones" will come up with different results. They will vary in performance and space. Now users don't develop application systems. They hire people to do it (custom software) or buy what someone else has produced (packaged software). In either case it is a "chancy" decision.
If we have then such a desire to be "user-oriented", why not remove the "chance" from this decision as far as performance is concerned? You see, there is a whole picture here. When concentrating on producing a language which "eases" what the user must write to get a desirable result, why not insure that the result offers the best performance: the best of the best of assembly language coding? Instead of offering him the best "here", offer it throughout the entire solution. You may find, for example, that OO methods through a GUI, in which the user uses prepackaged objects and arranges them through a means of interconnection, provide the easiest way a particular user can express what he wants. But if you really want to completely rethink computing systems, you have to ask whether it is necessary to maintain the "form" in the "result", which has led to a corresponding loss in performance. Simply because that's the way it has been up to now does not mean that it has to remain so. That means, through the principle of logical equivalence, that you can use OO methods in the description of what you want (your specifications) and yet produce an executable based on the finest of procedural assembly language coding. You do not have to take a performance hit in order to ease the user's effort. Our insistence on maintaining the "purity" of the OO form throughout, even though its source is no more than a specification, leads us into "regressive" software. Instead of adopting means that allow software technology to stay on pace with that of hardware, we in fact regress our technology in the belief (or the excuse) that increases in hardware technology will compensate for it. I guess I'm just saying that if we "completely rethink" what we do, we do not have to offer regressive solutions nor make excuses (nor consider it somehow wonderful that we execute at 50% of some optimized C++ compiler). The instrument is one of our own devising in logic: logical equivalence.
"This is utterly preposterous. You can't toss out an issue just by contradicting it." To spite you I could but I won't. You must understand that everytime an AI expert system or a Prolog program executes it does so in "real time" even though is has (logically) performed an exhaustive true/false test of every enumeration of input variable states. Whether it takes seconds, minutes, hours, or days. The machine I'm writing this with is a 200MHz Pentium Pro which leads me to believe that it is 200,000,000 times faster than I am at my age 68. If you don't buy that number, then pick a lower value, say 200,000. ["How do I love thee? Let me count the ways."] Consider the expression "(A + B) / (C * D**E). We already know that this infix operator form we will change to a postfix notation (a la Forth or RPN) as part of the internal optimization. Already we "ignore" what the user has written though we maintain its logical equivalence. Pick any machine architecture and I will guarantee that there are not 200,000 different ways of expressing the optimized form. There may be ten or a hundred or at least one (to insure that it is "true"). Have a program with a million such expressions of whatever complexity you like. No single application, including an operating system, exists with this number of computations. This is realizable in "real time", even if it is days. Certainly there are "huge" numbers in theory, but their hugeness is not one we find often in practice. There are few such that we have undertaken. The only one which comes to my mind is that of calculating prime numbers which has an infinite source. I cannot deny its happening, but our experience thus far indicates it is highly unlikely. Particularly for the finite, relatively small number of instruction sets to enumerate through. " The problem is that the human is seeking an optimum, and the computer is only trying to produce all of the solutions." 
It has been my experience that the computer does what we tell it to do, what we specify. If we have asked for all the solutions, it will provide them. If we have asked for the optimum in terms of performance, it will provide it. If we have asked for the optimum in performance and the minimum in space, it will provide it. If we have multiple results, we can ask to see them all, only the first or last, somewhere in between, or at random. It will provide it. The point here is that it will provide it through an exhaustive true/false test of all possibilities, something for which we may have neither the time nor energy, but nevertheless something for which we wish we did. That's why it's a tool, an extension of our own abilities. Together we make a system. As I type this I do not think of the computer as something separate, but of us working together as a system. It's very malleable when it comes to adapting to the dynamics of my needs. I keep referring to SQL as it represents a system which engages in an exhaustive search (true/false) which meets whatever conditions we set for it, even if it produces zero results (no true instances). Considering its use worldwide and the number of "working" application systems it supports, all of which occur in real time, for choices far in excess of those occurring in instruction sets, you should have some comfort in this exhaustive search practice.

"I have no clue what you mean by that last sentence. However, the rest of this paragraph is misguided. GO is a very good comparison, even though GO is infinitely less complex: a GO move does change the nature of many future moves, but it doesn't change the shape of the GO board."

All I was saying is that if we could "reasonably" handle the set of moves possible with either GO or chess, then application systems or operating systems would be a breeze in comparison, no matter how poorly we expressed them.
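The infix-to-postfix conversion mentioned earlier for "(A + B) / (C * D**E)" can be sketched with a minimal shunting-yard pass. This is a standard algorithm, simplified here to pre-tokenized, single-letter operands.

```python
# Minimal shunting-yard: convert infix tokens to postfix (RPN).
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "**": 3}
RIGHT_ASSOC = {"**"}    # exponentiation groups right-to-left

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":       # flush back to the open paren
                out.append(ops.pop())
            ops.pop()                   # discard the "("
        elif tok in PREC:
            # Pop operators of higher precedence (or equal, if tok is
            # left-associative) before pushing tok.
            while (ops and ops[-1] != "(" and
                   (PREC[ops[-1]] > PREC[tok] or
                    (PREC[ops[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(ops.pop())
            ops.append(tok)
        else:                           # operand
            out.append(tok)
    while ops:
        out.append(ops.pop())
    return out

tokens = ["(", "A", "+", "B", ")", "/", "(", "C", "*", "D", "**", "E", ")"]
rpn = to_postfix(tokens)
```

The output is the logically equivalent postfix form `A B + C D E ** * /`, exactly the "ignore what the user has written while maintaining equivalence" step described above.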
Here I am just looking at the degree of difficulty in determining the solution set from the problem set.

"I don't know what you mean by "completeness", for example. I spent a couple of years studying how completeness is impossible, and you're building an algorithm to detect it."

No, no, I'm building nothing; I'm simply using the term as it is used in logic programming. Completeness asks, "Have I been given enough information (facts, relations, conditions, etc.) to allow me to test the truth or falsity of an assertion (a goal)?" It is not complete in the sense that you mean, but only in this limited sense. For example, I assert that "3 = 5/27" (3 equals 5 divided by 27). We should note that this is a valid expression in logic programming languages, whose purpose lies in determining the truth or falsity of an assertion, while it is unacceptable (as an assignment statement) in a procedural language. Given that the logic engine knows the "meaning" of the terms, it has all the information it needs to determine the truth or falsity of this assertion. Thus it passes its "completeness" proof. So to continue with your next remark: if the completeness proof is successful, then the logic engine can proceed with the exhaustive true/false proof. It is not concerned with knowing all about anything, but only about that within a specified range. Within the small you can achieve a completeness that you might not achieve in the large. You're having difficulty with this exhaustive true/false proof. Yet this is exactly what occurs with every SQL query, every AI-based expert system, and every time you execute a Prolog program. If any of them seem to take too long, you abort them, determine where "your" logic caused the problem, correct it, and let them get on with the new "knowledge". Fortunately their backtracking assistance eases the task for you.
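The two phases just described, a completeness check (is the meaning of every term known?) followed by the true/false test, can be rendered as a toy sketch. The goal representation below is invented purely for illustration.

```python
from fractions import Fraction

# Known terms and their "meanings", as the logic engine would hold them.
KNOWN = {"3": Fraction(3), "5": Fraction(5), "27": Fraction(27)}

def complete(goal):
    """Completeness proof: every term in the goal must be defined."""
    return all(term in KNOWN for term in goal)

def prove(goal):
    """Only after completeness passes may we test truth or falsity."""
    if not complete(goal):
        raise ValueError("incomplete: goal mentions unknown terms")
    lhs, num, den = goal
    return KNOWN[lhs] == KNOWN[num] / KNOWN[den]

goal = ("3", "5", "27")   # the assertion "3 = 5/27"
```

The goal passes its completeness proof yet is proved false, which is exactly the separation being argued: completeness is about having enough information to decide, not about the assertion being true.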
"Second, I don't see how "our completeness proof" has anything to do with it -- according to you, "our completeness proof" has only one result, a boolean which we currently assume is "true"." With some reservations with respect to subgroupings a logic programming language like Prolog accepts an "unordered" set of input specifications (goals, facts, rules, conditions, etc.). The key here is "unordered", not ordered. It does not then "dare" execute them in the order of their appearance. It must reorder them. It does so by determining the highest level of goals and then works backwards (hierarchically downward) until it has established a path from the "necessary" knowns through all the intermediary unknowns (sub-goals) to the main set of goals (possibly only one). That says that it creates an "ordered" set from the "unordered" input. That's part of the reason for calling it "programming in logic" (doing what is logically necessary) and not the "logic in programming" of procedural languages like C. You tell it "what" it must do and it determines the "how". It produces an "organized logical system" (the how) from the given set of specifications. The process of doing so is part of the completeness proof, as in truth it may (true) or may not (false) have enough given in the specifications to do so. I emphasized the word "necessary" before to point out that the completeness proof only "selects" from the input set of specifications those necessary in meeting the goals, ignoring all "unnecessary" ones, which it "logically" determines. Now this completeness proof quite frankly is a thing of beauty and a joy forever. For a procedural language programmer like you in C it's like an emanicpation proclamation, freeing you from the slavery imposed by your own logic. Now I know that you know what control structures (sequence, decision, iteration) are. 
Ignoring for the purposes of illustration the fact that one may occur within the body of the other, consider that each control structure is expressed as a single specification. Now imagine that each specification is a card in a card deck. In logic programming, no matter how you shuffle them nor how often (to insure their unordered nature), the completeness proof will nevertheless organize them logically in the same manner, i.e. how you place them, where you put them, has no effect. This means then that I can transition from a "programmer", one who orders source in writing, to a pure "writer", one who writes source, leaving the ordering up to a "loyal, obedient, and damn accurate assistant". That leaves me to concentrate on the "small" of the source, which experience indicates I can do quite accurately, i.e. error-free, leaving it up to the assistant to put it all together in the "large", in which experience indicates my accuracy falls off. Moreover if I make an error, it always occurs within a "small". It may occur in several of them, but I only correct each one in isolation of the others. I may make "small" additions, deletions, and replacements, but the completeness proof will take that all into consideration in determining the logical order. What does it mean? It means that regardless of the size of an application system or an operating system I never have to "freeze" changes in the course of implementation. It means the difference in time between making a change and coordinating change among a set of programs within an application system has now been reduced to a matter of seconds and minutes from days, weeks, and months. In the instance of an operating system, depending upon how quickly you can write the changes, it means producing a new version daily (if desired) instead of yearly. Take my advice. Don't fight the completeness proof present in all of logic programming. It is the greatest labor-saving device since the invention of the symbolic assembler.
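The card-deck illustration can be sketched directly: shuffle an unordered set of specifications however you like, and a dependency-driven ordering pass produces the same logical order every time. The ordering function below is a simplified stand-in for the completeness proof, and the spec names are invented.

```python
import random

def logical_order(specs):
    """Order specifications by their dependencies alone; break ties
    alphabetically so input order can have no effect on the result."""
    specs = dict(specs)
    ordered, placed = [], set()
    while len(ordered) < len(specs):
        ready = sorted(name for name, deps in specs.items()
                       if name not in placed and deps <= placed)
        if not ready:
            raise ValueError("incomplete: circular or missing dependency")
        ordered.extend(ready)
        placed.update(ready)
    return ordered

# A tiny "deck" of specifications: a goal, two sub-goals, one fact.
deck = [("goal", {"sub1", "sub2"}), ("sub1", {"fact"}),
        ("sub2", {"fact"}), ("fact", set())]

# Shuffle the cards repeatedly; the logical order never changes.
results = set()
for _ in range(10):
    random.shuffle(deck)
    results.add(tuple(logical_order(deck)))
```

Every shuffle yields the single order `fact, sub1, sub2, goal`: how you place the cards has no effect, only what they logically require.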
"Roughly speaking, both paragraphs are saying, "We want to produce the best executable which meets the specifications, even if it has to be reflective." I realise that you think you got me. I really hate to disappoint anyone. More to the point we will produce the best executable because we are using reflective means. We always incorporate it within the production tool itself. Whether we extend it to the production itself is a matter of choice. The point is not to engage in an either/or trap, e.g. declarative or non-declarative. The point is to be able to use either as best suits the situation. We want to avoid exclusionary means in considering a solution. "So when you say, "Here we do not have the explicit invoking that we had before," what you mean is "We don't have a proof that the source produced the binaries which we ourselves produced." No, it's nothing that complicated. When a higher level function invokes a lower, e.g. "a = c - sqrt(b);" it does so by using its name ("sqrt"). This is an "explicit" reference. It is the only kind used in creating the "functional" hierarchy. Everything expressed in these eventually decompose into control structures and primitive operators, the raw material of all software assemblies. Now having all this logically organized we must convert it to an executable form. This means we must determine the target machine codes for our control structures and primitive operators used. A "+" operator in the expression "a + b" requires a sequence of "loading" both a and b (determining their location) and then performing an addition. Now the "+" works depending of the nature of "a" and "b", whether integer, real, decimal, binary, etc. So "+" is not an explicit call in the sense that it is a name of a lower level function, but one which we must "discover" from the set of possibilities (the instruction specifications). Thus the "+" is an implicit reference, a contextual one actually, that we must determine. 
Somehow over (and during) the years we (as humans) have been doing this successfully, but not necessarily optimally. It is not a question of determining the rules, because rules are what run code generators now. But the known rules are sometimes contradictory in terms of what is optimal; some instances are more optimal than others. The challenge, and I don't make light of it by any means, is entering a set of rules as specifications, as conditions for the selection choices, as goals for the overall process. That means applying logic programming to code generation as a reflective programming means to guarantee consistent, optimum results regardless of source. In a sense it is a two-step completeness proof. The first step produces an optimal, logically equivalent form of the source as a logically executable sequence of control structures and primitive operators. The second step produces an optimal, logically equivalent form of a "true" executable sequence of control structures and primitive operators. Now in a rush to judgement we might desire to make the two steps here into one. Remember that the first step produces a machine-independent set of control structures and primitive operators while the second produces a machine-dependent one. The separation allows us not only to do one without regard for the other, but also to repeat the other for any target environment. Notice that we can do this from any source environment. This is cross-platform to a level that no C implementation has been able to travel.

"Are we on the same page? If so, please continue. All of the algorithms I know of for this are NP-SPACE; that is, they have to search the whole space and store all results."

Basically, yes; we just apply different thought processes to our understanding.

"I don't understand -- are you implying that your optimiser distinguishes in some way between local and global optimizations? How does it make the distinction? The process I've outlined above certainly isn't doing that."
Optimization in software occurs by rules which govern the order. It occurs in every step in the path from source to executable. It is a field which is a specialty all its own. The most interesting I have seen recently is that of the Transmeta team with their "software morphing" on the Crusoe chip. Here they would change the optimum designed for an Intel Pentium to its logical equivalent form on the Crusoe. But they don't do it by hand (although they may have determined it that way, unfortunately); they do it in software by rules. It isn't "my" optimizer. It's the recognition of what already exists, applying the same process of specification to all those rules, and from this producing a common optimizer that guarantees the best in any given instance. If it's made available to every vendor (who will resist giving up any advantage to a competitor), the benefit to the user is obvious: it reduces the risk of his decision.

">I've been through this argument
>before, about general code versus specific code.

So what did you decide? (I don't see how these two arguments are even related, much less "the same argument.")"

In general I am against general code which for "esoteric" reasons offers more than what is specified. I follow the goal of "highest cohesion", which is no more nor no less than necessary. That I borrowed from the "master" Larry Constantine. If you followed the logic before on the unordered specifications and working in the small, then the "implied" effort of making a change which "enhances" a specification to meet an "enhanced" need is trivial. Therefore I do not have to "protect" myself from the "ignorance" of the user by imposing my greater "intelligence". I now have the means to track the user as he gets smarter. I don't have to anticipate his needs.

"It sounds like you disagree with me, but you're unable or unwilling to discuss that disagreement. Is that accurate?"
I hate to bury something this important in the middle of all this, but it does relate to the response just before this one. My major concern lies in bringing down the cost and time in software maintenance. There is little I can do about that required to write an initial set of specifications. That's a human activity that proceeds at a human speed. However, when it comes to making (and reflecting) changes in those specifications, in what we refer to as maintenance, then what I propose here in terms of reducing time and costs cannot be matched by any other means. Once you realise that more than 90% of development is spent in maintenance, in changing what you have already written, then writing in the small, allowing unordered input, and having a mechanism like the completeness proof together minimize what you have to do. This increases your productivity. Now Fare may not have thought of exactly this as part of his evolving more of the division of labor from people to machines (automation). But putting "reflective" in the tool means reducing the amount of it the writer has to assume, an assumption which requires considerable time.

"Simply unbelievable. This system, you claim, can produce correct behavior based on incorrect specs?"

No. It cannot produce anything not in the specs, which it produces accurately. If you don't like the result, then correct "your" error by entering a new specification. The specification is never in error. It is never incorrect. Don't act as if it had some choice in the matter. The system will produce exactly what you specify. The only one who can make a mistake is you. Fortunately, if you look at the entire software development process from input of requirements or changes which get translated into specifications, that's the only writing you do "within" the process. In the development stages of specification, analysis, design, construction, and testing, the only place you do any writing is in specification. The rest you get automatically.
Therefore the only thing you have to get right is the specification. If you don't, you will know it quickly enough (as well as being led patiently through your own logic by the backtracking mechanism). You correct it without having to bother with what anyone else is doing. Understand that no matter how large the application system or operating system being treated as a single unit of work in this environment, no more than one person is necessary to initiate it, regardless of the number concurrently maintaining specifications in the specification pool. Not tens, not hundreds, not thousands of people, just one. The scope of compilation is unlimited, from a single specification statement to the largest assembly of application systems. All you do is select the set of specifications, input them to the tool, and review the results. Among those results are the outputs (all automatically produced) of the remaining stages (analysis, design, construction, and testing). Logical equivalence at its presentation best. No need for CASE tools, UML, compilers, linkers, source documenters, etc. Just one tool (your assistant) and you. Capable of producing a custom operating system on demand in a matter of minutes or hours, depending on the overlap between its specifications and those already in the specification pool.

"Determine what?"

Determine if the result is as expected, with the time measured in seconds and minutes compared to days, weeks, months, and years currently. The specification is the earliest writing stage in any methodology. In this it is the only writing stage as well.

"Why would you need software to tell you the obvious? If you're the one who modifies the spec, you're the one who's responsible for updating the rest of the documentation."

Most installations do not have more than a partial attempt at writing formal specifications. They keep thinking that it will get fleshed out in the later stages of analysis, design, construction, and testing.
Programmers who operate in the construction stage seldom want anybody earlier doing their work for them. They will encourage the skipping or glossing over of the earlier stages, because they are done by dummies who have no programming experience. Thus "thinking" or scribbling on paper is not doing anything. You are only doing something when you are coding (regardless of how many times you redo it). This attitude is reflected widely in the profession. The most obvious evidence of it are the mis-named IDEs (Integrated Development Environments) offered by vendors, which cover no more than the construction stage. Within it the most obvious evidence is the ability to create a "new" project or source file. You cannot, in a continuous process where the output of one stage becomes the input of the next, have anything new appearing which is not the output of something earlier. What we have are isolated processes connected through people filters who may or may not translate correctly what is given them. The net effect organizationally is to be incapable of incorporating changes anywhere near the rate at which they occur. This results in a backlog. This is what OO was supposed to correct. Nevertheless the backlog continues to increase. So I simplify this organization to communicators, responsible to the users in terms of translating requirements and changes into formal specifications as well as writing user documentation, and developers, responsible for inputting a selected set of specifications into the tool. The only reason I have developers is because it is a "lesser skilled" position than one required for communication with users, which is far more important overall. In such a system those who write specifications are those who change them. It may be that the one who wrote it is not the one who changed it. Having a means of tying documentation to specifications is one way of synchronizing "people activities".
"Like literate programming, only exactly the opposite -- in litprog the programmer shows the computer the relationship between the specs and the program, and with your system it seems to be the other way around (although I don't believe that's possible or reasonable)."

Yes, fortunately you are correct. I solve the problem by eliminating the programmer, something I'm sure Knuth would be loath to do. As the system creates the programs (as part of the completeness proof) and the developer simply monitors an automated process, this leaves it up to the communicators to ensure that the documentation (of which the specifications are now an integral part) remains in sync. Is it reasonable? To me, yes. You are going to take some convincing.

"Okay. If the programmer's responsible for specifying things, I guess that makes sense. Like in Javadoc."

Hey, no programmer nor programming. They've been automated. One grand and glorious way of reducing software costs. Instead we have substituted people who can communicate and translate properly between informal (user) and formal (system) languages. It's a writing job down to two forms: user documentation and specifications. They both exist in completely mixed fashion within a data repository accessed through a directory.

">To correct that the user could request a "list" of existing ambiguous references. But everything's ambiguous if you're going to refuse to define a single meaning for everything."

But I'm not. Like any good Lisp advocate I exist in a world where something can have 0, 1, or more meanings. If it has 0, it is undefined or indeterminate. If it has 1, then it is singularly defined. If it has more, then it is multiply defined or ambiguous. Now getting back to correct and incorrect specifications (which their writers have incorrectly written). 0 means I have really got something to correct. 1 means great if that is what I expect, otherwise not so great. More means great if that is what I expect, otherwise not so great.
Again I refer you to SQL, which simply reflects what you have specified. If it doesn't meet with your expectations and your specifications are correct, then change your expectations to match your logic. Ambiguity simply indicates that more than one choice or path is possible. The system (again through its backtracking mechanism) explains the reason for the ambiguity. You can ask the system to act on it (produce the multiple results), hold off for the moment, or change the specification set in some manner that eliminates it. The point is that some ambiguity is acceptable and some is not. The system doesn't care. You do. Its job is not to dictate your judgement, but to reflect it. In order to reflect it, it keeps a "list" handy for your reference. You may know that you will need to correct an ambiguity before your effort is complete, but you (and the system) are willing to live with it until it interferes with something else you desire. You make that determination, not your tool (not fool) assistant.

"I like this, of course. I don't think it's possible, that's all."

As we near the end of today's journey, clearly neither of us is letting impatience interfere with the desire to understand the other. While others in Tunes may want to focus primarily on HLLs and operating systems and secondarily on tools, I choose to reverse them. I look at how it is possible to deliver an HLL or an operating system in the least time, at the least cost, and with the most in competitive ability. As admirable as the ends are, they can never exceed the abilities of the means. For those who might feel that an HLL is a means, I would point out that it is never more than its implementation: the means of achieving the means. No implementation should have a range less than the domain of the language. That means no compromise due to time or cost constraints anywhere along the path. It is the means available that determine the time and cost.
Therefore I give it a higher priority so I don't have to skimp on the ultimately more important ends. I thank you very much for your patience. I hope that I am closer to communicating and not confusing things. I will make the time. I see that I have another response for you which I will have to forego until later. Aside from its inappropriateness to discussions of this sort, I certainly wouldn't want to bring "noise" to the IRC channel. Until next time.

From lmaxson@pacbell.net Thu, 29 Jun 2000 17:26:41 -0700 (PDT) Date: Thu, 29 Jun 2000 17:26:41 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Proposals

Jason Marshall wrote:

To Jason Marshall

You offer an accurate definition of "brute force". We have had some problems which have been amenable to no other method. On the other hand we have found ways logically to keep us from going down "wasted" paths. The fact that something produces the same result as brute force does not mean it is an application of brute force. What I am describing is what occurs in every AI-based logic engine, an exhaustive true/false testing process. No more, no less. While certain problems have proven intractable to either brute force or more "intelligent" means, we have a host of them which have proven soluble, which we have certainly solved in reasonable time. In your example, the long running execution of the system may continue on forever, but that which allowed it to execute certainly has completed.

"...they are brute force testing a piece of information only 8 bytes long, against another piece of information only a couple of kilobytes long, looking for the single solution that is the correct one."

It's not up to me to question why someone has chosen a particular method. I would probably respond that first I would get rid of 59,999 machines.
I haven't talked with Gene Amdahl (or his brother Lowell) in quite some time, but remembering the great debate on parallel computing that took place between him and the University of Illinois at Urbana folks in the late sixties, I may not be that far off. Maybe I just don't understand what problem they are trying to solve, as it sounds like something offered with IBM's ImagePlus family of products for detecting (matching) patterns of a source image against a database of much larger images. I might suggest that the one group look to the other.

"Are you familiar with the concepts surrounding the calculation of Order of Complexity of an algorithm?"

The answer is "maybe, but I'm not sure." You see I make no claims to holding an "academic" interest in such things, for which some may find it "interesting", if not important, to have some measure of the "Order of Complexity" of an algorithm. I'm not sure I would have had the temerity to ask either my management or the user if they knew it either. For certain neither gave a damn. They only wanted it solved. Though I may have done some research to have a better understanding of causes and conditions, they did not care if I used brute force and discovered it by chance in the process, as long as I solved it in time. I don't want to make light of academic pursuits nor those in pursuit. I have been a member of the ACM since 1962 and a participant in computer-related matters in the IEEE since before it had formed the Computer Society. For most of that time what was published under those societies meant absolutely zip to me. That did not keep me from understanding the importance of continuing a publishing outlet that from time to time did create a practical communication link with the "outside" world. The point is that we have two different problems in mind. You may pursue an academic interest in providing measures which may or may not have any practical significance except as a comparison among a class of examples.
I accept that every programming language I am familiar with has optimizing (not necessarily optimal) code generation embedded within it. Very seldom do you run into an instance in which a valid statement in that language does not translate into a logically equivalent executable unit. I accept that it works and that with experience it will probably improve as much or more in the future as it has in the past. Instead I look at whatever result it produces to understand where (and why) it differs from the best code from the best assembly language programmer. I focus on this because, regardless of the way a solution is expressed or the language used to express it, the principle of logical equivalence exists. That tells me that regardless of the "nature" of the source I should be able to arrive at the best solution, if for no other reason than it exists or is known to exist. Now if I do that, I suspect that what I will find is not inefficient executable code. There is simply more of it somehow executed. That means I can skip over any concerns about code generation, because one way or another that one has been solved: an executable exists. What I need to understand is why there is "extra" executable code. I say "extra" because it is not required in the "best" example that is logically equivalent. Now if I examine the code, what will I discover? I will discover that if it is a block structured language like C or PASCAL, with procedures nested within other procedures, the "nesting boundaries" will remain in the executable code. For example, I will find references to the "stack" related to the preservation of those boundaries. If I do the same analysis on OO applications with their objects and message passing, chances are I will still see this "source" view persevere throughout to the executable level.
Now if I am going to engage in reflective programming, I ought to have enough reflection capability to know that once I am past the source, which is my user interface, I don't have to maintain that form if I can have a better performing logically equivalent one. I don't want to shock anybody, but long before Smalltalk and OO appeared on the scene we successfully produced software as complex and complicated as has ever been produced by OO means. Prior to that time, as a vendor you caught hell if your code production did not come close to assembler.

"Because the order of complexity of the alternative makes people quake in their boots, laugh nervously, or roll around on the floor giggling insanely and pointing in your general direction? It's a mindbogglingly large calculation."

Well, I'm happy if I provide some comic relief in these discussions. It's only fair. I have to admit to a fair amount of smiles resulting from reading the references suggested here. Real guffaws happen when I run across "preliminary" and a belly whopper when "it needs more work" occurs. I would guess that if we cannot gain insight from all the material, that entertainment gives it some "social, redeeming value". Of course, why should anyone who writes as much vacuous and superfluous and empty-of-meaning material gain so much humor elsewhere when obviously I have an overabundant supply of my own. This old dog who has been solving computer-related problems since June of 1956 has not given up on his joy in doing so. I began with doing "actual" repair of failing computer logic and moved on to software "repair" from there when the hardware started getting too small (and uneconomical) to repair. My coding career began with the actual (entering diagnostic programs from the switches on a console) and continued through all the language generations up to now. So "shoo if you must this old grey head, but watch your logical tack, he said".
From lmaxson@pacbell.net Thu, 29 Jun 2000 19:01:20 -0700 (PDT) Date: Thu, 29 Jun 2000 19:01:20 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Proposals

Billy wrote:

"Pardon, I deleted your story because it made no sense to me whatsoever. I mean, I understood the words, sentences, and paragraphs, but I don't understand why it has anything at ALL to do with your point."

The point is that it is an acceptable practice by at least one vendor to translate different language source to a single (machine-independent) format and then apply a machine-dependent optimization process. That's what's done today. I haven't proposed anything different except in terms of implementing the optimization process.

"Because you can't do that. It's impossible."

Therein lies the crux of the matter. Having programmed in assembler for a number of years and certainly having been exposed to the assembly language version of compiler output, I tend to think it more possible than you. I don't care what the complexity index is of a computing algorithm. What I do know is that it translates into an executable sequence in postfix operator notation. That's how a stack works. It may vary due to the availability or not of registers on a target machine, i.e. how many I load either directly to registers or to a stack before executing a sequence of operators, but the process has been "copied" since Burroughs introduced it in hardware in its B5000 series in the early sixties. Most of the languages have been fairly restrictive in terms of typing unless, like me, you have been privileged to program in PL/I. When you compute an expression of mixed operators and mixed operands (character strings, binary integers, variable precision fixed decimal, and floating point), you get to read some internal gyrations to get it all to come out correctly. Most other programming languages require that you explicitly convert types before using them in a common expression.
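[Editor's note: the postfix/stack evaluation Maxson refers to can be sketched as follows. This is an illustration, not part of the original exchange; Python is used only for concreteness.]

```python
# A minimal sketch of stack-based evaluation: an infix expression such
# as (2 + 3) * 4 becomes the postfix sequence "2 3 + 4 *", which a
# stack machine executes by pushing operands and applying each operator
# to the values on top of the stack.

def eval_postfix(tokens):
    """Evaluate a postfix token sequence with a simple operand stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))   # operand: load onto the stack
    return stack.pop()

print(eval_postfix("2 3 + 4 *".split()))  # → 20
```

This is exactly the evaluation discipline the Burroughs stack hardware embodied; register-rich machines merely keep the top of this stack in registers.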
Maybe I'm just used to a programming language world where they did not ask how difficult it was, just do it. I have offered a much broader look at the optimization "problem" in a response to Jason Marshall. If you followed that, the key is not in this code generation or optimization here, which frankly is a done deal (and has been since the first program), but in generating "unnecessary" code. This arises, IMHO, from the rather foolish need to maintain the boundaries of "objects" (including processes as objects) once "past" the source interface. Contrary to these ungodly volumes of logical equivalents through translation from one form (symbolic) to another (actual), the translation is one of "reduction", of tossing out, on this side of the user interface, that which is no longer necessary. I find it interesting that somehow this lies outside the "considered" range of reflective programming when it clearly exists in the "descriptive". I am doing or proposing no more than what Fare expresses in defining reflective. If it is "impossible", take it up with him.

"No, it's not -- you were correct when you claimed above that all modern systems use a process of discovery which is not contained in the compiler. AI systems in particular work ENTIRELY differently; I don't know where you get off claiming that neural nets have any sort of concept of logical equivalencies (neural nets are analog devices). Prolog is effectively just another compiler. SQL is an interpreted language, and although many companies try to accelerate it, they certainly DON'T try to apply all possible logical transformations to it."

Ah, where to begin? Prolog is effectively another compiler due to its implementation, something its advocates screwed up on in reducing a specification language to function in the construction stage only.
To have not done so would have meant incorporating the results of analysis and design (both of which derive directly from the completeness proof) between the specification process (translation of user input) in the specification stage (first step). You see they broke up (made discontinuous) a continuous software development process. They can hardly be faulted, because they had to compete with everyone else who was guilty of the same crime. To know if you are guilty, look at your development tools in construction to see if you can create new input source files, or if the scope of compilation is no more nor no less than a single external procedure. If the Prolog people then get off a "batch" compiler and go to an interactive tool which accepts an input specification set (which determines the scope of compilation) and automatically produces the results of analysis (dataflow) and design (structure charts) as well as the construction (completeness test) and testing (exhaustive true/false) which they currently provide, we (and they) will come much closer to having what we ought to have in any specification language. Having done considerable SQL programming in PL/I and COBOL and gone through the "binding" process required, I will assure you that it is not always interpretive (or at least not completely so). The biggest problem that you have from a performance view (again IMHO) lies in dealing with "purists" who allow nothing but third normal form. No sense trying to explain to them that the performance issue lies in minimizing physical i-o when they can play in such esoteric "heavens". As to rule-based AI expert systems, they work internally exactly like Prolog's logic engine. Divorce yourself from "all possible logically equivalent forms of an executable", accept that they have only one executable form (at a time), and that the exhaustive true/false only occurs with their input however generated or presented. They work the same. I have several neural network programs.
Though their graphical descriptions of boxes and connections suggest (to you) that they are analog, they are not. I have worked with analog computers. Believe me they operate differently. Unless you have a physical A/D converter in your computer, no program is performing analog operations: pure digital. I only continue to mention neural nets because Jason Marshall introduced the notion of a heuristic as a uniquely human activity. If I had a set of successful and unsuccessful results, I may train a neural net to "wing it" if I cannot come up with the necessary rules (only examples). Now I have been scolded more than once for even suggesting that this is possible with neural nets. The only reason I hold out some hope is my experience with W. Ross Ashby (another of the early cyberneticians) in his "Design for a Brain". The point is that you have to be willing to let a system fail (die) as "naturally" as survive. It is possible that it will never do what it was supposed to do. You can't say what it was designed to do, because you have no way of incorporating design (logic) in it. I reference you to Ashby's "homeostat". Now given that someone has allowed up to 60,000 desktop computers to pursue a task, is it possible to allow reflective programming to occur in which the behavior of the system, of the components within the system, will adjust the ratio of rule-based AI to neural-net depending upon the current "state" of the system? In other words, instead of insisting on their separation, can we possibly meld them dynamically in a cooperative process? Ah, but that is something else entirely.

"I agree that your goals are desirable. I disagree that they're possible."

In that sense we are agreed on one thing. What remains then is to resolve where we disagree. You may be correct. Maybe current technology (hardware or software) will not support it, so that we must wait until a more appropriate moment.
Or I may convince you (and the remainder of this austere audience) that if we engage in a common rethinking process where we allow the different thoughts to percolate, we might very well find the impossible possible.

"It's not? Why do you believe that the world is actually made of objects? Why should the world not be actually made of processes? Or perhaps some combination of the two?"

Well, I agree with the response offered by Jason Marshall, which is essentially what you suggest. The point for raising it lies in rethinking these things through: why should one view obscure the other? Why not a greater parity? I think when we talk about "classless objects" or "no multiple inheritance" (perhaps even no inheritance at all) or "namespaces" instead of the process world of "names", we insert unnecessary "blinders", keeping us from a more balanced and complete view of the composition of our universe, even if it is one of our own creation. I do believe that both exist concurrently (simultaneously) and that one has a logical equivalent in the other. This allows one, for user convenience (productivity), to exist at that level while transforming internally to a more productive form in execution. This view does not assert that one is "better" than the other, only that one may be more appropriate than the other in a particular context or for a particular use. At least until someone provides a unification. Again I hope I have communicated better on some issues without raising more confusion overall. If there is something you feel is yet incomplete or in error, then let's continue this process. I do it because each time I learn something else about what I thought I knew, as well as some things which I obviously didn't.

From jason@george.localnet Thu, 29 Jun 2000 19:13:22 -0700 (PDT) Date: Thu, 29 Jun 2000 19:13:22 -0700 (PDT) From: Jason Marshall jason@george.localnet Subject: Proposals

> Jason Marshall wrote:
>
> To Jason Marshall
>
> You offer an accurate definition of "brute force".
> We have had some problems which have been amenable
> to no other method. On the other hand we have
> found ways logically to keep us from going down
> "wasted" paths. The fact that something produces
> the same result as brute force does not mean its
> application. What I am describing is what occurs
> in every AI-based logic engine, an exhaustive
> true/false testing process. No more, no less.

I still suspect we're talking past each other. Perhaps we don't have the same definition of 'exhaustive'. When I or most of my peers say/think exhaustive, we mean 'look under EVERY rock', not just 'look under all the probable rocks'. When you start introducing algorithms that trim your search space, you're doing a directed search. Unless it is provably impossible for a particular direction to contain a successful 'hit', then you're using a heuristic, and no longer doing an exhaustive search. Any commentary regarding the rest of your comments hinges on the formal definitions, so I'll hold off on any commentary.

> I don't want to make light of academic pursuits nor
> those in pursuit. I have been a member of the ACM
> since 1962 and a participant in computer-related
> matters in the IEEE since before it had formed the
> Computer Society. For most of that time what was
> published under those societies meant absolutely
> zip to me. That did not keep me from understanding
> the importance of continuing a publishing outlet
> that from time to time did create a practical
> communication link with the "outside" world.

I can assure you I am not by any means an academic.

-Jason

From steeltoe@mail.com Fri, 30 Jun 2000 07:57:12 -0400 (EDT) Date: Fri, 30 Jun 2000 07:57:12 -0400 (EDT) From: Steeltoe steeltoe@mail.com Subject: Proposals

Hi Tunes people! I have never written to this mailinglist before, neither have I looked at all the documentation. However, I have read my fair share, even the Arrow Philosophy paper which I didn't quite understand, I must admit.
I have yet to read how Slate works, mainly because the webpages on it were not functional. I have seen the concepts of Haskell, Forth and Self, and must say those projects look very promising to me. That should be enough "I have"'s ;-)

I see now an unnecessary discussion going on with optimization that may not have much to do with Tunes at all. Let me just say that what is basically proposed is called 'inlining functions' in C++. I'm sure most of you know what this is; a static means of pasting small functions inside the scope of the caller in the compilation process. The optimization effects you get from this are astounding. No jmps, no unnecessary pushes and pops; the compiler may use only those registers and operators that the optimizer finds most appropriate, juggling these to a best fit. My experience here is with the excellent Watcom C++ compiler, which sadly loses economically to inferior competitors I don't want to name in this forum. The Watcom C++ compiler DOES try out many variations of code to see what "fits" best, just like a chess program tries to solve a chessgame (heuristic search with branch pruning). The output from this compiler is pretty to an assembler programmer who does not want to program assembler anymore. :-) The Watcom C++ compiler also supports using registers as arguments in ordinary (non-inlined) functions. However, you lose the flexibility of arbitrary registers here.

All such optimizations come at a cost however. The smallest exception here is inlining functions as I see it, where inlining a small function makes for smaller codesize and more efficient code (depending on the platform of course). When you make the compiler create optimized code, the OO architecture is being torn apart in the binary output. This has a great impact on dynamic bindings, as is the case with dynamic libraries and calls from code written in another language.
Each function that shall be accessible needs to be atomic in the sense of not being nested with other functions having cross-dependencies. They also need a common point of entry. Thankfully an inlined function may have its own unique entrypoint too, but what happens to the inlined code if you want to change that function after compilation? If you over-optimize your system, you will have to recompile your whole system when something changes. Especially when you don't know your dependencies.

Also, inlining all functions does NOT generate less code; however, it will generate more efficient code. The benefits of this diminish with the size of the function to inline, however. I believe the tradeoffs of the various C++ compiler makers are correct. Yes, you will have different results depending on which compiler you choose to compile with. That's the whole point of having different compilers. We don't have the Perfect Compiler yet. As a programmer I can live with that; there's not that much difference anyways.

Some optimizers use more general optimization techniques. Rolling or unrolling loops, inverting loops and "smart" references. These may very well create more trouble for you than you gain, since the C++ language is not a good language to formalize what constraints and capabilities you've got. There is always a limit on how smart the optimizer, and its implementors, can be.

I believe there is no perfect binary output. It depends on the values of the observer. If you believe that the most efficient code is the best code, then turn on all optimizations. Afterwards, you may want to review the assembly output. However, keep in mind that optimizing for Pentium processors is not something humans should do by hand. You may tweak all you like in this area, but I believe we have reached the peak here. There is not much more to achieve in this area (optimizing machine-code) IMHO.
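[Editor's note: Steeltoe's point about inlining can be sketched in a language-neutral way. This is my illustration, in Python rather than C++, with invented function names; a real compiler performs the substitution on the intermediate representation, not the source.]

```python
# Sketch of what a compiler's inliner does: paste the callee's body into
# the caller, eliminating the call overhead and exposing the combined
# expression to further optimization. The two forms below are logically
# equivalent, which is exactly why the transformation is safe.

def scale(x):
    return x * 3

def caller(a):
    # Original form: a call boundary the generated code must preserve.
    return scale(a) + 1

def caller_inlined(a):
    # After inlining: the body of scale() is pasted in; no call, and the
    # whole expression is now visible to the optimizer at once.
    return a * 3 + 1

# Same results, different cost profile:
assert all(caller(n) == caller_inlined(n) for n in range(100))
```

It also makes concrete the tradeoff in the paragraph above: once the body is pasted into every caller, changing `scale` after compilation means recompiling everything that inlined it.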
And if we're going to talk about optimizing in a broader sense, we should wait until we have something more concrete to optimize...

However, I do agree that a specification language is needed and should be discussed _in concreteness_. There's little to correct and build on in abstractions. I'm sorry to say I don't know what kind of a language Slate really is, so pardon my ignorance in this area. However, the meta-language of everything that exists can be described as a specification language. Thus this language/system could reside over everything. It doesn't have to be _executing_ the language (in this example Slate) that does stuff; it merely has to describe it _in relation_ to an execution unit (also described/mirrored in the specification language). Such a specification language will, in my mind, bind defined concepts with external interfaces to the environment of your system. In effect, these interfaces act as "primitives" and "gateways" in your system which concepts will try to mirror. I believe the Arrows Philosophy may help to get everything connected in this regard, or view the whole mess from a more analytical view, helping an AI unit (a person if we don't have one handy) to regulate the system without breaking things. There will always be limits to reflection; however, they should be torn down wherever we find them.

This is just some thoughts I have on the matter. I know I'm way over my head here, because everything seems to be interconnected and you need something to start with to get it all going (an LLL or just a C++/whatever program?). If you want this to be discussed further please say so, or I will hold my breath. I do not want to propose things that have already been "solved" for you, so you may want to point me in the direction where it is all found (not just read all IRC-logs, mailinglist-spam, etc; I don't have the time to search everything). No words are worth silencing in the long run.
Sincerely yours,
- Steeltoe

______________________________________________
FREE Personalized Email at Mail.com
Sign up at http://www.mail.com/?sr=signup

From youlian@intelligenesis.net Fri, 30 Jun 2000 09:21:23 -0400
Date: Fri, 30 Jun 2000 09:21:23 -0400
From: Youlian Troyanov youlian@intelligenesis.net
Subject: irc logs

Please somebody fix the irc logs. All the data since June 26th seems to be accumulated in 2000.0626, and I am not authorized to view the page, IE5 says. Also the new format (from June 25th, for example) is hurting my eyes. Too many "[0m"s.

Thanx,
Youlian

From jbowers@perspex.com Fri, 30 Jun 2000 09:30:33 -0400
Date: Fri, 30 Jun 2000 09:30:33 -0400
From: Joseph Bowers jbowers@perspex.com
Subject: Proposals Re: Declarative Syntax/Logical Equivalence/Code Generation debate

Perhaps a good way for Lynn to illustrate the ideas brought up would be to build a small prototype/demonstration system, or at least a concrete specification/set of design documents, and show them to the list. This might clear up many points of confusion/dispute...

Joe Bowers (jbowers@perspex.com)

From lmaxson@pacbell.net Fri, 30 Jun 2000 08:04:16 -0700 (PDT)
Date: Fri, 30 Jun 2000 08:04:16 -0700 (PDT)
From: Lynn H. Maxson lmaxson@pacbell.net
Subject: Proposals

Jason Marshall wrote: "...Unless it is provably impossible for a particular direction to contain a successful 'hit', then you're using a heuristic, and no longer doing an exhaustive search."

I don't know why this provides such concern. Fortunately this is not my invention but one which has developed over the years in AI logic engines. I don't know if we are in sync with respect to "heuristic", but certainly we are with respect to "directed search". Allow me to offer an example.

Spiders have 8 legs. Beetles have 6. I have a box containing spiders and beetles. In performing a count I come up with 46 legs. Now the question is how many beetles and spiders are in the box? The logical expression for this is: 8s + 6b = 46.
Now to set the conditions. The number of spiders or beetles must be equal to or greater than zero. Now the fact is that the number of spiders cannot exceed 5, nor the number of beetles 7. Thus the results for each must lie between 0 and 5 for the spiders and between 0 and 7 for the beetles. The resultant set of values (as there are two valid solutions) must lie within this range.

Now there are probably other ways I could instill "clever" logic, logic that I would use to avoid "brute force" on my part and that I was willing to transfer to the software so that it could avoid it on its part. Now this probably falls under your category of a "directed search". I have no argument with that. In turn it produces the same result as an "exhaustive search" using "brute force". They are then "logically equivalent". Certainly it is provably impossible for values above the calculated upper limits to provide a "successful hit".

Now this example comes straight from Trilogy, a DOS-based product (1988) written by a professor of mathematical logic, Paul Voda, now at a university in Slovakia. This product is based on predicate logic, which as you know means "For any..." or "For every...". The part I inserted about the upper limits is part of the setup or evaluation process encoded within its logic engine. In your terms it is a directed search. I agree. In my terms it is "logically" the same "as if" an exhaustive search had taken place. In that sense it satisfies the exhaustive true/false proof process of logic programming. I am in no mood to "scold" someone about "truth in advertising" if no deception has occurred.

It seems to me that this form of "self-examination" to avoid doing the unnecessary falls well within the scope of "reflective programming". Certainly we have the ability to express "for s and b greater than or equal to 0 and n = 46, determine s, b when 8s + 6b = 46", knowing full well that the results will return 0, 1, or more "true" instances.
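The directed search Maxson describes can be sketched directly (an illustrative translation by the editor, not code from the thread; Trilogy itself used predicate-logic notation, not Python). The loop bounds encode exactly the "provably unnecessary tests" pruning: a spider count above 46 // 8 = 5 or a beetle count above 46 // 6 = 7 cannot possibly hit.

```python
# Exhaustive search for the spider-and-beetle puzzle: 8s + 6b = 46,
# with s and b non-negative integers. The range bounds are the derived
# upper limits, so no provably impossible value set is ever tested.

def leg_solutions(total_legs=46, spider_legs=8, beetle_legs=6):
    solutions = []
    for s in range(total_legs // spider_legs + 1):       # s in 0..5
        for b in range(total_legs // beetle_legs + 1):   # b in 0..7
            if spider_legs * s + beetle_legs * b == total_legs:
                solutions.append((s, b))
    return solutions

print(leg_solutions())  # [(2, 5), (5, 1)] -- the two valid solutions
```

Pruned or not, the result is identical to an unbounded enumeration over all candidate pairs, which is the "logical equivalence" being claimed.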
Now in logic programming this expression is treated as an assertion. Rather than prove that it is true, they take the tack of proving that it is false. To show that it is false they (logically) enumerate all possible value sets and test each one. If every test fails, then they return 0 (false); otherwise they return the true instances. Not surprisingly, this 0, 1, or more is a "piece of cake" in list-processing languages like Lisp. For that reason many of the first AI expert systems were written in Lisp. In fact, as this lies at the "core", as it were, of Prolog, many of the first Prolog compilers were written in Lisp.

If we rewrite our spider-and-beetle assertion as 8s + 6b = n, note that for a given range of values, say 0 =< (n,s,b) =< 100, we can solve for all possible value sets of (n,s,b): three unknowns. As in our problem, in which we took only one possibility, we can solve for 2 unknowns (with one given): (s,b), (s,n), (b,n). Or for 1 unknown (with two givens): (s), (b), (n). Or for 0 unknowns: true or false. Furthermore we can use specific values, ranges of values, lists of values, or value sets. The setup to the logic engine is the same. It establishes the "stated" conditions plus some of its own to avoid "provably unnecessary" tests, and rearranges the equation into a logical equivalent for the particular type of solution desired in terms of givens and unknowns.

Now frankly I like the idea of having a single expression which I can invoke with any of these "condition sets". I, as the user, don't have to worry about "how" it is done. I only have to know "what" I want done. If you look at it, the form is quite close to what the user would have used in his algebra course. If he saw it with the conditions, it would look familiar to him. He would understand "what" happens even if he is unsure "how" to make it happen.

"Any commentary regarding the rest of your comments hinges on the formal definitions, so I'll hold off on any commentary."

Well, don't hold back.
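The "single expression, many condition sets" idea above can be sketched as a tiny relational solver (an editor's illustration, assuming a naive generate-and-test engine; real Prolog or Trilogy engines are far more sophisticated). Any variable passed as None is treated as unbound and enumerated over the stated range; bound variables are left alone.

```python
# One relation, 8s + 6b = n, queryable with any mix of givens and
# unknowns, in the spirit of logic programming. Unbound variables
# (None) are enumerated over 0..limit; the engine returns every
# value set that satisfies the assertion.

from itertools import product

def solve(s=None, b=None, n=None, limit=100):
    ranges = [[v] if v is not None else range(limit + 1)
              for v in (s, b, n)]
    return [(sv, bv, nv)
            for sv, bv, nv in product(*ranges)
            if 8 * sv + 6 * bv == nv]

print(solve(n=46))              # 2 unknowns: [(2, 5, 46), (5, 1, 46)]
print(solve(s=2, b=5))          # 1 unknown:  [(2, 5, 46)]
print(bool(solve(s=5, b=1, n=46)))  # 0 unknowns: a pure truth test
```

The caller states only "what" (the relation and the givens); the enumeration strategy, the "how", is entirely the engine's business.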
Admittedly they are preliminary and need a little work.

From lmaxson@pacbell.net Fri, 30 Jun 2000 08:28:53 -0700 (PDT)
Date: Fri, 30 Jun 2000 08:28:53 -0700 (PDT)
From: Lynn H. Maxson lmaxson@pacbell.net
Subject: Proposals

Steeltoe wrote: "...I see now an unnecessary discussion going on with optimization that may not have much to do with Tunes at all. Let me just say that what is basically proposed is called 'inlining functions' in C++."

No argument from me. Just delete all references in Tunes-associated documentation to (relative) performance. "Inlining" as a meta-programming option is worthy of discussion. I'm happy that we see eye to eye on using a specification language. Until a more widely published one comes along, just use Prolog.

From lmaxson@pacbell.net Fri, 30 Jun 2000 08:45:34 -0700 (PDT)
Date: Fri, 30 Jun 2000 08:45:34 -0700 (PDT)
From: Lynn H. Maxson lmaxson@pacbell.net
Subject: Proposals

Joseph Bowers wrote: "Perhaps a good way for Lynn to illustrate the ideas brought up would be to build a small prototype/demonstration system, or at least a concrete specification/set of design documents, and show them to the list. This might clear up many points of confusion/dispute..."

If you exclude AI systems, I have only two languages, Trilogy and Prolog. Trilogy is effectively defunct, while Prolog (PDC) is available for download. The text which I use for reference purposes is "Simply Logical: Intelligent Reasoning by Example" by Peter Flach.

The declarative issue (as I understand it) lies with the need for a user either to "explicitly" declare the attributes of a variable used in an expression, or simply to allow its use in the context of an expression to act as an "implicit" declaration of those attributes. It's no big deal to me in PL/I, as it supports both. As it does so, I do not understand why we should make an arbitrary choice when no choice is necessary.
I'm willing to drop code generation issues, as (1) solutions obviously exist, and (2) code generation only occurs after all other (logical) issues have been resolved. As the Tunes HLL (and the other broad areas within the Tunes project) has yet to resolve the logical issues, I say do that first and then the other. The principle of logical equivalence is embedded within formal logic itself.

From jason@george.localnet Fri, 30 Jun 2000 08:10:52 -0700 (PDT)
Date: Fri, 30 Jun 2000 08:10:52 -0700 (PDT)
From: Jason Marshall jason@george.localnet
Subject: Proposals

> I don't know why this provides such concern.
> Spiders have 8 legs. Beetles have 6. I have a box
> containing spiders and beetles. In performing a
> count I come up with 46 legs. Now the question is
> how many beetles and spiders are in the box?
>
> The logical expression for this is:
> 8s + 6b = 46. Now to set the conditions. The
> number of spiders or beetles must be equal to or
> greater than zero.

But this is a simple equation with only two free variables. Do you honestly think this scales up to 100 free variables? Are you familiar with the Travelling Salesman problem? A frugal salesman needs to visit 100 cities to peddle his wares. In the interest of improving his margins a little, he sets about determining the optimal path he can take between the cities, such that he drives the fewest miles possible to visit them all. This problem has been proven NP-hard, meaning no known algorithm solves it in substantially less time than calculating all possible routes and picking the shortest one. For a hundred cities, that's 100 factorial routes.

Now, take a program ten million lines long (these do exist, contrary to your earlier claim that they don't; Windows 2000, for instance, is over 50 million lines of code). Assume one 'choice' per ten lines of code, and you have a million factorial: a number bigger than a modern computer could count to in your lifetime.
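The scale of that blow-up is easy to make concrete (editor's illustration): even the 100-city case already produces a search space far beyond physical computation.

```python
# How big is 100 factorial, the number of orderings of 100 cities?
import math

routes = math.factorial(100)
print(len(str(routes)))  # 158 -- a 158-digit number of candidate routes
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, a mere 81 digits.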
This is the sort of monumental task someone trying to write a 'perfect' compiler faces. This is why most people resort to 'hill-climbing' algorithms (if you're trying to get to the top of the mountain via an almost-but-not-quite-shortest route, you pick the direction with the steepest incline in your vicinity, which gains you altitude fastest), or other greedy algorithms, which frequently get within ten percent of the optimal solution in a small amount of time. Now do you understand the sabre-rattling?

Regards,
Jason Marshall
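The greedy approach Marshall alludes to can be sketched for the salesman himself (an editor's illustration with made-up city coordinates; nearest-neighbor is the classic greedy TSP heuristic, polynomial-time but not exact).

```python
# Greedy nearest-neighbor heuristic for the Travelling Salesman problem:
# from the current city, always ride to the closest unvisited city.
# Runs in O(n^2) instead of examining n! routes, at the cost of
# possibly missing the true optimum.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(cities):
    unvisited = list(range(1, len(cities)))
    tour = [0]                      # start at city 0
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, cities[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(cities, tour):
    # Total length of the closed tour, returning to the start.
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (0, 1), (5, 0), (5, 1), (1, 0)]  # illustrative data
tour = nearest_neighbor_tour(cities)
print(tour)
print(round(tour_length(cities, tour), 2))
```

Hill climbing proper would go one step further: start from a tour like this one and repeatedly apply small swaps, keeping any swap that shortens the route, until no improving move remains.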