Lambda (was: Refactoring in Tunes)
Massimo Dentico
m.dentico@teseo.it
Thu, 20 Jan 2000 21:50:54 +0100
John Carter wrote:
>
> On Thu, 13 Jan 2000, Massimo Dentico wrote:
>
> > John Carter wrote:
> > > Careful, as far as I can see Joy has one good idea and that is superb,
> > > but it is not an idea that will solve all the problems Tunes is
> > > addressing.
> >
> > > The heart and joy of the Joy idea, (correct me if I'm wrong), is that
> > > by creating a simple homomorphism between the syntactic operation of
> > > putting one symbol next to another and function composition, you
> > > create something which is trivial to create an extensive algebra on.
> >
> > As a side note this abstraction maps directly to machine language also:
> > a composition of N instructions is given by putting one instruction
> > next to another in memory. In your opinion, what are the consequences of
> > this? (or .. please can you elaborate on "which is trivial to create an
> > extensive algebra on."? I'm interested.)
Thanks, John, for the interesting reply.
> The crucial paper is
> http://www.latrobe.edu.au/www/philosophy/phimvt/j04alg.html
I'm reading the Joy documentation slowly, so this pointer is precious.
> Firstly, let's discuss why having such a tidy algebra is so nice.
> 1) It allows extensive automagic rewriting of a Joy program without
> changing the semantics. ie. Automated refactoring becomes a
> relatively simple branch of algebraic simplification.
> 2) It allows far larger degree of program proving.
> 3) If it's simpler to prove things about your program, it's simpler to
> understand and debug it.
These are absolutely good properties; I agree.
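To make point 1 above concrete, here is a toy sketch in Python (my own
illustration, not from the Joy paper; the rewrite rules are just a few
obvious identities): a concatenative program is a plain list of symbols,
and refactoring is peephole rewriting with semantics-preserving laws.

    # Toy sketch (my own): refactoring as algebraic simplification.
    # A program is a list of symbols; each rule replaces one program
    # fragment by an equal one, so the meaning never changes.

    RULES = [
        (["dup", "pop"], []),        # dup pop   ==  id
        (["swap", "swap"], []),      # swap swap ==  id
        (["dup", "swap"], ["dup"]),  # dup swap  ==  dup
    ]

    def simplify(program):
        """Apply the rules left to right until a fixed point is reached."""
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES:
                for i in range(len(program) - len(lhs) + 1):
                    if program[i:i + len(lhs)] == lhs:
                        program = program[:i] + rhs + program[i + len(lhs):]
                        changed = True
                        break
                if changed:
                    break
        return program

    print(simplify(["2", "dup", "swap", "swap", "pop", "3", "+"]))
    # -> ['2', '3', '+']

Nothing here needs to know about variables or scopes; the rules apply
anywhere in the list precisely because concatenation is composition.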
> 4) One of the hairier features in a Scheme implementation seems to be
> the hygienic macros. (You have to "rename" variables to prevent
> accidental clashes with variables outside the macro. Having done
> so the resulting code is full of generated bindings that is not
> so nice to understand.) The "namelessness" of Joy gives you the
> hygiene for free.
If I'm not wrong, there is a similar problem with unification: in that
case the substitution fails thanks to the occur check. In some (most?)
Prolog implementations the occur check is omitted to avoid performance
penalties. Here, too, variable substitutions are the problem.
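A toy sketch in Python of what I mean (my own illustration, not real
Prolog): unifying X with f(X) is rejected by the occur check, while
without the check the "solution" is a cyclic term.

    # Minimal first-order unification sketch (my own illustration).
    # Variables are strings starting with an uppercase letter;
    # compound terms are tuples: ("f", "X") stands for f(X).

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def occurs(var, term, subst):
        if term == var:
            return True
        if is_var(term) and term in subst:
            return occurs(var, subst[term], subst)
        if isinstance(term, tuple):
            return any(occurs(var, arg, subst) for arg in term[1:])
        return False

    def unify(a, b, subst, occur_check=True):
        if is_var(a):
            if a in subst:
                return unify(subst[a], b, subst, occur_check)
            if occur_check and occurs(a, b, subst):
                return None          # fail instead of building f(f(f(...)))
            return {**subst, a: b}
        if is_var(b):
            return unify(b, a, subst, occur_check)
        if isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a) == len(b):
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst, occur_check)
                if subst is None:
                    return None
            return subst
        return subst if a == b else None

    print(unify("X", ("f", "X"), {}))                     # None: occur check fails it
    print(unify("X", ("f", "X"), {}, occur_check=False))  # {'X': ('f', 'X')}: cyclic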
> Now let's pick up the things in Joy that give us this.
>
> a) You're right. Each patch of machine code can be considered as a
> unary function from the entire state of the machine to the new state
> of the machine. And concatenating strings of machine code maps to
> function composition. Simplification number one to note is that Joy
> maps stack to stack rather than random access memory + bizarre
> registers to RAM + bizarre registers.
... and it is not so difficult to map stacks onto registers, as
Peter Knaggs shows in his thesis "Practical and Theoretical Aspects
of Forth Software Development" - chapter "5.6 Optimisation using
a Stack image" - p. 40 (but Joy quotations on the stack may be a problem)
- http://dec.bournemouth.ac.uk/dec_ind/pknaggs/thesis/index.html
Probably this technique is simpler than other register allocation
techniques, and quite efficient. I'm not an expert in these techniques,
so feedback (agreement/disagreement) is welcome.
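Here is a rough sketch in Python of the general idea as I understand it
(my own simplification, not the exact scheme in the thesis; spilling to
memory is omitted): the compiler keeps a compile-time image of which
register holds each of the topmost stack cells, so pure stack shuffling
emits no code at all.

    # Rough sketch (my own reading of the idea, not Knaggs' exact scheme).
    # image[-1] names the register currently holding the top of stack.

    class StackImage:
        def __init__(self, regs=("r0", "r1", "r2", "r3")):
            self.free = list(regs)   # registers not yet holding a stack cell
            self.image = []          # compile-time picture of the stack
            self.code = []           # emitted pseudo-assembly

        def push_literal(self, n):
            r = self.free.pop()      # no spill handling in this sketch
            self.code.append(f"load {r}, #{n}")
            self.image.append(r)

        def word(self, name):
            if name == "dup":        # duplicate top of stack
                r = self.free.pop()
                self.code.append(f"move {r}, {self.image[-1]}")
                self.image.append(r)
            elif name == "swap":     # pure renaming: no instruction emitted
                self.image[-1], self.image[-2] = self.image[-2], self.image[-1]
            elif name == "drop":     # no instruction emitted
                self.free.append(self.image.pop())
            elif name == "+":
                a, b = self.image.pop(), self.image[-1]
                self.code.append(f"add {b}, {b}, {a}")
                self.free.append(a)
            else:
                raise ValueError(f"unknown word: {name}")

    c = StackImage()
    for tok in ["2", "3", "swap", "+"]:
        if tok.isdigit():
            c.push_literal(int(tok))
        else:
            c.word(tok)
    print("\n".join(c.code))

In this example "2 3 swap +" compiles to two loads and one add; the
swap vanishes into the image.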
As a side note: with respect to simplicity, stack machines
win over register machines.
> b) Whilst machine code has this property, most HLL like
> C/Ada/etc. explicitly do _not_ have this property. I claim
> that to use reflection in any language that doesn't have this
> property is difficult.
>
> c) Related to this is this quote from "The Algebra of Joy"....
>
> If Q1 and Q2 are programs which denote the same function and P and R
> are other programs, then the concatenations P Q1 R and P Q2 R also
> denote the same function.
>
> To get this behaviour in anything with variable names is actually
> quite tricky. We do it, yes, but not nearly so simply. It's the utterly
> profound simplicity here that is absolutely striking.
>
> It's so blooming simple that you have to sit back and try this in
> whatever other language you can think of to realise how gobsmackingly
> profoundly simple this is. It looks trivial but try it in anything else
> you care to shake a stick at to realise that deep deep magic is going
> on here.
> p( q1( r( x))) = p( q2( r( x))) for all x OK that was easy.
>
> p(a, b, q1( c, d, r( e, f), g)) == p(a, b, q2( c, d, r( e, f), g))
> whoopsy, we are now saying a lot about the arity of the functions p, q1, q2, r
> that we didn't want to get involved with.
I think this example makes your argument self-evident.
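The quoted property can even be checked mechanically with a toy model
in Python (my own sketch; the particular programs are invented): a
program is a function from stacks to stacks, concatenation is
composition, so equal programs can be exchanged inside any context.

    # Toy model (my own sketch): programs are stack -> stack functions,
    # and concatenation of programs is composition of those functions.

    def concat(*programs):
        def run(stack):
            for p in programs:
                stack = p(stack)
            return stack
        return run

    def push(n):
        return lambda s: s + [n]

    def add(s):
        return s[:-2] + [s[-2] + s[-1]]

    def times(s):
        return s[:-2] + [s[-2] * s[-1]]

    q1 = concat(push(0), add)    # "0 +" : add zero, the identity on the top number
    q2 = concat(push(1), times)  # "1 *" : multiply by one, the same function

    p = concat(push(3), push(4)) # an arbitrary context P ...
    r = concat(add)              # ... and R

    print(concat(p, q1, r)([]))  # [7]
    print(concat(p, q2, r)([]))  # [7]  --  P Q1 R == P Q2 R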
> I'll admit to still being in deep thought as to whether Joy adds
> anything to the whole business of Category theory or not. I suspect
> not. I suspect better semantics for Joy would do a lot more in that
> department.
I admit that my understanding of CT is still too intuitive and
superficial to discuss this.
> d) Combinators and eval.
>
> If you want to write a very obfuscated Perl program, make sure you
> use the eval function. In fact every truly obfuscated program I have
> seen has some form of "eval" in it. There is something truly evil about
> eval.
>
> The only useful uses I have seen of eval have been for ...
> i) Allowing users to enter expressions instead of just numbers, you
> can then eval the expressions entered.
> ii) Working around language misfeatures.
> iii) Language extensions.
>
> Yet Joy centers around combinators and combinators are merely fancy
> forms of eval! If eval is evil in other languages, why does
> it become "good" here?
>
> Because in Joy reflection is so simple and hence ingrained that the
> operations of macro expansion, constant folding and code generation
> can be folded into the concept of combinators.
>
> Because the algebra is so simple and Joy is so trivially reflective,
> the clean algebra I talked of earlier extends to the
> combinators. ie. There is a clean algebra giving you the effects of
> your evals. In Joy eval is not evil, it's as clean as the rest of Joy.
>
> Forth and Postscript have similar properties.
This is another very good theoretical argument. We need to prove it
practically (but I suspect that it's strongly true).
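For instance, a toy Joy-like interpreter in Python (my own sketch)
makes the law  [P] i == P  directly checkable: a quotation is just a
program lying on the stack as data, and the combinator "i" executes it.

    # Toy sketch (my own): why "eval" stays clean in a Joy-like setting.

    def run(program, stack=None):
        stack = [] if stack is None else stack
        for tok in program:
            if isinstance(tok, list):      # a quotation: pushed as plain data
                stack.append(tok)
            elif tok == "i":               # the eval-like combinator
                stack = run(stack.pop(), stack)
            elif tok == "dup":
                stack.append(stack[-1])
            elif tok == "+":
                stack.append(stack.pop() + stack.pop())
            elif tok == "*":
                stack.append(stack.pop() * stack.pop())
            else:                          # anything else is a literal
                stack.append(tok)
        return stack

    p = ["dup", "*"]
    print(run([3, ["dup", "*"], "i"]))   # [9]
    print(run([3] + p))                  # [9]  --  [P] i  ==  P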
> The Bottom Line
> ===============
>
> The problem I foresee is that any HLL built on the LLL risks
> destroying these properties and hence something precious. ie. I would
> predict that for Tunes to succeed in its stated aims the HLL _will_ be
> exactly the LLL, but with an excellent suite of library routines.
Laurent Martelli wrote:
Laurent> I think I agree with this. I don't see the point of having
Laurent> a LLL and a HLL. But I think we need a fancy GUI so that the
Laurent> language can be user friendly.
Bill Tanksley also wrote (answering to John):
Bill> I would say you're almost right. I was doing some thinking
Bill> on this, though, and pondering what Forth could borrow from
Bill> Joy to make Forth better (and vice versa). I came to the
Bill> conclusion that a Joy-ified Forth would make the perfect LLL
Bill> to go together with a more sophisticated Joy as a HLL.
Bill> The LLL would not have GC, nor would it have many of the
Bill> useful things we were expecting in an LLL; it would simply
Bill> define a powerful VM and a reflection system, and everything
Bill> else (including the GC for the HLL) would be written in it.
Bill> The LLL would be designed at the same time as the HLL, so that
Bill> the two would have some similarities. The VM would be designed
Bill> on the basis of Machine Forth, to be similar to modern processors
Bill> but based on a stack system (at least two stacks).
I agree also. I think it's possible to blur the distinction between
LLL and HLL (making the LLL the core of the HLL). My idea is to select
carefully a small set of abstractions that map directly to the machine
level (1:1 or nearly 1:1) and to construct the rest on top of it. It is
not necessary that this set be the same on all real machines (see the
Tunes LLL pages). We can specify the HLL in terms of algebraic laws; in
particular, *every* abstraction is specified in terms of the others, so
given 2 distinct sets of abstractions (subsets of the HLL) that map
directly to 2 distinct real machines, it is *always* possible to
reconstruct the entire HLL.
I like Bill's idea of developing the LLL and the HLL at the same time.
It's also interesting to model (incrementally) the machine in terms of
the LLL/HLL, to prove useful properties about programs and to synthesize
programs (see, on the Tunes general mailing list, the thread "An approach
for implementing TUNES" - 1999/4/7 - the proposal of Brent Kerby "Iepos").
I strongly advocate the traditional Forth approach to portability:
a minimalistic core in assembly and/or metacompilation. I'm against
the use of the C language, because it is usually hard to verify a C
program and a C compiler (even the simplest C compiler is complicated
compared to a Forth environment). As you know, a C compiler is itself
often written in C, and in that case the HLL interpreter/compiler would
be too, so the problem is amplified.
For the same reason I don't like the approach of Brian with Slate:
the virtual machine (VM) used by Squeak seems too complicated. I think
it's possible to obtain the same expressive power with a very simple VM
(compared to the Squeak VM). It's only a conjecture, of course, but I
prefer to pursue the way of simplicity. In particular, IMHO, arrows map
quite well onto Forth-like words: a word definition is a composition of
other words, as an arrow is a composition of other arrows (or am I
wrong? I refer to the cited "A Crash Course in Arrow Logic"), so if "a"
is the composition of "b" and "c", written as "Cabc", we can
equivalently write ": a b c ;". We could require that a colon
definition (":") obey the same laws that hold for composition, making
the two notations equivalent.
Moreover, we can see a colon definition as an s-expression, but without
the explicit mention of parameters (only constants are mentioned).
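A small sketch in Python of this reading (my own illustration; the word
names are invented): a colon definition is nothing but the composition
of the bodies of the words it mentions, so it obeys the laws of
composition (associativity, identity) by construction.

    # Toy sketch (my own): ": name body ;" as composition of word bodies.

    words = {
        "dup": lambda s: s + [s[-1]],
        "*":   lambda s: s[:-2] + [s[-2] * s[-1]],
        "1+":  lambda s: s[:-1] + [s[-1] + 1],
    }

    def colon(name, *body):
        """Define 'name' as the composition of the body words."""
        def composed(stack):
            for w in body:
                stack = words[w](stack)
            return stack
        words[name] = composed

    colon("square", "dup", "*")           # : square  dup *  ;
    colon("next-square", "1+", "square")  # : next-square  1+ square ;

    print(words["next-square"]([4]))      # [25]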
> The main area where thought is needed, I think, is how you create a good type
> system and genericity on top of the Joy basis.
In your opinion, is it possible to go a step beyond Cayenne, toward
an even stricter unification between the language for expressing values
and the language for expressing types?
About type inference, I suggest the soft typing approach, where the type
inferencer constantly presents the inferred types to the user, who can
then write type annotations *only* where necessary.
See "Pratical Soft Typing", by Andrew K. Wright
- http://www.cs.rice.edu/CS/PLT/Publications/thesis-wright.ps.gz
and others related pubblications from Rice PLT:
- http://www.cs.rice.edu/CS/PLT/Publications/
and "Incremental Type Checking", by Emma van der Meulen
- http://www.cwi.nl/~gipe/language_prototyping/inc.html
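To illustrate the workflow I have in mind, here is a toy sketch in
Python (my own; a caricature of soft typing, not Wright's system): the
inferencer reports what it can prove, and only the places where it
fails call for an annotation (or, in real soft typing, a runtime check).

    # Toy sketch (my own): infer what you can, ask for annotations
    # only where inference comes up empty.

    def infer(expr, annotations):
        """expr is a nested tuple; returns 'int', 'str' or None (= unknown)."""
        kind = expr[0]
        if kind == "lit":
            return "int" if isinstance(expr[1], int) else "str"
        if kind == "var":
            return annotations.get(expr[1])  # None if the user gave no annotation
        if kind == "+":
            left = infer(expr[1], annotations)
            right = infer(expr[2], annotations)
            if left == right == "int":
                return "int"
            return None                      # unproved: annotation or runtime check

    program = ("+", ("lit", 1), ("var", "x"))

    print(infer(program, {}))            # None  -> the tool points at x
    print(infer(program, {"x": "int"}))  # 'int' -> an annotation *only* there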
I hope that these ideas are expressed clearly and don't seem
too stupid or naive! :o)
--
Massimo Dentico