Joy, Lambdas, Combinators, Procedures

btanksley@hifn.com
Fri, 18 Feb 2000 18:01:17 -0800


> From: iepos@tunes.org [mailto:iepos@tunes.org]

> I'm going to do a bit of clipping in my reply...

Cool.  This makes the message a reasonable length!

> > Having a finite number of "primitives" is
> > not required by any theory you've stated; it's simply an arbitrary
> > restriction you've placed upon yourself.

> This statement reveals a misunderstanding of what I mean by 
> "primitives".
> By a "primitive" of a language, I mean an expression of the language
> which has a precise meaning that is not determined by the meaning
> of its parts. If a language has a scheme of expressions with
> similar meaning, then those expressions are not primitives
> (by the definition of what I mean by "primitive").

That's a fascinating definition.  Yes, it makes sense.

> When I mean "primitive" in this sense, I'll try to say "word" from
> now on, since I may use "primitive" in other ways sometimes.

Okay.  I'll try to observe that as well.

What's kind of odd is that your first definition of your purely applicative
language included the phrase "a finite number of primitives," and I've been
trying to figure out what you meant ever since.  Evidently it was merely a
typo, since in your definition it's impossible to have an infinite number of
primitives.

> > > Well, your "stack(--)" construct would then be much like a 
> > > lambda construct of applicative systems (note that it requires the
> > > use of variables like "a", "b", and "c"). This would 
> > > probably cause complications in proof systems.

> > Not at all, because the pseudovariables would not 
> > intermingle at all with
> > functionality.  It's nothing more than a new naming 
> > convention for a subset
> > of the combinators, really.  The 'C' combinator under this 
> > system has the
> > name 'ba' or 'stack:(ab--ba)'.  (In my original design and 
> > implementation
> > under Pygmy Forth I accepted input of the simple form 
> > 'abc--cba' rather than
> > requiring the 'stack:()' notation.)

> I can see that you like to think of "ab--ba" as a word, just another
> way of writing "swap", rather than as a complex expression formed
> from "--".

Right.  Although I also recognise that it IS a complex expression, of
course.  The point is that it's perfectly conceptually correct to picture it
as a primitive word, in which case the system can have an infinite number of
primitives.

> If you only used this abbreviation for a few stack-operators
> (i.e., if you renamed "dup" to "a--aa", "pop" to "a--", and a few
> others), then it really would make little difference to the system.

Also correct.

Keep in mind that I'm not interested at this point in things which "make
little difference".  The interesting thing about this addition is that it
makes no difference to the number of programs which can be written or to
their provability, but rather makes it easier to write them without making
it harder to read.

> On the other hand (and this is the original point I was trying to get
> across), if in general you accepted all expressions of the 
> form "...--..."
> in the system, then this would be quite a change to the system.
> The system could no longer be implemented in the "switch-case"
> way that I suspect it is, since there would be an infinite number
> of primitives of the form "...--...". 

Almost correct.  The system already can't be implemented as a switch-case;
its implementation is like Forth's:

while (word := next_word(source)):
    if word in dictionary:
        execute(dictionary[word])     # known word: run its definition
    elif is_number(word):
        push(number(word))            # otherwise try it as a number literal
    else:
        error(word)                   # neither: report an error

The new addition I'm pondering is changing this from a sequence of if
statements to a list of tests and actions, like this:

def lex(source):                            # the system's lexer (a method in the original sketch)
    while not source.eof():
        for parser in parser_list:
            internalSource = source.copy()  # give each parser a fresh copy to try
            lexeme = parser.lex(internalSource)
            if lexeme:
                return lexeme               # first parser that matches wins
        raise SyntaxError("no parser matched the input")
    return None

There will be extensive in-language support for parsing and lexing, of
course, to make it easier to define complex parser_list additions.

One reason I want this is that I'd like to be able to imitate the way Rebol
handles some of its things.  For example, %filename.txt is actually a
reference to the named file; http://www.rebol.com is the named site, and can
be read from via the usual 'read' functions.  It's _really_ cool how they do
it, and I think I can do way better.
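
To make that concrete, a parser_list entry for the %filename.txt style of
token might look roughly like this (just a sketch; the source interface and
the FileReference value are placeholders, not settled design):

class FileTokenParser:
    """Recognise Rebol-style file tokens such as %filename.txt."""
    def lex(self, source):
        word = source.next_word()           # placeholder source interface
        if word and word.startswith('%') and len(word) > 1:
            return FileReference(word[1:])  # a value standing for the named file
        return None                         # no match: the next parser gets a turn

parser_list.append(FileTokenParser())

The point is that this kind of fallback is just another entry in the list,
not a special case wired into the interpreter loop.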

> In other words, whereas now Joy has specific support for just a few
> stack-operators like "swap" and "dup", if this new "--" notation were
> adopted in general, then it would not be enough for Joy to support
> just a few of the expressions of the form "...--..."; it would have
> to support them all. This would likely be done in one of 
> these two ways:

Actually, it would have to support a finite-length subset of them all.  I'm
not a stickler for impossible things.  The very fact that I'm only
specifying lower-case alphabetic letters on the left hand side limits the
complexity to 26 inputs, and I'm not interested in more (although I don't
see a reason to limit it to fewer, even though I strongly hope nobody ever
uses that many).  It's possible to have as many outputs as you want, but
it'll consume a huge amount of stack space for no obvious purpose (I doubt
I'll artificially limit it, though; if someone wants to write "26DUP"
they're free to do so).

>   1) by adding special support in the interpreter loop to handle
>      execution of expressions of the form "...--...".
>   2) by preprocessing the program, replacing expressions of the
>      form "...--..." by equivalent expressions formed from
>      just "dip", "swap", "dup", and "pop" (and possibly a few
>      others).

Both of these are wrong, as I mentioned above.  The main reason I mentioned
the stack:() notation was to explain another, more interesting idea: that
the action to take when the usual lexing fails doesn't need to be printing
an error message.

> I'm guessing you'd prefer the second approach (a textual preprocess
> may not really be necessary; the replacement could happen as
> part of the parse, if the system parses its text before
> executing it; or, it could do the replacement at the last
> second, when interpreting the expression); either way would
> require significant modification to the system. But still,
> it would not necessarily be bad.

That's a good way of looking at it, so long as you don't take the word
"preprocess" too literally.  It's almost literally postprocessing, actually;
it only tries to interpret a word as a stack:() notation if it doesn't make
sense any other way.
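
Concretely, the fallback itself is cheap.  A sketch (assuming the data stack
is a plain Python list with the top element last; none of the names here are
settled):

def try_stack_shuffle(word, stack):
    """If 'word' looks like 'abc--cba', perform the shuffle and succeed."""
    if '--' not in word:
        return False
    inputs, outputs = word.split('--', 1)
    if not (inputs.isalpha() and inputs.islower()
            and len(set(inputs)) == len(inputs)):
        return False                      # left side: distinct lower-case letters
    if any(c not in inputs for c in outputs):
        return False                      # outputs may only name the inputs
    if len(stack) < len(inputs):
        raise IndexError("stack underflow in " + word)
    taken = stack[-len(inputs):]          # rightmost letter is the top of stack
    del stack[-len(inputs):]
    binding = dict(zip(inputs, taken))
    stack.extend(binding[c] for c in outputs)
    return True

With that in place, "ab--ba" behaves exactly like swap, "a--aa" like dup, and
"a--" like pop, without any of them needing a dictionary entry.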

> I'm still quite interested in knowing of an "abstraction algorithm"
> that could be used to rewrite "...--..." expressions using
> just "dip", "swap", and such. You've hinted that "dip" will
> be important, although surely other operators will also be needed.

I don't have the algorithm written out, although I'm sure I can find it if I
need to; I've seen it before.  I was actually planning on working it out by
hand, since it's not immensely complicated as such algorithms go :-).  If
you'd like to see it, I'll try to schedule some time to work it out, and if
I can't, I'll look it up again.
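
In the meantime, here's a brute-force stand-in rather than the real
algorithm: search over short sequences of swap, dup, pop, and their dip'd
forms by simulating each on a symbolic stack until the desired shuffle
appears.  It finds rewritings for small shuffles but says nothing about
doing so efficiently (all names below are mine, purely for illustration):

from collections import deque

def dip_name(base, depth):
    for _ in range(depth):
        base = '[' + base + '] dip'       # depth 1 turns swap into [swap] dip
    return base

def apply_op(stack, base, depth):
    s = list(stack)
    need = depth + (2 if base == 'swap' else 1)
    if len(s) < need:
        return None
    i = len(s) - 1 - depth                # the element the (dip'd) op acts on
    if base == 'swap':
        s[i], s[i - 1] = s[i - 1], s[i]
    elif base == 'dup':
        s.insert(i + 1, s[i])
    elif base == 'pop':
        del s[i]
    return tuple(s)

def rewrite_shuffle(spec, max_ops=5, max_depth=2):
    """E.g. rewrite_shuffle('abc--cab') returns 'swap [swap] dip'."""
    inputs, outputs = spec.split('--')
    start, goal = tuple(inputs), tuple(outputs)
    ops = [(b, d) for d in range(max_depth + 1) for b in ('swap', 'dup', 'pop')]
    queue, seen = deque([(start, [])]), {start}
    while queue:
        stack, prog = queue.popleft()
        if stack == goal:
            return ' '.join(dip_name(b, d) for b, d in prog)
        if len(prog) == max_ops:
            continue
        for b, d in ops:
            nxt = apply_op(stack, b, d)
            if (nxt is not None and nxt not in seen
                    and len(nxt) <= len(start) + len(goal)):
                seen.add(nxt)
                queue.append((nxt, prog + [(b, d)]))
    return None                           # nothing found within the bounds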

> I'm not sure what you mean by "primitive functions" (combinators are
> a kind of function);

The problem is the use of 'primitive'.  Let's skip this point for now; it's
become too confused to be discussed.

> > > The precise meaning of "combinator" may vary a bit among uses, but
> > > calling all functions "combinators" is getting a bit nonstandard.

> > This is practically true for applicative languages, but 
> > breaks down almost
> > completely for concatenative ones.  In such languages, ALL 
> > words (including
> > so-called combinators and special forms) are actually 
> > functions from a stack to a stack.  

> This is true (except for informal primitives like "put" and "get",
> which can't really be modelled as just functions from stacks 
> to stacks);

Actually, they _can_ be modelled that way.  The problem is that some other
things may be affected, and the accuracy of the modelling is less than it
would be with other words.  But in all cases the inaccuracies are clearly
delimited, affect only that one word, and can be easily dealt with
theoretically.

This is as opposed to words like @Input in your language, which cannot be
dealt with in the same way as functions returning values.

> but, usually by a "combinator" in Joy, I am just referring to
> things like "swap", "dup", "pop", and maybe "dip" and "concat"
> and some others, but not "+", "-", "1", "2", and such.

> This use of "combinator" with respect to Joy is a little vague;
> but, in case it is not clear, in an applicative system, by
> "combinator", I mean a function that takes some parameters
> (through currying), and results in some application
> of those parameters to each other (i.e., a "combination" of the
> parameters). A few examples of combinators:

In that case, 'dip' is the only combinator in the above list, since it's the
only one which actually results in any application.  I think another good
definition of 'combinator' is: a function which can take other functions as
parameters.  Thus + isn't a combinator, but swap or drop can be; dip always
is. This definition works with your language as well, although I'm happy to
agree that it doesn't need to.
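
To illustrate that reading of "combinator" with stack functions (a sketch
only; quotations are modelled here as Python functions from stack to stack,
which is not how Joy actually represents them):

def dip(stack):
    """[P] dip: run the quoted program P underneath the top element."""
    quotation = stack.pop()     # the quoted program, here just a function
    saved = stack.pop()         # set the old top aside
    quotation(stack)            # a parameter gets applied: that makes dip a combinator
    stack.append(saved)         # then restore the old top

def plus(stack):
    """+ only consumes and produces values; it never applies a parameter."""
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

# Example: 1 2 10 [+] dip  ==>  3 10
stack = [1, 2, 10, plus]
dip(stack)                      # stack is now [3, 10]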

>   - "f\f f": a function that takes a parameter "f" and 
>     results in "f f",
>     the application of "f" to itself. (This is sometimes called
>     the mockingbird combinator).
>   - "x\y\x": a function that takes two parameters "x" and "y" (through
>     currying) and results in "x". In other words, this combinator
>     takes a parameter of "x", and results in a function that
>     always returns "x". This is sometimes called the K combinator.
>   - "x\y\z\ xz(yyy)(zx)". This is a weird combinator with no common
>     name and seemingly little significance.

Cool.  So you also have a compact notation for your combinators (which you
could add to a variant of your language in the same sense that I've
considered adding the stack:() notation to mine).
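
For reference, those three examples written as plain lambdas (Python used
purely as a notation here; application is left-associative, so "xz(yyy)(zx)"
reads as ((x z) ((y y) y)) (z x)):

M = lambda f: f(f)                          # "f\f f", the mockingbird
K = lambda x: lambda y: x                   # "x\y\x", the K combinator
weird = lambda x: lambda y: lambda z: x(z)(y(y)(y))(z(x))   # "x\y\z\ xz(yyy)(zx)"

assert K('a')('b') == 'a'                   # K returns its first argument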

> > In addition, according to the foundational axioms of
> > K-complexity theory, all functions can be built from a 
> > small set of specific
> > combinators (I seem to recall S and K), 

> It is true that the S and K combinators can be used alone to form all
> other combinators, and even a bit larger class of functions which I call
> "pure functions", but to say that S and K alone can be used
> to form _all_ functions is getting a bit silly. For instance,
> S and K cannot be used to form the function of "fatherhood",
> which given a person, yields his father. They also cannot be
> used to form numeric functions like "+", "-", "*", and "/"
> in the ordinary sense (although they can be used to form similar
> functions that operate on Church Numerals or representations of
> numbers).

Of course they only operate on representations of numbers.  Doing anything
else is impossible.  They also only operate on representations of people and
paternity -- Darth Vader could not use a function to capture Luke, nor vice
versa (even if Luke had been willing to admit that Vader was his father).  I
don't understand your point.
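
In any case, for concreteness, here are S and K written as lambdas, the
identity function built from them, and a Church-numeral representation of 2
(the kind of "representation" in question; Python lambdas used purely as
notation for the sake of illustration):

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = S(K)(K)                                  # S K K behaves as the identity
assert I(42) == 42

two = lambda f: lambda x: f(f(x))            # the Church numeral 2
assert two(lambda n: n + 1)(0) == 2          # "add one" applied twice to 0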

> > so all functions can therefore be regarded as combinators in any
> > system (including yours).

> Not so. But, this is true in a certain sense... there are interesting
> ways in which pure functions can be used to emulate numbers,
> propositions, and lists. It is conceivable that pure 
> functions could be used to
> represent even other things like people, pictures, sounds, 
> and programs,
> but this would be a bit weird, and I don't plan to do it in my
> system.

Given the theory behind your system, I don't see how you have any choice.
You *are* using a finite number of things to represent all possible
concepts, surely an infinite set.  You _have_ to permit your user to use the
finite number of elements to represent whatever they want.

> > > But anyhow, that definitions and system dictionary were "not 
> > > special" was not my main point; my main point (which you have
> > > not contradicted) was that a special language construct is not
> > > needed for definitions.

> > I have contradicted that; although it's true that no 
> > special syntax is
> > needed (an obvious point proven many times over)

> It is obvious that you have misunderstood what I meant by
> "special language construct". By this, I meant a special
> syntax, like Joy's "==". Surely you would not contradict me and
> then immediately acknowledge that what I said was true.

I think you've forgotten what we were discussing here -- I was claiming that
the 'define' primitive formed a special category of primitives.

> > The fact that all of your combinators can be expressed in terms of a
> > few simple combinators doesn't change the fact that all 
> > combinators --
> > especially including those few ones -- require special parsing from
> > the compiler.

> I have admitted several times that I don't know of any techniques
> for efficiently compiling programs expressed in the style I've
> suggested (although I have hope that efficient techniques do
> exist and that I will discover them at some point). I do know
> of some inefficient techniques of interpreting them, anyway.

The question isn't whether your programs can be compiled into some
representation and then used.  I'm confident that we'll work that out.  My
statement was much simpler: each of your fundamental words will require
special _parsing_.  Each of those words will change the interpretation of
the words which follow them.  Thus, each of them has to be treated specially
by the parser, and the parser has to be written for them.

It's like Forth's distinction between words and numbers; numbers are handled
by the compiler, and they're a completely separate case from words.

> There's one thing somewhat relevant to this that I may not have made
> clear yet... whereas many programming languages only provide
> ways of representing programs, languages of the type I've described
> will cleanly be able to represent other concepts, such as numbers,
> propositions, lists, functions, as well as programs. I think
> some enlightenment may result from comparing my language(s) to
> human languages like English, rather than to programming languages.

You're 'accusing' other languages of faults which are your own constructs.
Your language is not the first 'declarative' programming language ever
developed; in fact, it's not even an especially good one, since its syntax
is so focussed on your style of maps.

You've also invented a concept of "true representation", in which your
language is gifted with the ability to truly represent things, while all
other languages falsely represent them.  Since you've never defined your
concept of representation, it's impossible to guess what this means to
anyone.

> > > > Speaking of immediate execution, how would your system 
> > > > handle Forth-like
> > > > metaprogramming?  Does the code have full access to all of 
> > > > the unparsed
> > > > source?

> > > Probably yes... As an example, if a user has been working 
> > > on writing
> > > a complicated program, he would probably have given a name to
> > > a field for the program's text. Conceivably, this name could
> > > occur within the text itself, so that the program could refer
> > > to its own text cleanly. 

> > This is not Forth-like; in fact, it's a superset of Forth's 
> > capabilities
> > which will force you to re-parse the file every time it's 
> > used.  That's
> > pretty brutal. 

> I do not see how it requires me to re-parse the file every time
> it is used. Perhaps you mean "every time it changes"; I don't 
> know...

Pronoun confusion.  Let me rephrase: "...which will force you to re-parse
the file every time they [the metaprogramming capabilities] are used."  Sorry.

> > interpreter hasn't touched yet.  I don't believe that this 
> > is even possible
> > with your system; all text has to be parsed before anything can be
> > evaluated.

> I don't have a system.

Yes, you do.  We're talking about it.

> Neither do I have my mind set on any particular
> technique for interpreting text that users type in. Anyway, it
> doesn't really take very long to parse text, so this doesn't
> really seem to matter too much to me 

It's certainly much, much more than the single-pass O(n) operation my language
requires.

Justification: each word in the input stream has to be lexed, then each word
has to be reduced to its internal representation (which is likely a matter
of a dictionary lookup and replacement with the appropriate number of "S"es,
"K"s, and other fundamentals).  Then you have to load the words into their
tree structure, reading left to right and performing peephole optimization.
You use a stack to keep track of unresolved applications, since the most
recent application "@" you reached is always the appropriate place to attach
the next parameter (even if the parameter is itself an application).

Finally, you can optimize (a task which I'll assume takes zero time), and
then evaluate the result, using at least a depth-first traversal (assuming
that you only want to execute it as a program).

The biggest time-sink there is the depth-first traversal; it's n*log(n).
That's not as bad as I'd feared.  Of course, adding the possibility of
source modification we get a new worst case of n^2 log(n).  But that's silly
(hmm, it's not the silliest worst case -- it would be possible to not
terminate; THAT is silly ;-).
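
For what it's worth, the tree-building step with its stack of unresolved
applications can be sketched in a few lines (prefix "@" application and
whitespace-separated tokens are assumed purely for illustration):

def parse(tokens):
    """Build an application tree: '@ @ S K K' parses to (('S', 'K'), 'K')."""
    stack = []                         # unresolved applications, newest last
    node = None
    for tok in tokens:
        if tok == '@':
            stack.append([])           # a new application waiting for two children
            continue
        node = tok
        # a finished subtree attaches to the most recent open application
        while stack:
            stack[-1].append(node)
            if len(stack[-1]) < 2:
                node = None
                break
            node = tuple(stack.pop())  # both children present: it bubbles upward
    return node

# parse('@ @ S K K'.split())  ==>  (('S', 'K'), 'K')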

> (well, perhaps there might one
> time be a huge specification file; I suppose if a user changed
> one bit in such a huge file, it might be worth it if the system
> can find a way to reparse just a part of it without reparsing the
> whole thing; this is something I have not ruled out).

The problem with your metaprogramming suggestion is twofold:

 - it requires knowledge of the source file's name and total overall
structure, as well as the willingness to muck about with the original source
code.
 - it requires a complete re-parse and re-compilation after every change,
since it's possible to change any part of the program, including a part
which has already been executed.

You can see that metaprogramming is NOT encouraged under these
circumstances.

Note the difference from how Forth does it:

 - Forth only allows metaprogramming in the part which hasn't been parsed
yet.  This means that re-reading is never needed.
 - Forth allows modification relative to the current location in the source
file.
 - Forth's modifications have zero effect on the source itself; they don't
even affect the internal representation of the source.  Technically, all
they do is discard source and/or provide alternate source.
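
A minimal sketch of that last mechanism (hypothetical names; the real Forth
machinery is of course richer): a word sees only the tokens that haven't
been interpreted yet, and may drop or inject some of them.

from collections import deque

def run(tokens, dictionary, stack):
    pending = deque(tokens)                   # only the not-yet-interpreted source
    while pending:
        word = pending.popleft()
        if word in dictionary:
            dictionary[word](stack, pending)  # a word may consume or inject tokens
        else:
            stack.append(int(word))           # otherwise treat it as a number

def abort_rest(stack, pending):
    pending.clear()                           # discard everything not yet read

stack = []
run(['5', 'abort-rest', '6', '7'], {'abort-rest': abort_rest}, stack)
assert stack == [5]                           # '5' ran before the discard; 6 and 7 never did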

> > Your system doesn't currently have any
> > obvious ways to reach inside a completely applied function 
> > and modify it,
> > but you can play with the variables about to be passed to a 
> > function.  (In
> > this respect, Joy is clearly _much_ more expressive, since 
> > manipulating a
> > quoted program as a list allows arbitrary modification.)

> This is an interesting comment. I've thought about adding a
> "reification" primitive (as Fare calls it) which is a function
> that given a thing, yields a data-structure that represents 
> that thing;
> this would allow "arbitrary manipulation". I'm not sure that I would
> really need to use this though. (by the way, since there may 
> be several
> ways of expressing things in a language, one might make such a
> "reification" primitive as yielding a set of expressions, rather
> than a particular one; if one did this, one would probably also need
> a "pick" procedure that one could use to get the system to
> arbitrarily pick a member of a set; but... I need to clarify
> on how procedures will be dealt with)

That's a very interesting concept.  Yes, that would give your language as
much power as Joy has now, which is likely more than Forth has.  The problem
is that it adds a new set of structures to your language which were not
present before and are not present in Joy.  Forth uses metaprogramming to
make up for its lack of function modification, but a Forth programmer
doesn't have to learn a new language to use it, since the language being
modified is the same one he's coding in.  The language you're having your
programmers modify looks entirely different from the one they're coding in.

> - "iepos" (Brent Kerby)

-Billy