Joy, Lambdas, Combinators, Procedures
Tue, 29 Feb 2000 16:19:30 -0800 (PST)
> > > For example, in your language, is this an error?
> > > @@@ackermann 4 5 6
> > > (Ackermann is a two-parameter function.)
> > That would be an expression that would probably have no denotation
> > under the language (although it is "well-formed", in a certain sense).
> > Any expression involving this expression would also have no
> > denotation.
> > I guess you could say that it is an error. If you fed this
> > expression (or an
> > expression involving this expression) to an interpreter,
> > it would probably complain that the expression does not denote
> > a procedure (the interpreter can only execute expressions
> > that denote procedures).
> Yes, this is exactly what I mean. Your language has to, at some point,
> count parameters before execution. This is a natural extension of the fact
> that your language is "all about" binding parameters to functions.
> It is interesting, though, that your language can express "overbound"
> functions. I don't understand what that could mean to the function -- do
> you have any ideas? If we can come up with a theoretical functional
> meaning, perhaps we can make sense of them procedurally as well.
> In general, currying takes a function of n variables and a parameter, and
> returns a function of n-1 variables. I don't see how you can get away with
> disregarding the terminating condition: when n=0, currying is impossible.
In general, this is true (by "currying", I suppose you mean just
"application")... But, this does not really hold in all cases,
when higher-order functions are involved. For instance, the "B"
combinator may be thought of as a three-parameter function;
thus, you'd suppose that when you applied it to a parameter, you'd
get a two-parameter function. But, if you apply "B" to, say, "B" itself,
you'll end up with a four-parameter function (although it could also
be thought of as a two-parameter function that yields a two-parameter
function, or a one-parameter one that yields a three-parameter one):
B B f x y z
B (f x) y z [B]
f x (y z) [B]
In general, when you apply "B" to a function of order "n" (meaning
a function that takes "n" parameters), you end up with a function of
order "n + 1".
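This arity-shifting behavior can be checked concretely. Here is a small sketch in Python, using curried one-argument functions to mimic combinator application (the concrete functions "f", "y", and the value "z" are made up for illustration):

```python
# B combinator: B f g x = f (g x), in curried style
B = lambda f: lambda g: lambda x: f(g(x))

# Applying B to B does not drop the arity to two; instead
# B B f x y z reduces to f x (y z), a four-parameter use.
f = lambda a: lambda b: (a, b)   # an illustrative two-parameter function
y = lambda n: n + 1              # an illustrative one-parameter function
z = 10

assert B(B)(f)(3)(y)(z) == f(3)(y(z))   # both give (3, 11)
```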
> I'd like to hear a method, though;
This reminds me of an odd hole in my system that I realized
recently. In my more recent post, I described a technique that I thought
could be used to translate Joy programs into procedures of my
system; I realize now that there is a flaw in the technique
so that it would not work in certain cases (actually, I consider
it more a flaw of my system than of the technique):
In Joy, when a program is executed and there are "too many" items
on the stack, there is no problem; the procedure just doesn't use
the extra items; it saves them on the stack for future programs.
Now... if we shift into my system, which uses continuations instead of
a stack, there is something similar... The "C" combinator could be
thought of as a function that takes a continuation function "f" plus
two more parameters "x" and "y", yielding "f y x". In "procedure"
terminology, we might call "C" a procedure that takes two parameters
and has two results...
Now, if "C" is applied to "too many" things, you may still end up
with something meaningful... For instance, let "foo" be some
procedure that takes 4 parameters (plus a continuation parameter,
which comes first). In my system, "foo stop a b c d" might be a meaningful
command to the system. Now, what I'm getting to is that
"C (foo stop) a b c d" would then also be meaningful, even though
"C" is applied to more than three parameters; it is the same as
"foo stop b a c d".
In general, a combinatory reduction in the system can take place
in any part of an expression. For example, "(C f x y) a b c" reduces
to "(f y x) a b c" or just "f y x a b c". Similarly,
"f (C g x y)" can be reduced to "f (g y x)". This idea can be
summarized by the so-called "u" and "v" rules:
u: If "x -> y", then "f x -> f y"
v: If "x -> y", then "x f -> y f"
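A quick sketch of this in Python (curried functions; the names are illustrative): since reduction is just evaluation of the subexpression, "C" works the same no matter how many further parameters follow it.

```python
# C combinator: C f x y = f y x, in curried style
C = lambda f: lambda x: lambda y: f(y)(x)

# An illustrative four-parameter curried function
foo = lambda a: lambda b: lambda c: lambda d: (a, b, c, d)

# "C foo a b c d" is meaningful even though C "takes three parameters":
# the extra arguments c and d just keep applying to the result,
# mirroring "(C f x y) a b c -> f y x a b c".
assert C(foo)('a')('b')('c')('d') == foo('b')('a')('c')('d')
```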
Hmm... I feel like I've gone off on a tangent, although that seemed
relevant... Back to what I was originally going to describe, the
analogue of what happens when there are "too many things on the stack"...
I had considered, for example, "Output" to be a function that
takes two parameters, a continuation and a string to be output,
and yields another continuation. Thus, I might say that
"Output stop 'hello'" is a continuation of the system, and that the
system might be able to execute it. However, what would we say
about "Output stop 'hello' 'there'"? In this case, we have applied
the continuation "Output stop 'hello'" to yet another parameter
"'there'"... I had originally considered that this expression would
just be gibberish, since continuations were not functions, and
did not make sense when applied to parameters.
But, I think that would be a bad quality of my system. To make my
system analogous to Joy, "Output" would need to forward any extra
parameters it has to the next continuation... Thus,
"Output foo 'hello' 'there'" would be a program that outputted "hello"
and then went on to do "foo 'there'", the extra parameter "'there'"
having been forwarded to the continuation "foo". Thus, a program
to output "hello" and then "there" could be written as:
Output (Output stop) 'hello' 'there'
Or, in a more Joy-like way...
T 'there' . T 'hello' . Output . Output
Actually, this second is a bit different from the first one... The
first is a continuation that outputs 'hello' and then 'there' and
then "stop"s. The second is a bit more abstract; it is a function...
if you applied it to "stop" you would get the first one... But,
you could apply to some other continuation... Normally in my system
one would not make explicit references to "stop", so the second
is more like what one might actually use.
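The forwarding behavior described above can be sketched in Python, modeling a continuation as an ordinary function: any extra arguments given to the continuation produced by "Output" are simply passed along to the next continuation (the "log" list stands in for actual output; all names here are illustrative, not part of the real system):

```python
log = []

def stop(*args):
    # terminal continuation; just records any leftover arguments
    return ("stop", args)

def Output(k):
    # curried: "Output k s" yields a continuation
    def with_string(s):
        def cont(*extra):
            log.append(s)     # perform the output
            return k(*extra)  # forward any extra parameters to k
        return cont
    return with_string

# "Output (Output stop) 'hello' 'there'": outputs "hello", then the
# extra parameter 'there' is forwarded on, and "there" is output.
Output(Output(stop))('hello')('there')()
assert log == ['hello', 'there']
```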
This example has brought to my mind that "T" could be interesting
to use in an abstraction algorithm... I'm still looking for
interesting abstraction algorithms other than the usual "IBCSK".
Especially if it leads to interesting algorithms for strong reduction...
> Perhaps it's reasonable to simply ignore those extra variables.
> That _does_ make sense, pragmatically.
> It's an irritation when coding,
> though, since you couldn't tag a large class of simple errors.
Hmm... Well it wouldn't make much sense to me if the system
treated "ackermann 4 5 6" as if it was just the same as
"ackermann 4 5"... (not sure if this is what you meant).
> Perhaps counting parameters is the job of the correctness checker, not
> the executor? Correctness checking is very complicated anyhow, and
> adding this tiny bit of complication to it won't upset it.
Yes... I was planning on expressions being checked for at least a
minimum kind of correctness, the correctness of denoting a procedure,
before they were executed... Any expression involving "ackermann 4 5 6"
could not pass this test. There are other kinds of correctness
which one might also want to show that an expression has... for
instance, one might want to show that it denotes a terminating
procedure, or one might want to show that it is equivalent to some
other expression. These kinds of correctness are much more difficult
to show; they would probably be handled by the logic system
(actually, I'm not really sure what would separate the "logic system"
from the rest of the system... except that I was planning on
implementing the logic system not in C, but in some more pure
language on top of the base system).
> [expressing all combinators from a few primitives]
> It's not a trivial task, and it expands rapidly;
> but in its naive form it's linear. Optimization is where it gets hard
Yes, optimization is usually a hard thing... Actually, in some cases,
it might be hard to say which expression of a combinator is more
"optimal" than another. I guess you could go by the size of the
expression; if so, then "optimization" is quite a hard problem,
as far as I know... There are often clever ways of writing
combinators using shorter expressions than the ones that simple
approach produces.
> > Well, I guess usually "imperative" is used to describe systems
> > in which the "command" or "procedure" play a fundamental role,
> > whereas "declarative" is used to describe systems where the
> > "statement" (expression of information) plays a fundamental role.
> Hmm. Not really, although that's a possible expression of it.
> A better way to express the difference is to point out that imperative
> languages express sequences of commands, each of which has some kind of
> effect on its environment. Declarative languages express -- or pretend to
> express -- unchanging facts about reality.
> Joy contains a declarative language: its definitions are an imitation of the
> notation the author uses for expressing rules, and contain restrictions
> which allow the compiler to check them for inconsistencies. Your language
> tries to be declarative throughout, but it turns out that it can't express
> any action without procedures.
Neither being "declarative" nor "imperative" is among my goals for
the system, at least not now... Being "declarative" may have been
a goal, before, but I don't really care now. If I had to summarize
what my goals for the system are, I might say that I want the
system to be flexible and, eventually, highly performant.
These are rather broad goals, of course. I also have some other, more
specific, goals that are means to achieve these (although
"highly-performant" is way on the back-burner)... For instance,
the goal of making the system more-or-less purely applicative
and based on combinators...
Of course, I don't claim that making the system purely applicative
is the only way to achieve great flexibility... Joy's concatenative approach
may also be adequate; on the other hand, there are some things one
can do in my system that I'm not sure one can do directly in Joy
(for instance, the use of "M" to get the current continuation).
> > Actually, it has been a thought in my head that the distinction
> > between "statements" and "commands" is somewhat artificial...
> > Commands can be re-expressed as statements, and vice versa.
> This distinction you're making seems to be valid, but it seems to me to
> contradict everything you've said in the past about your system versus mine.
> For example, you state that your language can actually represent the number
> 3, while my system can only imitate 3 by means of a procedure.
That statement I said was silly, and I would not repeat it
(although it was true, in a sense).
> "Actual representation" is the goal of a declarative language; in
> such a language you want to make it seem like the author of a program
> is merely expressing immortal and immutable truths to the computer to
> help it understand the problem (the problem is also treated as an
> immortal truth).
It seems clear that you do not like "declarative" languages.
I do not really value the approach, although I do not dislike
it either. Of course, there might be some specific "declarative"
systems that I value more than some specific "imperative" systems
or vice versa, but I'd really probably need to look at specific
systems to pass judgement.
> To help carry out this illusion, declarative languages usually do
> consistency checks to make sure that the author doesn't attempt to
> "contradict" himself,
Ahh ... consistency checks... I'm guessing that a consistency check
is something that is truly valuable; a powerful imperative system will
have its analogue.
> and usually disallow redeclaration or changing of data.
Ahh ... Yes, I suppose many declarative systems have little or no
support for Fields ("field" is my name for an abstract storage spot,
which can hold one value at a time, but may hold different
values at different times); this would be unfortunate, because fields
are really quite useful... On the other hand, many "imperative" systems
place very heavy emphasis on fields, which is also unfortunate;
the heavy emphasis on fields (even when they are not needed) is
probably what creates the problems of provability with "imperative"
systems.
Anyway, I plan on having support in my system for fields
(in fact, I already do; there is a "new" which yields a new
field filled with a specified value, and a "get" and a "set"
which can be used to get and set it; fields do not need to be
assigned names). Interestingly, Joy does not seem to have support
for fields, except for the stack, which could be considered a
sort of field. If Joy were going to be used as a practical system,
I think it would need to be extended with fields...
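As a minimal sketch of what a field amounts to (a one-slot mutable cell with "new", "get", and "set", as described above; Python lists stand in for the storage spots):

```python
def new(value):
    # yield a fresh field holding the given value
    return [value]

def get(field):
    # read the field's current value
    return field[0]

def set_(field, value):   # "set_" to avoid shadowing Python's builtin set
    # replace the field's current value
    field[0] = value

x = new(1)
set_(x, 2)
assert get(x) == 2   # no contradiction: the field held 1, now holds 2
```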
> x = 1
> x = 2
> This is a contradiction to a declarative language. To make this valid, you
> have to use lexical scopes:
> x = 1
> x = 2
> Now the inner x is different from the outer x.
Hmm... That did not make sense to me. If "x" is a name the user uses for
a particular number, then the original verdict (that "x = 1" and
then "x = 2" causes a contradiction) seems natural to me;
if I had told that system both that "x = 1" and "x = 2" I would
want it to complain about a contradiction.
On the other hand, looking at the example, I'm guessing that the
user intended to use "x" more as a field for a number than as
just a number; if so, then the user could have correctly said
(this is Joy-style code):
newfield "x" define
1 x set
2 x set
Where "newfield" is a program that pushes a new field onto the stack,
and "define" is a program that takes a string literal and a thing
to bind it to off the stack... Then, "set" would be program that takes
a value and a field to set with that value.
On the other hand, perhaps the user did intend to have "x" represent
a number rather than a field for a number. Perhaps after he typed
"x = 1" he realized that he defined "x" to the wrong number,
and wanted to correct it so that "x" was defined to be "2". In
the system I'm thinking of, he could have correctly expressed
this as (again, Joy-style):
1 "x" define
2 "x" redefine
whereas simply saying
1 "x" define
2 "x" define
would presumably be flagged as an error, since "x" is defined twice.
> The problem is that objects are, almost by definition, mutable. An object,
> once declared, can change.
> Consider the dishwashing example we gave:
> x\wash x.rinse x.dry x
> In that example, lexically x is constant, but each step of the process
> changes the thing x holds, thus making the original declaration false.
I'm not sure why you don't like my way of doing these dishes...
Suppose I applied the expression "x\ wash x. rinse x. dry x"
to my favorite plate, which I call "Pierre"; the result is
(by beta-reduction) the procedure "wash Pierre . rinse Pierre . dry Pierre".
This would be quite a meaningful procedure; my system is capable of
identifying Pierre, whether he is wet or dry, dirty or clean.
As a more practical example, suppose I am on a computer system
and want to play with the lights on my keyboard... The lights
are called "Num", "Cap", and "Scro". It would make perfect sense
to me for the system to have procedures "Turnon" and "Turnoff"
which take a light off the stack and have no result, having the effect of
turning the light on or off. In Joy syntax, I could say,
Num Turnon Cap Turnon Scro Turnon Num Turnoff
This would turn all three lights on but then turn Num off at the end.
This seems like a natural way for the system to work... In contrast,
it would be quite a mess if "Turnon" and "Turnoff" resulted
in the "new light", which I had to keep track of all the time.
Does it not seem natural to use the same name "Num" all the time
for the numlock light on your keyboard, whether it is on or off?
> This is why I argued with your implementation, preferring instead my own:
> x\dry (rinse (wash x))
> @dry @rinse @wash
By the way, "@dry @rinse @wash" is not quite right... It is not
a well-formed expression... What you want, I think, is
dry . rinse . wash
Or in other words,
@@B dry @@B rinse wash
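That translation can be checked directly: "B dry (B rinse wash)" composes the three steps, so the whole thing applied to x gives "dry (rinse (wash x))". A Python check (the dish procedures are made up; each just tags its input):

```python
# B combinator: B f g x = f (g x)
B = lambda f: lambda g: lambda x: f(g(x))

# Illustrative one-parameter dish procedures
wash  = lambda x: ('washed', x)
rinse = lambda x: ('rinsed', x)
dry   = lambda x: ('dried', x)

# "@@B dry @@B rinse wash" is B dry (B rinse wash), i.e. dry . rinse . wash
pipeline = B(dry)(B(rinse)(wash))
assert pipeline('plate') == dry(rinse(wash('plate')))
```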
> > [sudden realization that I could have just used strchr()]...
> > This is a messy way of reading the word; words are terminated
> > by "." in the system.
> Right. You also used three-character abbreviations for global variables.
> Double icky. You do know that one of the regrets of the inventors of Unix
> is "I wish we'd spelled CREAT as CREATE." What suffering are you saving
> yourself by naming a variable "siz" instead of "size"? And why are you
> calling it "size", when it has nothing to do with a size?
Heh heh... Oh, before I wrote the part of the parser which reads
expressions, I wrote the part of the parser which writes expressions.
The parser kept track of the size of the string so far as "siz", which
is a natural enough name to me. When I wrote the part of the parser
which reads expressions, it seemed natural to use the same global
variable "siz" instead of wasting space on another one.
> > types... types are interesting things that I've been thinking about
> > lately... my current system is rather untyped.
> Not quite -- your current system relies on atomic types. You don't have
> type checking on your functions, but you DO use atomic types.
That's true. In fact, my expressions have a tag at the beginning of
them indicating their format; there is T_APP which indicates it is
followed by two pointers, and then T_INT which indicates it is followed
by a 32-bit int, and then T_FIELD which indicates it is followed
by a pointer to a storage spot (itself containing a pointer which
from time to time points to various things)...
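Such a tagged representation might look like the following sketch (Python tuples in place of C structs; the tag names follow the ones mentioned above, but the exact layout is an illustration, not the real implementation):

```python
# Tags marking the format of each expression node
T_APP, T_INT, T_FIELD = 0, 1, 2

def app(f, x):
    # application node: tag followed by two sub-expressions
    return (T_APP, f, x)

def num(n):
    # integer node: tag followed by the (32-bit) int
    return (T_INT, n)

def field(cell):
    # field node: tag followed by a mutable storage spot
    return (T_FIELD, cell)

e = app(num(1), num(2))
assert e[0] == T_APP and e[1] == (T_INT, 1)
```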
> I can tell because you're using 32-bit ints to represent numbers.
> > you're just denying that it stems from the quotation
> > construct... the solution you propose (restricting the ability
> > to treat quotations as lists) is one that the author of Joy has
> > considered; it has the disadvantage that you would then need
> > to have separate constructs for lists (if you still want them).
> > But, as far as I can see, it does look like a remedy that could work.
> [example with "augment"]
I'm not sure I quite followed that...
It looks like maybe you're classifying expressions that involve
list-breaking-down programs (such as 'uncons') as "broken".
But, I'm not sure I see how this is connected with substitution
(of equivalent programs) and how it needs to be restricted within
the system.
> > Now, could you see that a Joy analogue of "WB (plus 1)" would
> > be "[1 plus] dotwice", where "dotwice" is a combinator that
> > takes a quoted program and does that program twice
> > (such a "dotwice" could be formed as "dup [i] dip i", I think).
> That would be bad -- try instead "dup concat", since you want to return a
> quoted program.
Ah, yes, "dup concat i" would work (or, as you said, just "dup concat"
if you want the resulting program to be quoted rather than executed).
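A tiny Joy-style evaluator makes this easy to check: quotations are Python lists, and running "dup concat i" on "[1 plus]" does that program twice (this interpreter is a minimal sketch for illustration, not Joy itself):

```python
def run(prog, stack):
    # execute a quotation: words run, everything else pushes itself
    for word in prog:
        if callable(word):
            word(stack)
        else:
            stack.append(word)   # literals and quotations push themselves
    return stack

def dup(s):    s.append(s[-1])
def concat(s): b, a = s.pop(), s.pop(); s.append(a + b)
def i(s):      run(s.pop(), s)                           # execute quotation
def plus(s):   b, a = s.pop(), s.pop(); s.append(a + b)

# "3 [1 plus] dup concat i" applies "1 plus" twice: 3 -> 4 -> 5
assert run([3, [1, plus], dup, concat, i], []) == [5]
```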
> I'm actually claiming that _reasoning about programs
> is fundamentally difficult_.
Okay, I'll buy that... In general, _reasoning_ is fundamentally difficult.
> Your applicative system does not manifest this
> difficulty because _it is impossible to use your system to reason about
> programs_.
In a certain sense, my system cannot reason about programs, or
about anything at all, as I have not yet constructed a reasoning
system; in this sense, Joy is not really capable of reasoning
about programs either, in that it does not come with a reasoning system.
Yet, in a different sense, both my system and Joy _can_ reason about
programs. The framework is there; in both systems, programs can
be manipulated in a first-class way... A reasoning system just needs
to be constructed on top of them.
But actually, I don't think one could construct a good reasoning
system on top of my current system. Rather, I need to enhance my
system with a Reification primitive. So, what you said was right...
> As soon as you add reasoning capability your program will have
> to face the exact same problem --
I still have to think a bit about how I might make the reification
primitive work. Perhaps when I add it, there will be problems,
as you suspect...
The reification primitive will probably take the form of a procedure
which takes an arbitrary thing of the system as a parameter,
and which results in a data-structure representing that thing.
In the data-structure, I might use, say, numbers to represent
the primitives, and pairs (by "pair", meaning a procedure that
pushes two things onto the stack, i.e., to the continuation)
to represent applications.
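As a sketch of what that data-structure might look like (numbers for primitives, pairs for applications; the primitive numbering and the tuple representation of expressions are assumptions made for illustration):

```python
# Hypothetical numbering of the primitives
PRIMS = {'B': 0, 'C': 1, 'K': 2, 'I': 3}

def reify(expr):
    # an expression is either a primitive's name, or a 2-tuple (f, x)
    # standing for the application of f to x
    if isinstance(expr, tuple):
        f, x = expr
        return (reify(f), reify(x))   # applications become pairs
    return PRIMS[expr]                # primitives become numbers

# reifying "B C" gives the pair (0, 1)
assert reify(('B', 'C')) == (0, 1)
```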
Anyhow, a reification primitive would really be quite a low-level
primitive; in a finished system, users would probably not need
to refer to it explicitly (drawing instead upon the many bounties
offered by the logic system that was constructed using it);
nevertheless, there might be some situations (other than
in the construction of the logic system) where a sort of
reification primitive is needed; for instance, the "stringify"
primitive could be thought of as a sort of reification primitive
(although it is not quite the same as the low-level one I've
been talking about); there would certainly be situations where a user
would want to use "stringify".
> and the same solution I outlined will work
> for you, and any solution you come up with will also work for me.
I guess Joy already sort of uses the solution of having reification...
The quotation construct is sort of a reification construct.
> THIS is why I like Joy so much -- it takes an existing language, one which
> has already proven its worth (and its limitations), and it explains why it's
> so good -- and what can be done to fix it.
Yes, Joy is indeed a great system...
> Your language, OTOH, takes existing theory -- which is of dubious use, and
> is only around because it loosely models existing programming languages --
> and attempts to build a programming language on it.
This is a strange statement... If I understand, the theory of
Combinators and of purely applicative systems were pioneered
long before modern programming languages (in particular,
by Schonfinkel, Haskell Curry, and others in the 1920s and 1930s).
But, what you said was correct in the sense that it is sort of
an odd thing to apply Combinators or purely applicative systems
to a programming system, since this was not their original
use (they were originally used as devices for constructing formal
logics to study mathematics).
> Of course, if I were to say that Joy was any better than your language I'd
> be wrong. It's also a wild speculation, and odds are just about certain
> that it'll never be of any real use. The real value I'm looking at is the
> possibility of building a child of Forth which is just as useful as Forth,
> and just as easy to formally reason about as Joy.
Ahh... So you are also looking at constructing your own system...
> > confession... in working in my CLIP system, on several occasions I've
> > found myself deleting lines and rewriting them after they didn't
> > work on the first try, rather than trying to understand them :-)
> > but, i think this reflects my own unfamiliarity with the system
> > and also the inconvenience of the system's current syntax.
> APL experience indicates that it may be a fundamental fact rather than a
> temporary inconvenience.
I doubt it... I guess I'll have to see, though...
> That way you can borrow the slogan used by at least
> one APLer: "APL: if you can say it, it's done."
I suspect that instead I'll adopt this slogan:
"CLIP: You can say it, but can the system do it?"
It might be considered a problem of my system that one can refer
(in a valid way) to a procedure that the system is capable of
performing, and yet the system may be unable to carry it out.
That is, for instance, one might tell the system to do
(this is pseudo-code)
ifte (C.BC.C = BC.C.BC) (Output 'yes') (Output 'no')
This expression denotes the procedure of outputting 'yes'
(which the system is perfectly capable of doing), but
the system might not be able to figure that out, because to
do so it would probably have to figure out that "C.BC.C = BC.C.BC"
denotes the truth proposition; it may or may not be able to do this
(I would hope that in a finished system, it could) ...
This problem could become somewhat serious when one wants to
distribute programs; a program might work on one person's machine
but not on another... A possible solution would be to formalize
a subset of the language which all decent systems could decipher;
then, before distributing a program, one would "compile" it into
this restricted language; this compilation would basically involve
the insertion of (a piece of) the system's logic system into
the generated program...
> > - iepos ("Brent Kerby")
- iepos ("Brent Kerby")