Joy, Lambdas, Combinators, Procedures
Mon, 28 Feb 2000 12:09:52 -0800
> From: firstname.lastname@example.org [mailto:email@example.com]
> this message is still way too long, so i'll skip through some things
> (I read them of course, but just do not have time to reply to them),
> but I'll at least try to address your direct questions.
I'll try to do the same trimming -- you're better at it than I, though.
> > > I still do not quite get exactly what you mean by
> > > "parameter counts".
> > For example, in your language, is this an error?
> > @@@ackermann 4 5 6
> > (Ackermann is a two-parameter function.)
> That would be an expression that would probably have no denotation
> under the language (although it is "well-formed", in a certain sense).
> Any expression involving this expression would also have no
> denotation.
> I guess you could say that it is an error. If you fed this
> expression (or an
> expression involving this expression) to an interpreter,
> it would probably complain that the expression does not denote
> a procedure (the interpreter can only execute expressions
> that denote procedures).
Yes, this is exactly what I mean. Your language has to, at some point,
count parameters before execution. This is a natural extension of the fact
that your language is "all about" binding parameters to functions.
It is interesting, though, that your language can express "overbound"
functions. I don't understand what that could mean to the function -- do
you have any ideas? If we can come up with a theoretical functional
meaning, perhaps we can make sense of them procedurally as well.
In general, currying takes a function of n variables and a parameter, and
returns a function of n-1 variables. I don't see how you can get away with
disregarding the terminating condition: when n=0, currying is impossible.
I'd like to hear a method, though; perhaps it's reasonable to simply ignore
those extra variables.
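To make the parameter-counting point concrete, here is a small Python sketch (my own rendering, in neither of our notations): a curried two-argument function consumes exactly two applications, and a third application fails because the result of the first two is no longer a function.

```python
def curry2(f):
    """Curry a two-argument function: curry2(f)(x)(y) == f(x, y)."""
    return lambda x: lambda y: f(x, y)

add = curry2(lambda a, b: a + b)

assert add(4)(5) == 9      # fully applied, like @@add 4 5

# "Over-applying" -- the analogue of @@@ackermann 4 5 6 -- fails,
# because add(4)(5) denotes the number 9, not a function:
try:
    add(4)(5)(6)
    assert False
except TypeError:
    pass                   # 'int' object is not callable
```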
That _does_ make sense, pragmatically. It's an irritation when coding,
though, since you couldn't flag a large class of simple errors. Perhaps
counting parameters is the job of the correctness checker, not the executor?
Correctness checking is very complicated anyhow, and adding this tiny bit of
complication to it won't upset it.
I would be interested in hearing any other solutions you have, though.
> > Look up K-complexity on the web -- it's the theory of
> > reducing functions to
> > just a few combinators. (Your compiler could theoretically
> > work that way.)
> > The problem is that it's a very hard task;
> Hmm... I'm not familiar with "K-complexity", but this is incorrect,
> I think. It is really quite an easy task to rewrite any combinator
You're right; I was wrong. The translation itself isn't hard -- the output
expands rapidly, but in its naive form the process is linear. Optimization is
where it gets hard, but optimization is obviously a difficult task for
concatenative languages as well.
> Well, I guess usually "imperative" is used to describe systems
> in which the "command" or "procedure" play a fundamental role,
> whereas "declarative" is used to describe systems where the
> "statement" (expression of information) plays a fundamental role.
Hmm. Not really, although that's a possible expression of it.
A better way to express the difference is to point out that imperative
languages express sequences of commands, each of which has some kind of
effect on its environment. Declarative languages express -- or pretend to
express -- unchanging facts about reality.
Joy contains a declarative language: its definitions are an imitation of the
notation the author uses for expressing rules, and contain restrictions
which allow the compiler to check them for inconsistencies. Your language
tries to be declarative throughout, but it turns out that it can't express
any action without procedures.
> If this is what you meant, then I guess I would say that
> my system is either both "imperative" and "declarative" or
> neither one; procedures will play an important role in the system,
> and statements will also (when I begin working on the logic
> part of the system), but neither the "procedure" or "statement"
> has extra-special status in the system.
There are two primary components of your system: functions, which are
declarative; and procedures, which are imperative.
> Actually, it has been a thought in my head that the distinction
> between "statements" and "commands" is somewhat artificial...
> Commands can be re-expressed as statements, and vice versa.
This distinction you're making seems to be valid, but it seems to me to
contradict everything you've said in the past about your system versus mine.
For example, you state that your language can actually represent the number
3, while my system can only imitate 3 by means of a procedure. "Actual
representation" is the goal of a declarative language; in such a language
you want to make it seem like the author of a program is merely expressing
immortal and immutable truths to the computer to help it understand the
problem (the problem is also treated as an immortal truth).
To help carry out this illusion, declarative languages usually do
consistency checks to make sure that the author doesn't attempt to
"contradict" himself, and usually disallow redeclaration or changing of
declared names.
> Similarly, the statement,
> "Two plus two equals four."
> could be expressed as a command as
> "Note that two plus two equals four."
Right. Since computers only understand commands, this ability to translate
from declaration to command is what makes declarative languages possible.
However, not every command can be expressed as a valid declaration, so a
declarative language has to check the declarations its user makes to ensure
that he's not causing a (trivial) contradiction.
Here's an imperative sequence:
x = 1
x = 2
This is a contradiction to a declarative language. To make this valid, you
have to use lexical scopes, with the second binding nested inside the first:
x = 1
    x = 2
Now the inner x is different from the outer x.
The problem is that objects are, almost by definition, mutable. An object,
once declared, can change. Consider the dishwashing example we gave:
x\wash x.rinse x.dry x
In that example, lexically x is constant, but each step of the process
changes the thing x holds, thus making the original declaration false. This
is why I argued with your implementation, preferring instead my own:
x\dry (rinse (wash x))
@dry @rinse @wash
This is still clearly declarative, but now we're not pretending to keep a
constant 'x'. We instead have a theoretically different object which is
returned from each function. Now, as an implementation detail, most
languages actually keep the same object and simply modify it; but that's
merely a convenient optimization. (Note that that optimization would be
wrong for a declarative language if the function used 'x' somewhere else in
the same lexical scope.)
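The contrast can be sketched in Python (the Dish class and the step functions are hypothetical, purely for illustration): mutation quietly falsifies the original declaration, while the pipeline style only ever introduces new values.

```python
# Imperative: one object, mutated in place. After the three commands,
# 'd' is no longer the thing it was originally declared to be.
class Dish:
    def __init__(self, state="dirty"):
        self.state = state

d = Dish()
d.state = "washed"
d.state = "rinsed"
d.state = "dried"

# Declarative: each step takes a value and returns a (conceptually)
# new one; no earlier declaration is ever contradicted.
def wash(dish):  return Dish("washed")
def rinse(dish): return Dish("rinsed")
def dry(dish):   return Dish("dried")

assert dry(rinse(wash(Dish()))).state == "dried"
```

An implementation is of course free to reuse the same storage behind the scenes, as the text notes; the point is only what the notation pretends.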
Since Joy makes all parameters implicit, it's impossible to express this
type of contradiction -- so even the ugliest, most imperative Joy code is
closer to declarative code than all but the cleanest lambda expressions.
Your language, thanks to its support for combinators, shares this advantage
with Joy. The only difference -- not a bad one -- is that in your language
parameters are explicitly tied to their functions.
> I have indeed gotten away without having special parsing support for
> combinators (although this doesn't seem like an amazing feat
> to me). Here is an excerpt from the parser of my system:
This was what I needed. Thanks!
> [sudden realization that I could have just used strchr()]...
> This is a messy way of reading the word; words are terminated
> by "." in the system.
Right. You also used three-character abbreviations for global variables.
Double icky. You do know that one of the regrets of the inventors of Unix
is "I wish we'd spelled CREAT as CREATE." What suffering are you saving
yourself by naming a variable "siz" instead of "size"? And why call it
anything like "size" at all, when it has nothing to do with a size?
> Anyway, my point is that the parser
> does not specially mention any combinators; combinators are
> expressed using just ordinary words. Of course, certain
> combinators are mentioned specially in other parts of the system.
A point well-made.
> > remember that). Your system will require:
> > - atomic functions and types (assuming that you don't go
> > "whole horse" and
> > just define everything in terms of the basic combinators,
> > which is certainly
> > possible but VERY expensive)
> types... types are interesting things that I've been thinking about
> lately... my current system is rather untyped.
Not quite -- your current system relies on atomic types. You don't have
type checking on your functions, but you DO use atomic types. I can tell
because you're using 32-bit ints to represent numbers. If you had been
using just elementary combinators to represent numbers, THEN you wouldn't
need atomic types.
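For contrast, numbers really can be built from nothing but functions -- the Church-numeral encoding, sketched here in Python (my rendering); no atomic integer type is needed until you decode the result.

```python
# Church numerals: the number n is the function that composes f with
# itself n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of (+1)."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
assert to_int(two) == 2
assert to_int(plus(two)(succ(two))) == 5
```

This is the "VERY expensive" route mentioned earlier: correct, but a far cry from 32-bit ints.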
> The main ability
> that one would associate with "typed systems", I suppose, is the
> ability to arbitrarily construct restricted functions, i.e.
> functions that are only meaningful when applied to a certain class
> of parameters.
Type-restriction is orthogonal to atomic types.
> > > 1) In a purely-applicative system of the kind I've described,
> > > it is always possible to replace a sub-expression with
> > > one of equivalent meaning, while this is not the case
> > > in the Joy system, because of Joy's heavy dependence on
> > > a quotation construct. To illustrate, suppose in Joy one
> > > had a program
> > Not true -- as you admit below, the quotation construct has
> > little to do
> > with it. The problem is caused by being able to treat
> > quotations as lists.
> > If that were not possible, or if it were restricted in
> > certain ways, there
> > would be no problem even in this circumstance.
> Right, it looks like you're not denying that there is this problem
> with Joy...
If you put it that way, I do deny that. I am claiming that there's a
problem with certain aspects of code self-modification, and I'm claiming
that this is a problem inherent to self-modification.
> you're just denying that it stems from the quotation
> construct... the solution you propose (restricting the ability
> to treat quotations as lists) is one that the author of Joy has
> considered; it has the disadvantage that you would then need
> to have separate constructs for lists (if you still want them).
> But, as far as I can see, it does look like a remedy that could work.
I don't think it's a good idea, though. A better idea would be to define
any function which breaks under that type of treatment as "broken". For
example, suppose we have a function which takes as input a number and a
quotation; the quotation is supposed to represent an augmentation of that
number. It produces as output a function/quotation which returns the
augmented number. Some examples:
2 [2 +] augment i == 4;
2 [3 *] augment i == 6;
First, I'll define a "broken" function which handles this.
broken_augment == uncons cons cons;
The error here is obvious: it assumes that there are two components to the
augmentation function, a number and a function. This breaks down in
cases like:
2 [0 +] broken_augment == ???;
A correct augment function is:
augment == cons;
The fundamental error in the broken definition is that it fails to deal with
the end-case, when the length of the quotation is zero. (It's merely
coincidence that the repair is so trivial; in general, dealing with end-cases
is harder.)
I claim that any function which fails to deal with the end-cases will be
broken; any function which does deal with the end-cases (as well as the
normal cases) will not be broken in this sense, and will therefore handle
optimization within its parameters correctly.
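The broken/correct contrast can be modeled in Python, with a quotation as a list of tokens and a toy evaluator standing in for "i" (all names here are my own; the Joy semantics are only approximated):

```python
# A quotation is modeled as a list of tokens; 'i' is a toy evaluator,
# just enough machinery for this example.
def i(stack, quote):
    for tok in quote:
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(tok)
    return stack

def augment(n, quote):
    # correct: simply cons n onto the front, whatever the quotation's shape
    return [n] + quote

def broken_augment(n, quote):
    # broken: unconses the quotation, assuming it has components to take apart
    head, rest = quote[0], quote[1:]
    return [n, head] + rest

assert i([], augment(2, [2, "+"])) == [4]
assert i([], augment(2, [3, "*"])) == [6]
assert augment(2, []) == [2]        # the end-case is handled
try:
    broken_augment(2, [])           # the end-case breaks it
    assert False
except IndexError:
    pass
```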
> > > foo (plus 0)
> > > could be freely rewritten as
> > > foo I
> > > regardless of the structure of "foo", if the system had
> > > realized that the expression "plus 0" means the same
> > > thing as "I".
> > And the exactly analogous expression "0 plus foo" could be
> > freely rewritten
> > as "foo" in Joy.
> Let me explain why I think this way... Suppose instead that
> we have "WB (plus 1)"... "WB" is a higher-order function that
> takes a function, in this case "plus 1" (the add-one function),
> and yields the function's composition of itself; in this case,
> that will be "plus 2".
> Now, could you see that a Joy analogue of "WB (plus 1)" would
> be "[1 plus] dotwice", where "dotwice" is a combinator that
> takes a quoted program and does that program twice
> (such a "dotwice" could be formed as "dup [i] dip i", I think).
That would be bad -- try instead "dup concat", since you want to return a
quoted program, not execute one.
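The "WB" self-composition and the "dup concat" analogue can both be sketched in Python (my rendering):

```python
B = lambda f: lambda g: lambda x: f(g(x))   # composition combinator
W = lambda f: lambda x: f(x)(x)             # duplication combinator

# W B f  ==  B f f  ==  f composed with itself
twice = W(B)

plus1 = lambda n: n + 1
assert twice(plus1)(5) == 7                 # behaves like plus 2

# The "dup concat" idea: duplicating a quotation (here a token list)
# and concatenating yields the quotation of the doubled program,
# without running anything.
quote = ["1", "plus"]
assert quote + quote == ["1", "plus", "1", "plus"]
```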
> In contrast, I cannot think of any "foo" such that "WB (plus 1)"
> would be analogous to "1 plus foo" of Joy.
> > Entirely correct. It _does_ cause an impediment; the
> > interesting thing is
> > that it itself (treating programs as lists) is a form of
> > automated reasoning
> > about programs. This suggests that the problem it causes _might_ be
> > fundamental, one of the classic Godel incompleteness
> > problems (as phrased
> > for concatenative languages, of course). In other words,
> > reasoning about
> > programs may be easy; but reasoning about programs which
> > are themselves
> > reasoning about programs is not easy.
> This is an interesting thought... But, I'm not sure that this
> (the problem of not being able to substitute equivalent expressions
> within quotations) is really a fundamental problem; my purely
> applicative system does not seem to have it, and you've given
> a remedy that could possibly rid a Joy-style system of it.
Fundamentally wrong. Read my paragraph again: I'm not claiming that Joy has
a fundamental problem; I'm actually claiming that _reasoning about programs
is fundamentally difficult_. Your applicative system does not manifest this
difficulty because _it is impossible to use your system to reason about
programs_. As soon as you add reasoning capability your program will have
to face the exact same problem -- and the same solution I outlined will work
for you, and any solution you come up with will also work for me.
> > > 2) Many fundamental equivalences of Joy programs that one
> > > might like to show require restrictions, while
> > > similar equivalences of purely-applicative expressions
> > > can be taken unrestrained. Specifically, one might
> > > like to think upon seeing a Joy program of the form
> > This is _really_ heavily unfair. Those things aren't "fundamental
> > equivalences"; they're rewriting rules. Rewriting rules of
> > that nature are
> > a thing used by syntax-based languages, and you honestly
> > can't expect them
> > to apply to a non-syntax based one. The fact that some of
> > them do anyhow is
> > almost amazing.
> Hmm... I'm not sure what you mean by a "non-syntax based" language.
> All languages (including Joy) have a syntax, do they not?
Most Forthers would disagree with you. Quite simply, no parsing done by the
interpreter in Forth can ever affect the interpretation of anything outside
of the current word. Immediate words, of course, add another complication
to this; but this makes _those words_ syntax-based, not the language itself.
> From this comment, I guess I can see that you just do not
> value too highly the ability to do "textual reasoning" in
> a language (i.e., reasoning which is achieved by changing expressions
> around mechanically, but without paying special attention
> to their meaning).
> Yet, textual reasoning (or "formal reasoning" as it is usually
> called) is something that I find quite interesting; it seems to me
> to be a very good quality of a language if it can be easily
> textually-reasoned upon.
Formal reasoning is VERY powerful, and I used to imagine that it was
difficult with Forth. At the same time, though, I found Forth very easy to
work with; I never realized until I read the discussion of Joy that I had
been applying a different type of formal reasoning to my Forth programs.
Joy, for me, is not so much a language as it is an explanation of a system
of formal reasoning which works for Forth.
THIS is why I like Joy so much -- it takes an existing language, one which
has already proven its worth (and its limitations), and it explains why it's
so good -- and what can be done to fix it.
Your language, OTOH, takes existing theory -- which is of dubious use, and
is only around because it loosely models existing programming languages --
and attempts to build a programming language on it.
Of course, if I were to say that Joy was any better than your language I'd
be wrong. It's also a wild speculation, and odds are just about certain
that it'll never be of any real use. The real value I'm looking at is the
possibility of building a child of Forth which is just as useful as Forth,
and just as easy to formally reason about as Joy.
> (this holds for all well-formed Joy expressions "x", "y",
> "a", and "b",
> and by "x is replacable by y" I mean "x can be textually replaced by
> y in any program and the resulting program will make the system do
> the same thing as before").
Yes. Much better ;-).
> > > The analogous equivalence in a purely-applicative system
> > > is that
> > > @@@C f x y
> > > can be rewritten as
> > > @@f y x
> > > This can be accepted without restraint for all expressions
> > > "f", "x", and "y" of the system.
> > That's because combinators in your system have syntactic
> > effect. In order to do that, I'd have to define:
> > C == dip [swap] i;
> > [y] [x] [f] C == [x] [y] f
> Hmm... I suppose you meant "C == [swap] dip i"...
Wow. ;-) Yes.
> but, yes I suppose you could then accept "[y] [x] [f] C == [x] [y] f"
> unrestrained. But, I don't think this principle is analogous
> to the "C f x y = f y x", because the former would almost never
> be used in a Joy system at all, while the latter would be
> very commonly used in my system.
Correct. It's nevertheless exactly analogous, under the same reasoning you
so aptly used to demonstrate that [1 plus] foo was the Joy equivalent of
your foo (plus 1). In both cases, Joy would rarely use the expression,
while your language would use it constantly -- not a bad feature of either.
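For reference, the C combinator itself is tiny in curried form (Python sketch, my names):

```python
C = lambda f: lambda x: lambda y: f(y)(x)   # C f x y == f y x

minus = lambda a: lambda b: a - b
assert minus(10)(3) == 7
assert C(minus)(10)(3) == -7                # arguments flipped: 3 - 10
```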
> On the other hand, a reasoning
> system in Joy would probably commonly find itself reasoning
> that "x y swap" is the same as "y x", but it could only realize
> this once it verified that "x" and "y" were simple programs that
> just pushed a thing onto the stack with no other effect;
> this might not be such an awful problem, since a reasoning system
> would probably want for other purposes information on how many
> items a program might push onto the stack.
Your error here is assuming that the meaning of "x y" is dependent on
"swap". In a concatenative language it simply isn't: "x y" may be rewritten
as "y x" if x and y are commutative under composition. "x y swap" may be
rewritten as "y x" if x and y are commutative under composition, except
that the 2 TOS items after "x y" are reversed. And so on.
The important thing is to realize that
_function_composition_equals_function_concatenation_ is the only rule in the
language. If you invent a rule which disregards that fact, as this one
does, you're going to have to place lots of qualifications on it. If you
only use reasoning which accepts this fact, your reasoning process will be
clear and clean.
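That rule -- concatenation is composition -- can be modeled directly in Python, with a program as a function from stacks to stacks (all names mine):

```python
# A Joy-style program is modeled as a function from stacks to stacks;
# concatenating programs is then just composing these functions, in order.
def push(v):
    return lambda stack: stack + [v]

def swap(stack):
    # exchange the top two stack items
    return stack[:-2] + [stack[-1], stack[-2]]

def concat(*programs):
    def run(stack):
        for p in programs:
            stack = p(stack)
        return stack
    return run

x, y = push(1), push(2)

# "x y swap" may be rewritten as "y x" because pure pushes have no
# effect beyond the values they leave on the stack:
assert concat(x, y, swap)([]) == concat(y, x)([])
```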
> > I also suspect that even the strength of your functional language,
> > functions, will be very hard to read and maintain without
> > automated help.
> > APL functions don't use many combinators, but APL
> > programmers recommend
> > deleting the program and rewriting it based on the comments
> > rather than
> > trying to understand the program. Forth programmers don't do that.
> confession... in working in my CLIP system, on several occasions I've
> found myself deleting lines and rewriting them after they didn't
> work on the first try, rather than trying to understand them :-)
> but, i think this reflects my own unfamiliarity with the system
> and also the inconvenience of the system's current syntax.
APL experience indicates that it may be a fundamental fact rather than a
matter of unfamiliarity.
My suggestion is to design your language to make that type of thing even
easier than it is now. That way you can borrow the slogan used by at least
one APLer: "APL: if you can say it, it's done." APL, in my limited
experience, is a real pleasure to work with.
> - iepos ("Brent Kerby")