Joy, Lambdas, Combinators, Procedures

btanksley@hifn.com
Sun, 6 Feb 2000 22:14:24 -0800


Subject: RE: Joy, Lambdas, Combinators, Procedures

> From: iepos@tunes.org [mailto:iepos@tunes.org]

> > Let me make clear my bias against a purely applicative 
> > language with a single disgusting analogy:

> I agree that a "simplification" to a language's syntax is of little
> use if it is artificial, making the language more difficult to
> interpret. An example of this kind of simplification might be
> helpful:

Yes, it was.  Thank you.

> Now, my view is that the simplification of a language to a
> purely applicative one is not like the simplification
> in this last example. I think that reducing a system
> to a purely applicative one may be truely useful (some
> reasons will be given at the end of the post). 

Convincing, too.  Thank you for taking the time to explain; I am much
enlightened.

> Still, it is not my view that non-purely-applicative systems
> are innately inferior... there may be other well-designed
> approaches that are also interesting (such as Joy's approach).
> My point is only that purely applicative languages are
> worth considering seriously;

A worthy point; my point is that for 20 or 30 years we've been considering
nothing but applicative languages.  To me, the idea that another applicative
language might solve the problems we've had with applicative languages in
the past seems kind of odd, perhaps even naive.

The interesting thing to me about your language isn't really its
applicativeness; other languages are at least as applicative.  The
interesting thing is the strong presence of combinators.  Perhaps your
language would be better called a combinative language -- most people, when
discussing languages, assume applicative and lambda-based.  Your language is
applicative and combinator-based; thus you say in one word 'combinator' the
essence of your difference from most modern language designs.  But I won't
mention that again unless you ask about it; you're entitled to your choice of
names, so long as they're reasonably accurate (and yours is).

> actually, the best way I
> know of to found a system is with a purely applicative language.

It may be that purity will have a value which is not found in the impure
versions, but which we've been searching for, for the past 20 years.  I find
it hard to believe; we're not the smartest people to come down the pike
since Socrates ;-).  Nonetheless, I'm less inclined to argue against trying
than I was.

> > > > > In case this was not clear, by "purely applicative 
> > > > > language", I mean
> > > > > a language in which all expressions are built from a 
> > > > > finite set
> > > > > of primitives using a binary construct ("function 
> > > > > application").
> > > > > An expression in such a language is most naturally written
> > > > > using a binary tree, with primitives as leaves.

> > Your purely applicative language has MANY other constructs, 
> > ranging from
> > combinators to parenthesis to definitions.  So your 
> > language doesn't fit
> > your reading of that definition either.

> I suppose you are referring to the informal language I used
> in my posts; strictly, this language is not purely applicative,
> but is a sugared variant of a purely applicative system.
> The specific "other constructs" (combinators, parentheses,
> definitions) that you mentioned as keeping the language from being
> purely applicative are worth saying a few things about:

> First of all, expressions that denote combinators (such as
> "B", "C", "W", and "K") are primitives in the language.
> Recall in my definition of "purely applicative" that
> primitives (meaning atomic expressions that are assigned 
> precise meanings) are allowed, as long as there are only a 
> finite number of them. In fact, primitives are essential
> in a purely applicative system (Joy also relies heavily
> on primitives (like "dup", "swap", "dip", "map", etc.), 
> rather than fancy
> constructs; this is in contrast to languages like C and even
> Haskell, which rely on myriads of fancy constructs).

WHOAH!  Now I see what you meant.  You really threw me off by using the word
"finite" there.  (I don't know of any languages which use an infinite number
of primitives, by ANY definition of 'primitive'.)

I suspect you'd dislike one of my potential changes to Joy: I wanted to
remove all of the stack manipulation words and replace them with a single
syntax convention:

LET
 dup == stack(a--aa);
 drop == stack(a--);
 swap == stack(ab--ba);
 rot == stack(abc--bca);
 4reverse == stack(abcd--dcba)
IN
 your code
END.

The number of 'primitives' is still finite, but is now not bounded by the
ones I choose (plus, I don't have to think of names).  I also want to let
the user define similar parsers: something like the way Forth parses
numbers, but user-definable.
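For what it's worth, that stack(in--out) convention takes only a few lines to
implement; here's a sketch in Python (the name 'shuffle' and the
list-as-stack representation are my choices, and I've renamed '4reverse' to
'reverse4' to make it a legal identifier):

```python
# Sketch of the stack(in--out) shuffle notation.  The top of the stack
# is the rightmost name on each side of '--'.

def shuffle(spec):
    """Build a stack-shuffling word from a spec like 'ab--ba'."""
    ins, outs = spec.split('--')
    def word(stack):
        depth = len(ins)
        # Bind each input name to its stack slot, then rebuild per outs.
        taken = dict(zip(ins, stack[-depth:] if depth else []))
        del stack[len(stack) - depth:]
        stack.extend(taken[name] for name in outs)
    return word

dup      = shuffle('a--aa')
drop     = shuffle('a--')
swap     = shuffle('ab--ba')
rot      = shuffle('abc--bca')
reverse4 = shuffle('abcd--dcba')
```

So any new shuffle word is just another spec string; no new names to invent.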

Anyhow, you have an interesting approach to 'primitives'; the way I see it,
each different type of primitive is a separate category; I don't group them
all simply by calling them 'primitives'.  Joy's primitives are all functions
(or all combinators, depending on how you view it; they're really the same
thing), and everything you define is also a function (and a combinator).
Essentially, there's no distinction between primitive words and user words;
the only distinction which exists in the system is the distinction between
syntax and words.  Joy has a lot of syntax; I personally suspect that I will
keep the quotation syntax, and I'll add the concept of immediate execution
and postponement, so that the coder can write words which act to modify the
compilation process, and then I'll throw in user-definable lexing (as shown
by the stack shufflers above).

I'll talk a little more later about your definition of "primitives".

> There is another way to eliminate parentheses which is much
> better. Simply write the application of "f" to "x" as "@fx",
> prefixing "@" to the function expression and parameter
> expression. Of course, any atomic expression would work
> in place of "@" (we chose "@" because it suggests "application").

Okay.  This is kind of interesting, because it makes currying explicit and
grouping implicit -- parentheses made grouping explicit and currying
implicit.  I can't say anything against the change, aside from commenting
that if you hadn't done it I would have continued to miss some interesting
things in your system.

I don't think that's due to any special qualities of the notation, though,
but rather the novelty.
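Incidentally, the @ notation is trivial to parse mechanically -- one symbol
of lookahead, no parentheses.  A sketch in Python (single-character
primitives assumed, trees as nested pairs):

```python
# Parse the '@' prefix notation into binary application trees.

def parse(expr):
    """'@fx' parses to ('f', 'x'); '@@fxy' to (('f', 'x'), 'y')."""
    def go(i):
        if expr[i] == '@':
            f, i = go(i + 1)   # function subtree
            x, i = go(i)       # argument subtree
            return (f, x), i
        return expr[i], i + 1  # a primitive is a single character
    tree, end = go(0)
    assert end == len(expr), "trailing input"
    return tree

def unparse(tree):
    """Inverse of parse: rebuild the '@' string from a tree."""
    if isinstance(tree, tuple):
        return '@' + unparse(tree[0]) + unparse(tree[1])
    return tree
```

Round-tripping the example from the post, "@@B@BW@@BC@BB" comes back intact.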

> In a system with primitives "B", "C", "W", and "K",
> the expression that would be written with parentheses
> as "B(BW)(BC(BB))" would instead be written:

>   @@B@BW@@BC@BB

> Noting that the rewritten version is not any larger than the
> original, this kind of notation may be of practical use
> not only in formal languages of systems, but in human-readable
> languages as well. Note that in this notation, "@@fxy"
> is the binary application (in a curried sense) of "f" to "x" and "y";
> if this kind of notation is used in human readable systems,
> shorthands for "@@" and "@@@" would be nice. 

> Third, definitions... Actually, I don't recall using any sort
> of definition construct in my language, but it is something
> that will be needed eventually... In any case, I don't
> think a special construct is needed for definitions.

It actually is special -- you're disregarding the fact that any or all of
your 'primitives' could actually be 'special constructs'.  You imply that by
calling a language feature a "primitive" and giving it a name, you evade any
complications it causes.

But I'm getting ahead of myself.

In this case, the special action of the Define primitive is modification of
the system dictionary, a dictionary which is specially distinguished because
all word lookups start with it, even though it has no name of its own.

> In some systems, it may be possible simply to use an equality 
> primitive (a binary function yielding a proposition) instead.

I suspect that the author of Joy intended this with his use of ==.  I don't
think he succeeded, and worse still, he didn't provide any other way of
defining things.  (While I'm listing shortcomings, I also didn't like the
lack of immediate execution -- like Forth's IMMEDIATE word.)

Speaking of immediate execution, how would your system handle Forth-like
metaprogramming?  Does the code have full access to all of the unparsed
source?

> In my toy system, I plan on using a little different approach:
> a primitive "def" will be provided, which is a function
> that takes two parameters (a name to define and a thing to
> assign it to denote), yielding a procedure instructing
> the system to modify the system with the definition
> (and by "procedure", I still mean a function onto a
> procedure completion, so that procedures can be sequenced
> with composition). Also, another primitive "undef" will
> be used to undefine terms... There may be other primitives
> for inspecting the system's namespace; these specific
> procedures actually will probably not be taken as primitives...
> probably rather I'll have a "dict" primitive (a field storing
> the current dictionary) which can be inspected and modified
> with ordinary field primitives. 

> Anyway, definitions, in a way, are not really that a fundamental
> part of a system (at least, not the system I'm working on),
> although they are quite a visible part. Actually, 

They actually are fundamental -- if they weren't, the compiler wouldn't know
how to look up the words you type.  As I mentioned, the System Dictionary is
a distinguished element in language theory.
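To make sure I follow the def/undef design you quoted, here's how I'd sketch
it in Python (all names hypothetical; the point is that "define" yields a
procedure, and nothing touches the dictionary until that procedure is run):

```python
# "def"/"undef" as ordinary values: each returns a procedure that,
# when run, modifies the system dictionary.

system_dict = {}

def define(name, value):
    """A function from (name, value) to a procedure; the dictionary is
    untouched until the returned procedure is actually run."""
    def proc():
        system_dict[name] = value
    return proc

def undefine(name):
    def proc():
        system_dict.pop(name, None)
    return proc

def run_sequence(*procs):
    """Sequencing procedures: run them left to right (composition)."""
    for p in procs:
        p()
```

The dictionary here is just a mutable field, as in your "dict" primitive.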

> a finished system would of course not have just one static 
> system dictionary; each user of the system at least would 
> have their own. The whole issue of how the dictionaries
> (and thus definitions) will be handled will be taken care of by
> the system's shell(s), which interact with the users, and should
> (in the end) be quite flexible. 

Right.  This is one problem with Joy.

> > > Once again, a purely applicative system (in the sense 
> > > that I meant)
> > > does not use names for its parameters. Purely applicative
> > > systems instead tend to use combinators (primitive functions such 
> > > as "B", which composes a function, and "C", which swaps 
> > > its parameter 
> > > order, and others) to form functions. With a handful of
> > > such primitive functions, it is possible to form any function
> > > that could have been formed with a lambda (named 
> > > variable) construct.

> > This is true.  If you'd played with J or APL you'd realize 
> > that it quickly
> > becomes FAR to complicated to track parameters that way -- 
> > but it's still
> > possible.

> I think this may be a good point. I admit that I've done very
> little practical work with systems that use combinators to
> form functions (rather than lambdas). But I'm hopeful...

I have no reason to deny that hope; not even APL and J provide a full set of
combinators, so they just might be deficient.  Perhaps a full set will make
it easy.  Joy certainly makes it look easy, even though it doesn't depend on
combinators for everything.

> > > > In addition, because function invocation is complicated 
> > > > by function
> > > > application, proofs written in the language have to deal 
> > > > with the double action.

> > > The system you're describing (C, I believe) is nothing like a
> > > purely applicative system. 

> > It's precisely and exactly (in this quote at least) purely 
> > applicative.
> > Note that I said nothing about names above -- the simple 
> > fact is that in an
> > applicative language, function invocation is always 
> > accompanied by function
> > application; in other words, the language can't emit code to run the
> > function until it has the parameters which will be passed 
> > to the function.

> I'm really not sure what you mean by "function invocation" as
> distinguished from "function application", and I don't know
> what you mean by "run the function"...

> By "function", I usually mean an abstract mapping from things to other
> things. The things that will be "ran" in the system will be
> procedures, not generally functions.

This is not something you mentioned before.  So your language distinguishes
between procedures and functions?  How?  Why?  Doesn't that have negative
implications as far as proofs go?  Wouldn't it make metaprogramming harder?

Is this distinction a necessary element of applicative languages of the type
you describe?  It's not present in any way in any currently existing
concatenative language.

> > (Named variables actually don't eliminate applicative 
> > purity; they eliminate
> > combinatoral purity, though, and I'm quite willing to agree 
> > that your language design can eliminate named variables.)

> Hmm... named variables do destroy applicative purity, at least if
> they are accompanied by a lambda construct to bind them. Anyhow,
> I think you agree that whether they destroy applicative purity or not
> they greatly clutter programs and make proofs about the system
> much more difficult.

I agree with that.

> > > > > Anyway, one clear benefit of using a purely 
> > > > > applicative language
> > > > > is that it has such an amazingly simple syntax.

> > > > I strongly suspect that you don't have a referent for 
> > > > "simple syntax".
> > > > Currying (the simplest known way to define an applicative 
> > > > language) is
> > > > indeed simpler than some of the complex parsing tricks some 
> > > > languages have
> > > > to do; however, currying is not enough by itself.  You also 
> > > > have to have
> > > > four other syntactic elements: one to delimit functions, 
> > > > one to define
> > > > names, one to count how many parameters a function has (and 
> > > > identify each
> > > > one), and one to change the curry order.  That's a total of 
> > > > five elements.

> I'm not really sure what you mean by each of those syntactic elements
> and certainly don't see that they are required in all purely
> applicative systems. I've given the syntax of a small purely 
> applicative language above, with these being some expressions in it:

>   @BW
>   @@BCC
>   @B@WB
>   K
>   @@B@BW@@BC@BB

> How are your 5 syntactic elements manifest in this language?

1) Currying is explicit.
2) Function delimitation is implied by a newline.
3) Name definition is not present (but you cover it later, and you don't
deny that it's needed); note that in the examples you give, the functions
are simply thrown away, no possibility of reuse.
4) Curry order modification is carried out by the absence of @ (as I
mentioned, you've only made grouping implicit).
5) Parameter counts are needed to actually run/use any of your functions;
your procedural sublanguage will need to understand them even if your
applicative language doesn't recognise them.  Also, your language will have
to keep count just in case the programmer tries to apply too many arguments
to a function.

I'm seeing something I didn't notice up till now, though.  I seem to see
that function delimitation isn't needed in either of our languages --
grouping/quotation is sufficient (and grouping in your language carries out
curry order modification).  This leaves your language with 4 outstanding
features and mine with 3; the missing element is parameter counts.

In your language parameter counts are very interesting and complex; a
parameter can be shunted through a _lot_ of combinators, some of them with a
runtime action (such as the 'select' combinator); and the end result has to
be known before the result can be run.  I'm not sure whether this means that
runtime combinators are impossible for you; and if it does mean that, I'm
not sure whether that denies you completeness or makes the programmer's job
harder.

> > Your language, by the way, adds another element which isn't 
> > listed in my
> > "must have"s: combinators.  Combinators seem to be special 
> > to your language,
> > like IMMEDIATE words are to Forth.  That definitely adds 
> > another element to
> > your language -- an element which Joy does not have.  (Joy includes
> > functions which act as combinators, but they do so without 
> > any special language support.)

> Combinators in my purely applicative language are treated as ordinary
> primitives; they do not have special language support any more
> than Joy has special support for "dip", "swap", "dup", and "pop".

Not true.  Combinators in your language change the order of execution; they
completely change the emitted code.  You don't seem to see a distinction
between primitives which drive the compiler and primitives which simply have
runtime behavior.  All of Joy's primitives have only runtime behavior; most
of your combinators require compile-time behavior.

This is the problem I was hinting about earlier, by the way.  You define
"primitive" such that it includes many very different words, including many
words which are actually syntactic in their effect.

So although Joy's primitives all include compiled support, none of them
require compile-time behavior.  They all do only one job: juggle the stack
at runtime.  If someone invents a new combinator which nobody in the past
ever imagined, it'll be a simple thing to add it to Joy (witness my little
stack-shuffler, which essentially allows the trivial definition of new
combinators), but to add it to a syntax-based language will be a headache.

> > > In a Joy test library, I saw this definition of "factorial":

> > >   [ [pop 0 =] [pop succ] [[dup pred] dip i *] ifte ] y

> > Good choice.  (That's an ugly way to do it and I wouldn't 
> > dream of writing
> > real code that way, but it uses the primitives so it's easy 
> > to translate and
> > is somewhat non-trivial.)  Note that a harder challenge 
> > might have been
> > appropriate for you: in Joy it's trivial to define new 
> > combinators; in fact,
> > there's no distinction between a user-defined combinator 
> > and a user-defined
> > function.  Is it equally easy in your language?

> Yes, it will be just as easy, provided some syntactic sugar is
> permitted in the language. Interestingly, Joy has a special
> construct for definitions, which could have been avoided...
> For example, one might define "popd" as "[pop] dip" in Joy by:

>   popd == [pop] dip

> Yet this could have achieved without treating "==" specially,
> if they had added a "define" primitive, in this sort of way:

>   "popd" [[pop] dip] define

> "define" could be treated as an atomic program (that has an effect
> other than on the stack, like "put" and "get") that modifies
> the interpreter's namespace... Of course, one would still like
> to have "==" as sugar. This is the approach I plan on using for
> definitions in my system.

But in Joy "==" is more than sugar -- it's a fundamental part of the
notation.  The model of Joy is that functions take a single parameter (the
stack) and return a single parameter (another stack); there's not supposed
to be a special dictionary available to them; the dictionary is supposed to
be immutable.  The '==' syntax is intended to make Joy programs look like
proofs; notice how the language syntax is set up so that it's not possible
to define words at arbitrary points?  That's to keep the fiction that the
programmer never actually defines words, but rather states equalities.

The only odd part here is that while a real proof could have arbitrary text
on either side of the equal sign, a Joy definition must always have a single
word on the left side.  Thus, a real equality could say:

  dup drop == id

While in Joy you can only say

  id == dup drop

Hmm, that might be an interesting extension ;-).  Multiword '==' definitions
would be the same as rewriting rules, which might be useful for
optimizations.  That would require a pretty sophisticated logic engine,
though.

At any rate, though, I agree completely with you that a real language would
certainly have a modifiable system dictionary.  Joy is more a proof of
concept -- or even a tool to be used for proofs of a concept -- and as such
it takes on certain restrictions which limit its usability.
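As an aside, the quoted factorial and 'y' definitions really do work; I
checked them with a toy Joy-style interpreter sketch in Python (my own
representation choices -- quotations as lists, a list as the stack -- and
only the primitives the example needs, so this is nothing like the real Joy
system):

```python
# A toy Joy-style evaluator, just big enough for the quoted factorial.

def run(prog, stack):
    for word in prog:
        if isinstance(word, (list, int)):   # quotations and numbers are literals
            stack.append(word)
        else:
            PRIMS[word](stack)
    return stack

def _ifte(stack):
    f, t, b = stack.pop(), stack.pop(), stack.pop()
    cond = run(b, list(stack)).pop()        # the test runs on a copy of the stack
    run(t if cond else f, stack)

def _dip(stack):
    q, x = stack.pop(), stack.pop()
    run(q, stack)                           # run q underneath the top item
    stack.append(x)

PRIMS = {
    'dup':    lambda s: s.append(s[-1]),
    'pop':    lambda s: s.pop(),
    'swap':   lambda s: s.extend([s.pop(), s.pop()]),
    'cons':   lambda s: (lambda q, x: s.append([x] + q))(s.pop(), s.pop()),
    'concat': lambda s: (lambda b, a: s.append(a + b))(s.pop(), s.pop()),
    'i':      lambda s: run(s.pop(), s),
    'dip':    _dip,
    'ifte':   _ifte,
    '=':      lambda s: s.append(s.pop() == s.pop()),
    '*':      lambda s: s.append(s.pop() * s.pop()),
    'succ':   lambda s: s.append(s.pop() + 1),
    'pred':   lambda s: s.append(s.pop() - 1),
}

# y == [dup cons] swap concat dup cons i
Y_BODY = [['dup', 'cons'], 'swap', 'concat', 'dup', 'cons', 'i']

# [ [pop 0 =] [pop succ] [[dup pred] dip i *] ifte ] y
FACT = [[['pop', 0, '='], ['pop', 'succ'],
         [['dup', 'pred'], 'dip', 'i', '*'], 'ifte']] + Y_BODY
```

Running `run([5] + FACT, [])` leaves `[120]` on the stack, exactly as the
test library promises.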

> > The definition of 'y' in Joy is:

> >  y == [dup cons] swap concat dup cons i;

> Similarly, in my system, the Y combinator might be defined as:

[...]
>   @@Define "Y" @@BM@@CBM
>   @@Define "M" @WI

Good one.  Thus I see that any definition in your language which includes a
combinator without all of its 'inputs' applied is itself a combinator.  This
answers my question of how your users will be able to create combinators.

> String literals would be a bit messier to unsugar if they were
> more than one letter long (one would have to use a variation
> of the "T" combinator, applying it to each of the character
> primitives in turn, using currying).

Don't bother -- sugar is good.  If the phrase "syntactic sugar" has bad
connotations, use "syntactic salt" instead.  :-)

> Of course, the factorial function could be written more concisely
> using a function like Joy's "primrec" as:

>   Primrec (= 0) (K 1) Times

> The parentheses could probably be omitted, the spacing giving
> the structure:

>   Primrec Q0 K1 Times

That's not a good idea -- spacing has no intrinsic superiority over parens,
but parens make the grouping patterns a lot more flexible.  Plus, using
spacing would destroy the ability to use named combinators in those places.

BTW, is the following correct?

@@@Primrec @= 0 @K 1 Times
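Assuming Primrec has the usual primitive-recursion reading -- Primrec p z f
n is z n when p n holds, and otherwise f n (Primrec p z f (n-1)) -- the
quoted definition transliterates directly (curried Python, all helper names
mine):

```python
# Transliteration of "Primrec (= 0) (K 1) Times" under the assumed
# primitive-recursion reading of Primrec.

def primrec(p, z, f):
    def go(n):
        return z(n) if p(n) else f(n)(go(n - 1))
    return go

K = lambda x: lambda _y: x            # the K combinator: a constant function
eq = lambda x: lambda y: x == y       # curried "=", so eq(0) plays "(= 0)"
times = lambda x: lambda y: x * y     # curried "Times"

factorial = primrec(eq(0), K(1), times)
```

Under those assumptions, factorial(5) comes out as 120.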

> > > Hopefully you do not see a purely applicative approach as all
> > > so bad now, seeing how similar in spirit it is to Joy.

> > Well, I'm not sure that a single example can show that 
> > much.  Can you refute
> > my claim that Joy is simpler than even a truly pure 
> > applicative approach
> > (and your language isn't pure, but rather adds the notion 
> > of combinators to an otherwise applicative base)?

> Well, that depends on what precisely you mean by "simpler".
> A purely applicative approach is simpler in the sense that
> the only constructs needed are primitives and function-application,
> while the Joy approach needs at least primitives, concatenation,
> and quotation...

You're ignoring your 'default' action, which is grouping -- what your
language does when there's no @ sign.  At the same time you're paying
attention to my default action, concatenation.

> You seem hesitant to accept that quotation is a real 
> construct of Joy...

Why do you say that?  I've listed it as such in every list I've given.  The
only time I acted surprised was when you claimed that it was "the
fundamental" construct of Joy.  It's clear that if any single concept has to
be identified with Joy, it's concatenation.

I've called it "function delimitation" in the list I gave.

> In Joy, as the authors have formalized it, expressions are built
> from primitives using two constructs: concatenation and quotation.
> In purely applicative systems, only application is needed.
> Parentheses often play the role of grouping that quotation does
> in Joy, but parentheses can be eliminated as I've shown above.

Very good point; I hadn't noticed that grouping was analogous to quotation.

Of course, this is as much a statement about Joy as a statement about purely
applicative languages.  It's clear that making concatenation explicit in a
purely concatenative language would also result in allowing quotation to be
implicit.  It's a matter of choosing which one you want implicit.

In your applicative language, you implicitly group and explicitly curry.  In
other applicative languages they explicitly group and implicitly curry.
Thus application requires grouping.  In Joy a programmer implicitly
concatenates and explicitly quotes; making concatenation explicit would
allow grouping to be implicit.

As an example, the Y combinator would (possibly) be written thusly in an
explicitly concatentive dialect of Joy:

 dup.cons swap.concat.dup.cons.i;

I haven't worked out whether more complicated forms also work this way; I
would have to do some thinking, and I don't feel like putting effort into
something so completely silly ;-).  I suspect that I'd have to use a prefix
operator rather than an infix one.

..dup cons ....swap concat dup cons i;

Sick.  :-)

It's interesting to note that by and large, quotation is rare in Joy (and
Forth), but grouping is very common in the applicative languages I've seen.
This makes the choice of which operation to make implicit easy in a
concatenative language, whereas it's much harder with an applicative
language.  In some problems grouping is common enough to make your @
notation look quite attractive; in other problems, implicit currying is
easier.

Of course, there are counterexamples for every so-called general rule; have
I simply mistaken the counterexamples for the rule?

> > Furthermore, having looked at the proofs written in the Joy 
> > manuals, can you
> > duplicate any of them as simply?

> I believe so... I'm not sure exactly which proofs you're 
> talking about...
> One proof that stands out was the "naturality of 'cons'". Here was the
> proof they gave in the manual (in "The Algebra of Joy"):

As you note, that's a poor proof on their part -- not a very good choice,
unless your point was that the author of Joy has made mistakes.

> Perhaps this was not such a good choice of thing to prove, since
> it is not really analogous to the proof in the Joy manual; if you
> have a favorite proof in the Joy manual, maybe I'll try that one.
> Anyway, that was probably not exactly the cleanest way to do 
> it, either.

Nice proof anyhow.  Thanks.

> Anyway, both the proof I just gave and the Joy proof are informal
> proofs about systems...

All proofs are informal from some point of view.  You just have to match
your assumptions up with your audience.  At least, that's what my books say.

> One interesting thing is that both proofs
> used variables (in the "epi-language", not in the formal 
> language itself);
> in the case of the purely applicative system at least, the proof could
> be rewritten without variables at all using the Combinatory Axioms
> (and abstracted versions of the Fst, Snd, and Nil axiom schemas).

I postulate that the purely applicative system is equivalent to the
concatenative system.  I won't try to prove that, formally or otherwise.

Thanks to your explanation I've clarified and refined my views.  One of the
reasons I thought concatenative logic was better was that I didn't know that
applicative logic could handle unspecified variables; it's clear now that it
can.  It merely does so differently, by looking forward in the text of the
function rather than back into the data of the program.

At this point I'm going to go out on a limb again and try to make a
distinction between the two systems: 

The applicative system requires full information about the complete
"program" (I define a program as a complete set of functions and data) in
order to run any part of it.

The concatenative system does not require this much information; all we have
to know is the portion of the program prior to the part you wish to execute.

In other words, the applicative system is in line with the ideals of Tunes.
The concatenative one doesn't contradict Tunes, but does allow execution
without complete understanding.

An applicative system only makes sense as a declarative system.  A
concatenative system also makes sense as an imperative one.

I think this last feature is what attracts me to concatenative logic.
Modern computers are by their nature imperative, and many of the problems
solvable by them are well-described by imperative processes ("programs").
(Quantum computers, according to my understanding, are much more
declarative.)  Concatenation gives a formal, functional view which
nonetheless comes closer to the actual processes taking place inside the
computer.

This gives concatenation a real edge, because it can also be used as a
declarative language; it's all in which words you choose to run.

But I may be speaking without complete understanding again ;-).

> This would have been a bit trickier, though, and I'm not
> really too familiar with the combinatory axioms, so I didn't
> attempt it. To give you a flavor of how this might work though,
> here are a couple combinatory axioms:

>   B(BB)B = C(BB(BBB))B

It's going to take me a LONG time to see that as more than noise.  (Not a
problem -- consider that most concatenative systems have "stack noise".)

>   BCC = I

This one makes sense, once I read below to recall what the single-letters
stand for.  (BTW, is it really _that_ important to use only a single letter
for combinators?  If so, there's a REAL advantage that concatenation
possesses over application. ;-)

  converse converse == id.
  (boolean) not not == id.

> The first one states the associativity of composition;

Ah.  In that case, would it be true that:

 concat concat == stack:(abc--cab) concat swap concat;

expresses the same thought in a concatenative form?  (Coincidentally, it
expresses the rule both for lists and for quoted programs.)  I find that to
be easier to read; is that just me?  Or am I just stating a special case of
a more general proof?
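Checking that equation on concrete lists is easy enough (a quick Python
sketch; quotations modeled as lists, helper names mine):

```python
# Both sides of:  concat concat == stack:(abc--cab) concat swap concat

def concat2(s):
    """Joy 'concat': pop [B] then [A], push [A B]."""
    b, a = s.pop(), s.pop()
    s.append(a + b)

def swap2(s):
    s[-1], s[-2] = s[-2], s[-1]

def rot_cab(s):
    """The stack(abc--cab) shuffle from the text."""
    c, b, a = s.pop(), s.pop(), s.pop()
    s.extend([c, a, b])

A, B, C = [1], [2], [3]

left = [A, B, C]
concat2(left); concat2(left)                          # concat concat

right = [A, B, C]
rot_cab(right); concat2(right); swap2(right); concat2(right)
```

Both sides leave the single quotation [1 2 3], i.e. A concatenated with B
concatenated with C, grouped either way.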

> for, from it it follows that

>   B(BB)B F G H = C(BB(BBB))B F G H
>   BB BF G H = BB(BBB)F B G H
>   B(BFG)H = B(BBBF) B G H
>   B(BFG)H = BBBF BG H
>   B(BFG)H = B BF BG H
>   B(BFG)H = BF(BGH)

Sorry, I'm lost.  I don't know most of the combinators, certainly not by
their ISO standard one-letter names.  I also forgot -- just for the moment
-- the name of the Forth/Joy word which has the stack effect I wrote above
(abc--cab); it's either 'rot' or '-rot', and rather than try to remember I
simply wrote an explicit stack juggle.

> or in other words, that

>   (F.G).H = F.(G.H)

Ah, I get that!  :-)
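In fact both axioms can be machine-checked with a small symbolic reducer (my
own sketch: terms are nested pairs, heads are the one-letter combinator
names, and reduction terminates for the examples used here, though not for
every combinator term):

```python
# A symbolic reducer for B, C, W, K, I, M.

from functools import reduce

ARITY = {'B': 3, 'C': 3, 'W': 2, 'K': 2, 'I': 1, 'M': 1}

def app(*ts):
    """Left-associated application: app('B', 'W', 'x') means ((B W) x)."""
    return reduce(lambda f, x: (f, x), ts)

def spine(t):
    """Split a term into its head and argument list."""
    args = []
    while isinstance(t, tuple):
        t, x = t
        args.append(x)
    return t, args[::-1]

def normalize(t):
    """Reduce a term to normal form (assumes the term terminates)."""
    head, a = spine(t)
    a = [normalize(x) for x in a]
    if head in ARITY and len(a) >= ARITY[head]:
        if   head == 'B': t = app(a[0], (a[1], a[2]), *a[3:])  # B f g x = f (g x)
        elif head == 'C': t = app(a[0], a[2], a[1], *a[3:])    # C f x y = f y x
        elif head == 'W': t = app(a[0], a[1], a[1], *a[2:])    # W f x = f x x
        elif head == 'K': t = app(a[0], *a[2:])                # K x y = x
        elif head == 'I': t = app(a[0], *a[1:])                # I x = x
        elif head == 'M': t = app(a[0], a[0], *a[1:])          # M f = f f
        return normalize(t)
    return app(head, *a) if a else head
```

Applied to fresh variables, BCC and I agree on f x y, and both sides of the
associativity axiom agree on F G H x, each reducing to F (G (H x)).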

> These two axioms are not particularly special, but with a
> handful of ones like them, a sort of completeness follows;
> it would be possible to directly show that

This completeness is what you're talking about when you refer to a "finite"
number of primitives, right?

> Anyway, hopefully this example has helped you gain a feel for
> the elegance a purely applicative system can have.

Definitely has.  Thank you again.

> > I've recited already the five syntactic elements that every 
> > applicative
> > language needs in order to produce code; I've also recited the three
> > elements that a concatenative language needs.  I've also 
> > made the claim that
> > your language needs one additional element (combinators) 
> > which is distinct
> > from any of the other five needed by a truly pure 
> > applicative language.

> > Can you make any counterclaims?

> This was mentioned above... I don't see that those five
> "syntactic elements" are essential in every applicative language
> or how they are evident in the one I've presented...

They were certainly present in the language you _had_ presented.  Four of
them are still present in the language you've created for this post;
concatenative languages still need only three.

> And, combinators are not particularly special elements in
> the language I've given, but ordinary primitives.

But not all of your 'ordinary primitives' are especially ordinary.  It seems
to me that you're bundling a lot of very different functions under the name
of 'primitives', and claiming that because you can call them by a single
name they're therefore simple.

It seems to me that the compiler will have to be specially written in order
to merely use many of the combinators.

> Okay... Earlier I said that I'd give some reasons to like
> purely applicative systems at the end of the post, so here are some
> (these are reasons to prefer purely applicative systems over
> similar systems that are mainly applicative, but with some extensions,
> such as infix operators):

>   1) The syntax is simpler. In a computing system, this may be
>      beneficial because it will make parsers, compilers, 
>      and other programs that deal with expressions (such as
>      a program that transmits information between machines,
>      over networks) easier to construct. 
>        (this is quite a minor one, compared to the next).

Yes, although I suspect that a certain type of simplicity of syntax helps
make proofs easier.  So it's not _that_ minor.

>   2) It ensures that the system can be made combinatorialy complete
>      with a finite set of combinators. If a system has an
>      infix operator that cannot be used as a first-class function,
>      then it is not generally possible to emulate functional
>      abstraction with combinators (rather, one would likely
>      need to resort to a lambda construct).

True.

Here are two reasons to prefer a concatenative system (adding to your
reasons):

3)  Translation to machine code is simpler than translation of an
applicative system.  This is minor; Tunes doesn't want to use something
simply because it's simpler.  I personally do want that, but I'm influenced
by Chuck Moore here.
4)  The most commonly used metaprogramming systems currently are
concatenative, although almost no languages now are concatenative.  I'm
talking about Postscript -- probably the most commonly used language (since
it's used almost every time people print), and almost never used in any way
other than metaprogrammed/machine generated.  Thus, practical experience
seems to indicate that people writing metaprograms are comfortable using
concatenative languages.

> So, what do you think of purely applicative systems now?

I'm happier with your ideas than I was before -- but I'm still skeptical.
You see, all the languages developed thus far (with the exception of Forth
and Postscript) have been applicative languages, and I don't really like the
results.

But don't let my skepticism slow you -- one problem I have is an
unwillingness to work on something which isn't new to me.  Innovation is my
goal, but it's not the only good goal -- you're working on refinement, and
that's also a noble goal.

> - "iepos" (Brent Kerby)

-Billy