FW: Joy, Lambdas, Combinators, Procedures
btanksley@hifn.com
Sat, 12 Feb 2000 14:16:19 -0800
Okay, this (should be) the real copy. Unless I hit the wrong button again,
in which case this message is simply your recommended daily allowance of
irony.
This is a greatly fun discussion for me. If anyone on this list disagrees,
please let me know and we'll take this into email.
Oh, I'm going to do a lot of clipping and summarizing, including the good
examples/informal proofs you sent. I appreciate them, but there's no need
for me to re-include them just because I like proofs. This won't reduce the
size of the message enough, but there's only so much I can do about that.
> From: iepos@tunes.org [mailto:iepos@tunes.org]
> Yeah... I'm glad you see what I mean now... one thought that might
> be relevant: a system might like to formalize as primitive expressions
> the number literals 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...
> strictly, this is not allowed in a purely applicative system since
> it would require an infinite number of primitives. Instead, one
> could take as primitives only the numbers 0 through 9 (and maybe
> "ten"), and then refer to the other numbers using multiplication and
> addition. For instance, the number 532 could be referred to as
...and with that I'll leave that idea behind, strongly hoping to never see
it again. Sorry, but as much as I like the theory you're bringing to the
table, I really, really hate messing up a good language with theory. Don't
forget that the purpose of a programming language is to allow people to
express things to the computer. Having a finite number of "primitives" is
not required by any theory you've stated; it's simply an arbitrary
restriction you've placed upon yourself.
> > I suspect you'd dislike one of my potential changes to Joy:
> > I wanted to
> > remove all of the stack manipulation words and replace them
> > with a single
> > syntax convention:
> > LET
> > dup == stack(a--aa);
> > drop == stack(a--);
> > swap == stack(ab--ba);
> > rot == stack(abc--bca);
> > 4reverse == stack(abcd--dcba)
> > IN
> > your code
> > END.
> Well, your "stack(--)" construct would then be much like a
> lambda construct
> of applicative systems (note that it requires the use of variables
> like "a", "b", and "c"). This would probably cause
> complications in proof systems.
Not at all, because the pseudovariables would not intermingle at all with
functionality. It's nothing more than a new naming convention for a subset
of the combinators, really. The 'C' combinator under this system has the
name 'ba' or 'stack:(ab--ba)'. (In my original design and implementation
under Pygmy Forth I accepted input of the simple form 'abc--cba' rather than
requiring the 'stack:()' notation.)
Do you see what I mean by "new naming convention"? These are nothing more
than different names for the same old combinators. It's fortunate that I
can write the combinator names in a way which resembles their actual
operation; it's an implementation detail that none of the names are actually
defined in the dictionary.
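To make the "naming convention" point concrete, here's a rough Python sketch of how such names could be read straight off as stack permutations. (The stack(--) notation is from this discussion; the interpreter itself, including the choice that the rightmost letter is the top of the stack, is my own illustration.)

```python
def stack_shuffle(pattern):
    """Turn a pattern like 'ab--ba' into a stack-permuting function.

    Each letter left of '--' binds one stack item (rightmost letter
    is the top of the stack); the right side lists the rearranged
    result.  Only letters bound on the left may appear on the right.
    """
    inputs, outputs = pattern.split("--")

    def shuffle(stack):
        n = len(inputs)
        bound = dict(zip(inputs, stack[-n:]))   # bind letters to items
        return stack[:-n] + [bound[c] for c in outputs]

    return shuffle

swap = stack_shuffle("ab--ba")    # the combinator this post names 'ba'
dup  = stack_shuffle("a--aa")
rot  = stack_shuffle("abc--bca")

print(swap([1, 2, 3]))   # [1, 3, 2]
```

Note that this only reads patterns whose right side is letters bound on the left -- which is exactly the restriction that keeps it a naming convention for combinators rather than a lambda construct.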
> However, such a construct would still be quite interesting...
> It could be generalized so that arbitrary Joy expressions were
> permitted on the right of the "--"; then, for instance,
> "stack(ab -- b a foo a)" would be the program that takes two
> stack items, swaps them, and then runs the program "foo",
> but leaving an occurrence of "a" on the stack;
> it could also be written as "dupd swap foo", I think.
No, that WOULD be similar to a lambda construct. I would NOT want that
thing in the core language. Of course, it might be in one of the extension
libraries -- at times it would be quite useful. For example:
concat concat == stack(abc -- ab concat c concat)
would express a proof we look at below without any stack juggling. Of
course, it does the exact same thing as the stack juggle.
> One interesting question to ask about Joy would be,
> "Could all uses of 'stack(--)' be eliminated using standard
> Joy programs like dup, pop, swap, and dip?" ...
> I'm guessing the answer is "yes"; I'd be curious to
> know of a specific algorithm that could be used to carry
> out such eliminations
The answer is yes; DIP in specific can be used to build any combinator,
since it removes one element from the stack and holds it safe. (Forth
doesn't have DIP, so Forth adds a return stack, which gives the same
functionality.) If I ever get the time to write this compiler I'll write
the algorithm to do this; it's pretty simple. Last time I tried I cheated
by using assembly and hitting the registers directly (thus forcing a limit
of 4 stack items per swap-op -- don't you love Intel?).
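A sketch of the idea, in Python rather than compiler output (modeling the stack as a list with the top at the right; the particular derivation of rot from swap and dip is just one I know works, not the output of the algorithm):

```python
def dip(stack, quoted):
    """Joy's dip: set the top item aside, run the quoted program on
    what's left, then put the saved item back on top."""
    saved = stack.pop()
    quoted(stack)
    stack.append(saved)

def swap(stack):
    a = stack.pop()
    b = stack.pop()
    stack.append(a)
    stack.append(b)

def rot(stack):
    # rot == [swap] dip swap : dip holds the top item safe while the
    # two items underneath are swapped; a final swap finishes the cycle
    dip(stack, swap)
    swap(stack)

s = [1, 2, 3]
rot(s)
print(s)   # [2, 3, 1]
```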
> > The number of 'primitives' is still finite, but is now not
> > bounded by the
> > ones I choose (plus, I don't have to think of names).
> I'm not sure what you mean by this. There are an infinite number
> of expressions of the form "stack(...--...)", so you can't
> take them all as primitives (in the ordinary sense of "primitive").
If it's impossible to have an infinite number of primitives, then why did
you list "a finite number of primitives" in the list of features your
language has? Seriously, though, 'primitives' is perhaps a little too
general -- a 'primitive operation' is certainly possible to express in
syntax.
> Of course, you could just take "stack(--)" as a new special
> construct in addition to Joy's concatenation and quotation.
This is essentially what it is. As such it's no different from any of the
sets of primitives in either of our languages. Except, of course, that this
primitive, like all of the Joy words (and unlike Joy's quotations and
definitions), adds no parsing effort outside of its own letters.
> > Anyhow, you have an interesting approach to 'primitives';
> > the way I see it,
> > each different type of primitive is a separate category; I
> > don't group them all simply by calling them 'primitives'.
> It is true; most systems would need quite a few primitives, which
> one might want to group into categories when specifying the system.
The reason I'm calling your grouping into question is complexity; I believe
that your system has very important categories which you're ignoring. For
example, your primitive functions bear little resemblance in behavior to
your primitive combinators. Your functions simply use their parameters;
your combinators have to select and choose parameters.
> > Joy's primitives are all functions
> > (or all combinators, depending on how you view it; they're
> > really the same
> > thing), and everything you define is also a function (and a
> > combinator).
> Usually a distinction is made between a "function" and
> a "combinator": a function is a mapping from parameters to results,
> while a combinator is a special kind of function, a function
> that can be expressed using only closed lambda expressions
> (or, in Joy-style systems, a program that can be expressed
> using a "stack(--)" expression with nothing occurring on the
> right side except variables bound on the left; for instance,
> "dup", "swap", "pop", "rot"). The precise meaning of "combinator"
> may vary a bit among uses, but calling all functions "combinators"
> is getting a bit nonstandard.
This is practically true for applicative languages, but breaks down almost
completely for concatenative ones. In such languages, ALL words (including
so-called combinators and special forms) are actually functions from a stack
to a stack. In addition, according to the foundational axioms of
K-complexity theory, all functions can be built from a small set of specific
combinators (S and K, I seem to recall), so all functions can
therefore be regarded as combinators in any system (including yours).
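For the record, the S/K claim can be seen directly; here's a minimal sketch in curried Python (the derivations of I and B from S and K are the standard ones, nothing of mine):

```python
# The two basis combinators, curried:
S = lambda f: lambda g: lambda x: f(x)(g(x))   # S f g x = f x (g x)
K = lambda x: lambda y: x                      # K x y = x

# Identity falls out as S K K:
#   S K K x = K x (K x) = x
I = S(K)(K)

# Composition (the B combinator) is S (K S) K:
#   B f g x = f (g x)
B = S(K(S))(K)

print(I(42))                                    # 42
print(B(lambda n: n + 1)(lambda n: n * 2)(5))   # 11, i.e. (5*2)+1
```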
If your system is going to use nolambda as its base language, then, I would
have no quibble; everything for you would be a combinator (since nolambda
appears to be based on K-complexity theory). However, odds are that you're
going to be a bit more practical, and thus you're going to have a
distinction between functions and combinators.
> > Essentially, there's no distinction between primitive words
> > and user words;
> Also in my system, I don't plan to make a sharp distinction between
> words that are "native" to the system (if there are such words)
> and words that have been defined meaning by the user. In general,
> I've meant to call all words "primitives" (by "words", meaning
> expressions that have no grammatical structure in the system),
> although I guess maybe I've deviated a bit from this terminology...
I would simply call them "words". A word which is primitive to the system I
would be tempted to call a "primitive word", although I have no innate
objection to simply calling it a "primitive" (so long as the definition's
explicit).
> > suspect that I will
> > keep the quotation syntax, and I'll add the concept of
> > immediate execution
> > and postponement, so that the coder can write words which
> I'm not sure what you mean by "immediate execution" and
> "postponement"...
In Forth, words can be defined which act immediately even in the middle of
compilation; for example, the Forth word POSTPONE acts immediately to scan
ahead in the input stream, look up the next word, and compile its action
into the current definition. (Note that I not only gave you the definition
of 'immediate execution', I also gave you the definition of 'postponement';
it's called postponement because it can postpone the compile-time action of
the next word until runtime.)
This would allow me to include language modifications like:
if [test] then [body] else [alternate].
(Note that in order to do this, 'if' has to scan ahead.)
It will also allow me to avoid quotation, which will be useful for my idea
of the LLL. I'll just do things the way Forth does them: you're not allowed
to treat a definition as a list. This reduces expressive power, but with
the macro ability provided by immediate execution and postponement this
should matter not at all.
> > > that will be needed eventually... In any case, I don't
> > > think a special construct is needed for definitions.
> > In this case, the special action of the Define primitive is
> > modification of
> > the system dictionary, a dictionary which is specially
> > distinguished because
> > all word lookups start with it, even though it has no name
> > of its own.
> This is true. The system dictionary(s) would be a bit special
> in that they will need to be referred to by the system whenever
> it needs to interpret something the user has written. However,
> the fact that there may be more than one system dictionary
> (for different users) tends to make them seem less special.
Not true; the fact is still that all lookups start at the system dictionary,
and allowing each user to have their own system dictionary doesn't change
that at all, any more than giving them their own computer changes the fact
that MacOS tends to crash ;-).
> But anyhow, that definitions and system dictionary were "not
> special" was
> not my main point; my main point (which you have not contradicted)
> was that a special language construct is not needed for definitions.
I have contradicted that; although it's true that no special syntax is
needed (an obvious point proven many times over), the primitives which
access the system dictionary are in a special class of operations, a class
which can't be proven using the same rules which apply to everything else:
they take effect on a mutable structure which can affect every word in the
following source (in a potentially very confusing way).
In a very real sense it's impossible to 'reason' about a word "Define".
> > -- you're disregarding the fact that any or all of
> > your 'primitives' could actually be 'special constructs'.
> I'm not quite sure what you mean by this... I usually do not call
> expressions "primitives" if they have a grammatical structure
> (i.e., if they are formed from parts that independently have
> meaning)...
A grammatical construct is a special construct, but a primitive word can
have grammatical effect. In this case, most of the words you call
combinators have grammatical effect, in that they affect the parsing of all
the words which follow them.
> > You imply that by
> > calling a language feature a "primitive" and giving it a
> > name, you evade any complications it causes.
> I don't recall saying that; but, it is true that replacing a
> fancy language
> feature with a primitive or two can avoid some (but usually not all)
> complications associated with the feature.
I disagree with that. What you call primitives are in many cases themselves
fancy language features. The fact that all of your combinators can be
expressed in terms of a few simple combinators doesn't change the fact that
all combinators -- especially including those few ones -- require special
parsing from the compiler.
> > Speaking of immediate execution, how would your system
> > handle Forth-like
> > metaprogramming? Does the code have full access to all of
> > the unparsed
> > source?
> Probably yes... As an example, if a user has been working on writing
> a complicated program, he would probably have given a name to
> a field for the program's text. Conceivably, this name could
> occur within the text itself, so that the program could refer
> to its own text cleanly.
This is not Forth-like; in fact, it's a superset of Forth's capabilities
which will force you to re-parse the file every time it's used. That's
pretty brutal. Forth's metaprogramming only parses text which the
interpreter hasn't touched yet. I don't believe that this is even possible
with your system; all text has to be parsed before anything can be
evaluated.
The nice thing is that it might be possible for you to manipulate the parsed
structure at compile-time, given the capability to handle compile-time
execution. This can be far more effective than text parsing, especially if
you have any syntax in your language.
I'm afraid that this implies procedural operation (because it implies
'execution' of code, something which isn't obviously functional), and you
haven't covered how your system handles procedural operation yet.
> Of course, this is not to say that users will need to write
> complicated programs very often; in more cases than with
> traditional systems hopefully, a user would be able to get what he
> needs done by issuing a few commands to a shell, without resorting
> to a full-screen editor, and without naming his "program".
As with APL, I fully expect one-liners to be very common. However, as in
APL, I expect most users to name their one-liners, and expect to use them
later. APL has the concept of workspaces; everything you create in the
interpreter is saved in the workspace and can be used at any time later.
You might look at APL for inspiration (unfortunately, J doesn't use
workspaces).
http://juggle.gaertner.de/ is the address of a good J page. I got there from
dmoz.org.
You have a good reason to suspect that you won't need Forth-style
metaprogramming very often: Forth has no support for modifiable functions,
and you have partial support. Your system doesn't currently have any
obvious ways to reach inside a completely applied function and modify it,
but you can play with the variables about to be passed to a function. (In
this respect, Joy is clearly _much_ more expressive, since manipulating a
quoted program as a list allows arbitrary modification.)
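A rough Python analogue of what I mean, with a quoted program modeled as a plain list of words (the names and the list modeling here are mine, not Joy's):

```python
def run(quoted, stack):
    """Execute a quotation: each word is a function from stack to stack."""
    for word in quoted:
        stack = word(stack)
    return stack

inc = lambda s: s[:-1] + [s[-1] + 1]
dbl = lambda s: s[:-1] + [s[-1] * 2]

prog = [inc, dbl]            # like the Joy quotation [inc dbl]
print(run(prog, [3]))        # [8]

# Because the quotation is just a list, other code can rewrite it
# before running it -- the "arbitrary modification" in question:
print(run(prog + [inc], [3]))            # [9]
print(run(list(reversed(prog)), [3]))    # [7]
```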
> On the other side of metaprogramming, languages in the system will
> probably provide ways of referring to the systems own
> low-level structures, including, for instance, procedures for
> peeking and poking at the system's raw memory, or doing raw
> I/O. Using these procedures, the system could change the
> way it works (it could even change itself so that it did not
> work at all).
That's (provably) always a potential 'feature'. :-)
> It would be hoped that the system's activity could be formalized
> well enough so that the system could safely change itself
> with these procedures.
I hope so, yes. Until the theory is worked out, it can be nice to have the
raw capability, even if we know that someday it'll go away.
> which needs it to look up the words users types, but the shell
> isn't really that fundamental a part of the system; not nearly
> so fundamental as, say, the system's "compiler", which
Why would you claim that? I concede that your system can run without names,
and thus without dictionaries of any kind, but any source with names of any
kind is heavily affected by the system dictionary. If you provide _any_
primitives of _any_ kind which modify that dictionary, then you must
acknowledge that those specific primitives are special.
I have to agree with you that definitions don't _have_ to be included in
either of our systems, though. It would be a bear to work with, but it
would be *possible*.
> > > things. The things that will be "ran" in the system will be
> > > procedures, not generally functions.
> > This is not something you mentioned before. So your
> > language distinguishes
> > between procedures and functions? How? Why? Doesn't that
> > have negative
> > implications as far as proofs go? Wouldn't it make
> > metaprogramming harder?
> This is quite a few questions... I'll see if I can explain...
Grin.
> In my system, I might like to talk about the function of squaring
> (multiplying a number by itself); this is a function and
> not a procedure. By "procedure", I mean a specific command
> that something should be done, or the idea of that specific thing
> getting done. In this sense, here are some examples of some procedures
> (written in English):
> - Go home.
> - Reboot yourself.
> - Output "Hello World" to the user.
"Square a number." Okay, I'm with you. So 'procedure' connotes a process
taking place in time, whereas a function could denote a step of that
process, but doesn't state anything about time.
> These are procedures and not functions (except perhaps in
> the sense of a function taking another procedure as the next thing
> to do, yielding a completion, if one thinks of procedures that way).
I need to stop a moment and check something with you.
So to make a function act procedurally, you need to transform it into a
"completion" by attaching a next-thing-to-do to it, correct?
That makes sense to me... Interesting.
Okay, I see a problem. Not all procedures can be expressed as functions --
for example, printing to the screen. That's a pure procedure, with nothing
functional about it. Let me study how you handle that (next paragraph)...
> And, in this sense, the idea of "Output" (like the "printf" of "C")
> is not itself a procedure, but a function that takes a string
> and then yields a procedure.
Okay, so it's possible to have computed procedures. This only makes sense.
Let me try to write a brief program:
& !@Output @stringify @@+ 4 5 stop
(In this program, I'm using "!" as a prefix, dyadic operator which binds a
"next action procedure" to a function, producing a completion; and "&" as a
prefix, monadic operator which executes a single procedure. 'stop' is a
predefined procedure which you've discussed before.)
This doesn't work -- (@Output x) is a function, as required, but as a
function it only acts to return a procedure. That's not useful right away;
we want to _execute_ a procedure, not return one.
Hmm. Perhaps you need a function/combinator which acts to execute a
procedure. I'll call it "exec".
&
! @exec @Output @stringify @@+ 4 5
stop
Okay, that makes a little more sense. (I'm not allowing myself to think
about the logic involved in a function which executes a procedure -- that
doesn't even begin to make sense -- but it's the first thing I thought of.)
However, in addition to that odd function, I've added two punctuation marks
to your language. Thus, I must have gotten something wrong.
> Joy's setup is cleaner than C's of course; functions are emulated
> using procedures that pop and push things off the stack. A procedure
> could be talked about without being executed by using []s.
No, procedures and functions are the same in Joy; nothing's being emulated.
Any source may freely be executed or reasoned about. You're confused by the
fact that concatenative languages can be viewed freely as both procedural
and functional; as a procedural language they have words which are
procedures which take and return variable numbers of parameters, but as
functional languages they have words which are functions and take and return
a single parameter, an item of type "stack".
Words in Joy are thought of in two ways: first, as procedures which pop and
push things onto and off of a single system stack; and second, as functions
which take a stack as a parameter and return a new stack as a result.
In other words, Joy functions are also conceivable as abstract mappings.
The difference is that Joy's functions can also be pictured as very
efficient procedures, _with no semantic change_. That's very significant;
in fact, it's a HUGE step closer to a complete formal view of the way
computers actually operate, with each operation taking as input a "state"
object and producing another "state" object as output.
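The two readings can be put side by side in a quick Python sketch (my own modeling, with the stack as a list): every word is a function from stack to stack, and concatenation of programs is nothing but composition of those functions.

```python
def compose(f, g):
    """Concatenation of two Joy-style words: run f, then g."""
    return lambda stack: g(f(stack))

def push(n):
    """A literal, viewed as a function on stacks."""
    return lambda s: s + [n]

add = lambda s: s[:-2] + [s[-2] + s[-1]]

# The program "2 3 add" as one stack -> stack function:
prog = compose(compose(push(2), push(3)), add)
print(prog([]))       # [5]
print(prog([10]))     # [10, 5] -- same word, any starting "state"
```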
It's also mildly interesting that adding explicitly non-functional words to
Joy does not destroy the functional character of the language; those words
may be trivially discovered in any source text, and reasoning about them
prevented or special-cased. They fit in well with the functional words, and
the same reasoning can apply (so long as you don't carry out the
non-functional aspect).
> But, in a purely applicative system, this setup seems unnatural;
> functions are primitive in the system (having connections with
> the system's application construct) and do not need to be
> emulated using procedures.
Procedures, on the other hand, have to be "emulated" (whatever that means)
using a complicated system of built-in operations, which are not functions.
This, to use your own words, seems unnatural.
> I still do not quite get exactly what you mean by "parameter counts".
A function can't be 'executed' in any way I know of until all of its
parameters are filled (otherwise its value is undefined). Thus you have to
keep count of parameters. This is more complicated than it might seem; you
can't simply trace through the code and keep a type stack, as you can with
Joy; you have to parse the entire source and consider a potentially huge
tree of parameters.
For example, in your language, is this an error?
@@@ackermann 4 5 6
(Ackermann is a two-parameter function.)
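To spell out what I'm asking: in a curried reading, the extra argument isn't a parse error at all; it only blows up when evaluated. A Python sketch of the curried two-parameter Ackermann (the function is standard; the curried spelling is mine, to mirror the @ notation):

```python
def ackermann(m):
    """Curried Ackermann: ackermann(m)(n), mirroring @@ackermann m n."""
    def inner(n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1)(1)
        return ackermann(m - 1)(ackermann(m)(n - 1))
    return inner

print(ackermann(2)(3))   # 9

# The analogue of over-applying, as in "@@@ackermann 1 1 2": the parse
# is fine, but applying the *result* (a number) to 2 only fails when
# that application is actually evaluated.
try:
    ackermann(1)(1)(2)
except TypeError:
    print("over-application caught at evaluation time, not parse time")
```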
> > In your language parameter counts are very interesting and
> > complex; a
> > parameter can be shunted through a _lot_ of combinators,
> > some of them with a
> > runtime action (such as the 'select' combinator); and the
> > end result has to
> > be known before the result can be run. I'm not sure
> > whether this means that
> > runtime combinators are impossible for you; and if it does
> > mean that, I'm
> > not sure whether that denies you completeness or makes the
> > programmer's job
> > harder.
> Not sure what you mean by "select" combinator or "runtime"
> combinators.
A 'select' combinator is an example of a runtime combinator. Think of the
Forth word 'PICK', which takes an integer "n" and pushes onto the stack the
n-th item. Joy has no such combinator, although it would be easy enough to
build one by using a loop and 'dip'. The point is that sometimes you don't
know which item to take until runtime.
> There is one issue that I think may be underlying a lot of your
> comments...
> In a Joy system, it is easy for an interpreter to tell
> what to do just by looking at an expression; for, an expression
> In contrast, in the type of system I'm describing, it may not always
> be easy for an interpreter to tell what to do just by looking
> at an expression. An example may help... consider the expression
That was essentially my point -- not only that it's harder to "compile" this
language, but also that it's harder to merely parse it.
> > > Combinators in my purely applicative language are treated
> > > as ordinary
> > > primitives; they do not have special language support any more
> > > than Joy has special support for "dip", "swap", "dup", and "pop".
> > Not true. Combinators in your language change the order of
> > execution; they
> > completely change the emitted code. You don't seem to see
> > a distinction
> > between primitives which drive the compiler and primitives
> > which simply have
> > runtime behavior. All of Joy's primitives have only
> > runtime behavior; most
> > of your combinators require compile-time behavior.
> Again, my language provides a way to express abstract functions,
> procedures, and other things. "compile-time behavior" and
> "runtime behavior" really have nothing to do with the language.
> So, I reaffirm that combinator primitives are ordinary primitives
> and do not have special language support; however, it is true
> that they will probably need to be dealt with specially by an
> interpreter or compiler.
Runtime behavior may very well be irrelevant to your compiler (although I
don't see the use of a language without runtime behavior); however, it's
literally impossible that your language has no compile-time behavior. It
has to at _least_ have the behavior of reading characters from the source
file and redirecting them to /dev/null. From what you've described, it has
a LOT more behavior than that. And my point is precisely that parsing your
language is difficult. The main thing which makes it hard is your
combinators, which in many cases have the effect of reordering the parse.
> > So although Joy's primitives all include compiled support,
> > none of them
> > require compile-time behavior. They all do only one job:
> > juggle the stack
> > at runtime.
> Well, some Joy primitives do other things than juggle the stack
> (such as "put", "get", and "gc"), but I see your main point, that
> every Joy primitive represents a program that can be easily
> compiled, while the same is not true for my system.
'gc' is a no-op (in theory). 'put' and 'get' are not formalized in any way;
however, considered simply as stack-effect machines, they're quite simple.
The interesting thing is that they _can_ be considered that way; any
"rewriting rule" (a misnomer which I'll discuss later) which applies to a
function with the same stack effect will also apply to 'get' or 'put'. This
is not true for the applicative language; it demands truly functional
behavior.
> > If someone invents a new combinator which nobody in the past
> > ever imagined, it'll be a simple thing to add it to Joy
> > (witness my little
> > stack-shuffler, which essentially allows the trivial
> > definition of new
> > combinators)
> "my little stack-shuffler", i.e., the dreaded lambda
> construct in disguise ;-)
Grin. I do intend to allow enough power to permit the user to add a lambda
construct parser -- after all, as much as I despise lambda ;-), there are
many problems where it's either the most appropriate solution or a very
well-studied one.
> > but to add it to a syntax-based language will be a headache.
> I'm not sure what you mean by this. I guess by "syntax-based language"
> you mean a non-concatenative one (for instance, my system);
No -- I simply mean any language which requires the use of rewriting rules
(or something even nastier, such as C's syntax) requires the compiler to
know about that syntax.
> I don't
> see how it would be such a headache to add a new combinator in
> my type of system... all combinators can be constructed from just
> a few (for instance, "B", "C", "W", and "K"), so if the system could
> handle procedures written using those few combinators, then it could
> handle procedures written using other ones, by just replacing
> occurrences of the new ones by their definitions in terms of
> the old ones.
Look up K-complexity on the web -- it's the theory of reducing functions to
just a few combinators. (Your compiler could theoretically work that way.)
The problem is that it's a very hard task; also, you still have to have
special support for the basic set of combinators, as well as descriptions of
all the other combinators in terms of them.
> > > while the Joy approach needs at least primitives, concatenation,
> > > and quotation...
> > You're ignoring your 'default' action, which is grouping --
> > what your
> > language does when there's no @ sign. At the same time
> > you're paying
> > attention to my default action, concatenation.
> I'm not sure what you mean by "grouping" in a system based on "@".
> There is no "grouping" construct... The only construct is application.
> there is no default, no action. this is only a language. but,
> maybe I've misunderstood you.
There _is_ an action -- your parser has to at least dump characters into
/dev/null, as I've said before. There is an operator which your @ makes
implicit; the action of this operator affects evaluation. The operator can
be formally defined, although it's complicated enough that I'm not eager to
do the definition.
Okay, I'll take a try at it -- the blank space operator in your explicitly
associative language has the operation of arranging its two operands on the
same level of an initial parse tree, as children of the last application.
This parse tree is temporary; combinators can rearrange it. The name of the
operator, if it were made explicit, would be "sibling" or something like
that.
The default action in curry-based languages is easier to explain; it's
currying, of course.
> > As an example, the Y combinator would (possibly) be written
> > thusly in an
> > explicitly concatentive dialect of Joy:
> > dup.cons swap.concat.dup.cons.i;
> > I haven't worked out whether more complicated forms also
> > work this way;
> > would have to do some thinking, and I don't feel like
> > putting effort into
> > something so completely silly ;-). I suspect that I'd have
> > to use a prefix
> > operator rather than an infix one.
> I don't think it would work. How would you represent "[]" or "[dup]"?
> But, then again, it would be completely silly, so it doesn't really
> matter.
Good point. You'd have to have a single primitive with the effect of [].
So let [] be 'nil', then [dup] would be '.dup nil'. Okay, now we're
equivalent.
> I'm not sure... I would guess that the approach with
> "implicit currying"
> would probably turn out to be the most convenient, especially
> if parentheses were supplemented with separators like ";" and "," so
> that
> f; g, h x; z
> is shorthand for
> f (g (h x)) z
I don't understand. To me the parentheses look clearer -- but I have no
clue what your semicolons and commas are doing.
> > As you note, that's a poor proof on their part -- not a
> > very good choice,
> > unless your point was that the author of Joy has made mistakes.
> That was not my intent... :-)
A good proof of that fact nonetheless ;-). I admit to skipping his proof
when reading through the document; it turns out that the proof's complexity
was indicative of problems.
> > I postulate that the purely applicative system is equivalent to the
> > concatenative system. I won't try to prove that, formally
> > or otherwise.
> I also suspect that the two approaches may be equivalent, meaning
> that every Joy-style concatenative system has an associated
> purely applicative system with analogous primitives and vice versa;
That sounds viable. That's not exactly what I meant -- I simply meant that
both notations can express the same set of functions.
I'm not clear on whether they can both express the same set of _programs_,
though; the fact is that concatentive programs can contain non-functions but
still have functionally understandable characteristics.
> Joy requires []s in cases where parentheses are not necessary in
> purely-applicative systems.
[] and () have little in common -- but the REAL difference is that the
default operator in most applicative languages is curry, and the default in
Joy is compose. It's the default actions that cause the most confusion.
> Joy seems to require functions to
> be quoted when they are used as parameters to other functions,
> while this is not the case in purely-applicative systems.
That's merely because the applicative systems don't _have_ quotation. Of
course you don't require it; you can't. You have and require other things.
> Also, thanks for your comments... I've come to see Joy and similar
> systems as worth considering (more than as just above-average
> imperative flukes).
That's actually my experience with Joy as well -- before I met it I only
knew that I liked Forth; I didn't know why. My interest had been in the
more conventional functional languages and Forth; I didn't see a connection
between the two (although I suspected one).
> > At this point I'm going to go out on a limb again and try to make a
> > distinction between the two systems:
> > The applicative system requires full information about the complete
> > "program" (I define a program as a complete set of
> > functions and data) in
> > order to run any part of it.
> Not really... a purely-applicative system could conceivably execute
> the first part of a procedure without examining the trailing part;
> I plan on making my prototype system work this way. It may
> need to _parse_ the whole textual program before beginning execution,
> but it will not need to normalize the whole thing.
The parse either consists of a complete normalization, OR the parse has full
information about the operation of every combinator (in other words, it has
to have enough information to normalize the whole thing). It certainly
_will_ have to normalize all of the text before the portion being executed,
and it'll also have to normalize any of the text afterwards which eventually
gets used as an argument.
The equivalent 'minimal parse' in Joy involves normalizing up to and
including the part being executed. That's it.
> > The concatenative system does not require this much
> > information; all we have
> > to know is the portion of the program prior to the part you
> > wish to execute.
> > In other words, the applicative system is in line with the
> > ideals of Tunes.
> > The concatenative one doesn't contradict Tunes, but does
> > allow execution
> > without complete understanding.
> It seems to me that both systems could be used to execute programs
> without complete understanding of them.
Technically true, but the partial understanding needed to execute an
applicative system is MUCH more in-depth than the partial understanding
needed for a concatenative system.
> > An applicative system only makes sense as a declarative system. A
> > concatenative system also makes sense as an imperative one.
> I'm not sure what precisely you mean by "declarative" and
> "imperative",
I've never heard anyone express misunderstanding of that particular
distinction before; it's one of the most common and deep-running
distinctions in computer languages. Could you clarify?
> but this may be an apt description. An applicative system like
> the one I've described can be used to represent all kinds of
> things, including numbers, functions, and procedures. A Joy-style
> system can only be used to represent procedures, but procedures
> that push expressions representing, say, numbers onto the stack
> can be used to emulate numbers.
This is what I meant by the distinction between imperative and declarative.
What you seem to be saying is that concatenative notations cannot be used in
declarative languages. This is false. The entire purpose of the
formalization developed in the Joy documentation is to discuss how
Forth-like languages can be formalized; the result is the concept of
'concatenative' languages, wherein each word is actually a function which
takes a stack as a parameter.
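To make that formalization concrete, here's a minimal sketch (hypothetical Python, not Joy itself; all names are mine): each word is a function from stacks to stacks, and a program is nothing more than those functions applied in sequence.

```python
# Each word is a function taking a stack (a list, top at the right)
# to a new stack -- exactly the formalization in the Joy documentation.

def dup(s):  return s + [s[-1]]               # a -- a a
def swap(s): return s[:-2] + [s[-1], s[-2]]   # a b -- b a
def plus(s): return s[:-2] + [s[-2] + s[-1]]  # a b -- a+b

def run(program, stack):
    """Executing a program applies each word in turn; a program is
    just the left-to-right composition of its words."""
    for word in program:
        stack = word(stack)
    return stack

print(run([dup, plus], [2]))   # "2 dup +" leaves [4]
```

Note that nothing here is imperative at the formal level: `run` is a pure function, even though the notation reads like an imperative program.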
> > I think this last feature is what attracts me to
> > concatenative logic.
> > Modern computers are by their nature imperative, and many
> > of the problems
> > solvable by them are well-described by imperative processes
> > ("programs").
> I think you have again touched on one of Joy-style systems' best
> advantages: they are more straightforwardly executed on
> modern machines. I do not doubt this; but, I think programs
> expressed in purely-applicative languages may be easier to
> reason about (reasons for this will be given at the end of
> the post). With a nice metasystem, however, I suspect that programs
> expressed with a purely-applicative language could be executed
> just as efficiently as Joy-style programs. Likewise, with
> a bit of overhead, I suspect that one could reason well about
> Joy-style programs.
The reason I found Joy so fascinating is that it's so easy to reason about
its programs. There's no special overhead; on the contrary, it's _easier_
than any other system I've seen. Now, at the time I hadn't seen a purely
applicative system, and even now I don't completely understand them, so it
may be that Joy has a competitor. I seriously doubt it, though; I liked J
and APL, but they don't even come close to Joy, and they are tested and
complete languages.
> So, the question of which system is superior is a matter of taste...
This may be true. I find it very believable.
> > > B(BB)B = C(BB(BBB))B
> > > The first one states the associativity of composition;
> > Ah. In that case, would it be true that:
> > concat concat == stack:(abc--cab) concat swap concat;
> > expresses the same thought in a concatenative form?
> Yes... this states the associativity of "concat". This is similar to
> composition, I suppose.
No, it's identical to composition. Don't forget that this is a
_concat_enative language. Concatenation is composition.
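That identity can be checked directly in a toy model (a Python sketch under my own naming, not anything from the Joy implementation): denote each program by a stack-to-stack function, and concatenating program texts is the same as composing the denoted functions.

```python
# Concatenating two programs denotes the same function as composing
# the functions the two programs denote separately.

def dup(s):  return s + [s[-1]]
def plus(s): return s[:-2] + [s[-2] + s[-1]]

def denote(program):
    """Map a program (a list of words) to one stack->stack function."""
    def f(stack):
        for word in program:
            stack = word(stack)
        return stack
    return f

p, q = [dup], [plus]
concatenated = denote(p + q)                   # run "dup +" as one program
composed = lambda s: denote(q)(denote(p)(s))   # run "dup", then run "+"
print(concatenated([3]) == composed([3]))      # True: both give [6]
```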
> > I find that to be easier to read; is that just me?
> Well... You have cheated by using "stack:(abc--cab)".
> (although I see your note below now that Joy has a primitive for it).
I've used a little of my own language -- but in that case you've cheated as
well by using nothing but your own language. ;-)
Yes, Joy has a pre-named combinator for that -- after all, it's a trivial
stack manipulation involving only 3 items. I don't think it's cheating;
it's just using a variant of the notation.
> The combinatory axiom could be made more readable if one
> used combinators other than "B" and "C" to express it;
> for instance,
> foldl 2 B = foldr 2 B
> expresses the same thought, where "foldl" and "foldr" are a certain
> series of combinators, such that "foldl 2" and "foldr 2" have these
> properties:
> foldl 2 f x y z = f(fxy)z
> foldr 2 f x y z = fx(fyz)
Sure, I could do the same thing (use different combinators tailor-made for
the problem) and get an equally simple result. The point is that it's not a
good idea to use a concatenative axiom in a proof-battle against a
concatenative language ;-). Try using an applicative axiom and watch me
fumble (I suspect).
[concat] 2 foldl == [concat] 2 foldr.
From this and the previous statement about concat we can derive that
2 foldl == dup dip i;
2 foldr == dup [dip] cons dip i;
The RHS of those definitions sounds like a '50s a cappella hit -- "do wop dip
oooooh". Anyhow, from this I could derive the definitions of 'foldr' and
'foldl' themselves, except that I don't feel that I have the knowledge for
it. I guess I should try anyhow:
The most obvious translation is this:
foldl == [[dup dip i] dup dip] swap times pop;
foldr == [[dup [dip] cons dip i] dup dip] swap times pop;
but I'm sure there's a better way of writing that which doesn't use the text
of the previous definitions.
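For what it's worth, the quoted property -- that "foldl 2 B = foldr 2 B" is just associativity of composition -- is easy to check numerically. A sketch with curried Python lambdas (the names are mine, and this only spot-checks a few points rather than proving anything):

```python
# foldl 2 f x y z = f (f x y) z ;  foldr 2 f x y z = f x (f y z)
foldl2 = lambda f: lambda x: lambda y: lambda z: f(f(x)(y))(z)
foldr2 = lambda f: lambda x: lambda y: lambda z: f(x)(f(y)(z))

B = lambda f: lambda g: lambda x: f(g(x))   # B f g x = f (g x)

F = lambda n: n + 1
G = lambda n: n * 2
H = lambda n: n - 3

# "foldl 2 B = foldr 2 B":  (F . G) . H  ==  F . (G . H)
for x in range(5):
    assert foldl2(B)(F)(G)(H)(x) == foldr2(B)(F)(G)(H)(x) == F(G(H(x)))
print("foldl 2 B == foldr 2 B (pointwise)")
```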
> > > for, from it it follows that
> > > B(BB)B F G H = C(BB(BBB))B F G H
> > > BB BF G H = BB(BBB)F B G H
> > > B(BFG)H = B(BBBF) B G H
> > > B(BFG)H = BBBF BG H
> > > B(BFG)H = B BF BG H
> > > B(BFG)H = BF(BGH)
> > Sorry, I'm lost. I don't know most of the combinators,
> > certainly not by
> > their ISO standard one-letter names.
> Hmm... perhaps my spacing was confusing... anyway, the only
> combinators
> I used were "B" and "C":
Ah! I see. I got confused by the F, G, and H's. I didn't realize you were
using named variables.
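For the record, the quoted derivation checks out when B and C are spelled as curried Python lambdas (a sketch; the sample functions F, G, H are mine):

```python
B = lambda f: lambda g: lambda x: f(g(x))   # B f g x = f (g x)
C = lambda f: lambda x: lambda y: f(y)(x)   # C f x y = f y x

F = lambda n: n + 1
G = lambda n: n * 2
H = lambda n: n - 3

lhs = B(B(B))(B)(F)(G)(H)            # B (B B) B F G H
rhs = C(B(B)(B(B)(B)))(B)(F)(G)(H)   # C (B B (B B B)) B F G H
for x in range(5):
    # both sides reduce to B F (B G H), i.e. the composite F . (G . H)
    assert lhs(x) == rhs(x) == F(G(H(x)))
print("B (B B) B == C (B B (B B B)) B, pointwise")
```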
> "C" takes a function's converse.
Converse? This isn't the usual use of the word, is it? I thought a
converse was a logical negation. Okay, my mistake; in that case the above
example is written
swap swap == id
or to imitate your language's actions more precisely (by requiring a
function as a parameter),
C == [swap] dip;
C C i == i;
rather than
not not == id
> > > This was mentioned above... I don't see that those five
> > > "syntactic elements" are essential in every applicative language
> > > or how they are evident in the one I've presented...
> > They were certainly present in the language you _had_
> > presented. Four of
> > them are still present in the language you've created for this post;
> > Concatenative languages still only need three.
> Let's see... here were the four you listed:
> > 1) Currying is explicit.
> > 3) Name definition is not present (but you cover it later,
> > and you don't
> > deny that it's needed); note that in the examples you give,
> > the functions
> > are simply thrown away, no possibility of reuse.
> > 4) Curry order modification is carried out by the absence of @ (as I
> > mentioned, you've only made grouping implicit).
> > 5) Parameter counts are needed to actually run/use any of
> > your functions;
> This does not really make sense to me. These seem to be just
> observations about my system, rather than "syntactic elements".
Of course they're observations about your system. You explicitly asked me
to state how the elements I had listed previously were present in your
system. If you want the list stated in the form of general rules, take the
four-item list I gave elsewhere, or take the five-item list and consider #2
and #4 the same thing.
> In particular, how can #3, "Name definition is _not_ present"
> (emphasis added) be a syntactic element?
I was pointing out that in the four lines of examples you gave, there was no
use of name definition. I was claiming further that your examples were not
representative of a faintly useful language.
> Anyway, what I said (and which you have not contradicted) was that
> combinators "are not particularly special elements in the language",
I _have_ contradicted it; I've even provided arguments against it.
> meaning that that the language does not have complicated constructs
> for referring to them.
I don't think the complexity of the construct is a measure of the
"specialness" of the element. The question is simply whether the element
behaves fundamentally differently from other elements in your language, thus
justifying putting it in a separate class.
> However, even though the language does
> not have special constructs for dealing with them, they might
> be treated quite specially by the execution system; in this
> sense, as you have said, they are not so "ordinary".
How can the language _avoid_ having special constructs for combinators? I
don't have a clue how this can be done. You have to *at least* rewrite all
combinators down to the basis set (the combinators from which everything
else is constructed in your theory), and then you have to have special
parsing support for each of the basis combinators.
> > It seems to me that the compiler will have to be specially
> > written in order
> > to merely use many of the combinators.
> Exactly. Of course, the Joy interpreter also needed special support
> for its combinators "swap", "dup", "dip", and such.
No it doesn't -- they behave exactly identically to all other Joy words.
They take a single stack as input and produce a single stack as output. The
compiler doesn't have to even have a clue what they do.
> As you've commented,
> the only support it needed to give them is the ability to execute
> them (as atomic programs).
In other words, they are the same thing as all the other words in the
system. Pun unintentional.
Joy has a few classes of special constructs:
- definitions and scoping (syntax)
- quotation (syntax)
- words
- functional words (swap, +, etc.)
- semi-functional words (put, get, etc.)
...and others (we've talked about that before, I'm not trying to rerun that
discussion).
The compiler only has to know about the three main distinctions; a proof
system has to know about four distinctions (as shown), possibly more in
order to treat each semi-functional word appropriately.
> My system might require more elaborate
> support for the analogous combinators; but, then again, perhaps
> it won't.
Let's suppose that your system uses functions and three combinators: K, D,
and C (I don't expect that these are the ones actually needed, I don't
remember that). Your system will require:
- atomic functions and types (assuming that you don't go "whole hog" and
just define everything in terms of the basic combinators, which is certainly
possible but VERY expensive)
- user-defined functions, types, and combinators
- K
- D
- C
(and the usual "other things" we so much enjoy discussing. ;-)
Note that I listed each of the combinators separately; this is because they
are the basis combinators, the fundamental expressions of meaning. They
can't be expressed in any simpler way; they have to be directly understood.
The rewriting rules for them _have_ to use explicit variables, and the
parser has to know all of those rewriting rules.
Oh, I'm not listing your procedural subsystem, because I have no idea how
you're going to handle that.
> > Here's two reasons to prefer a concatenative system (this
> > adds to your reasons):
> > 3) Translation to machine code is simpler than translation of an
> > applicative system. This is minor; Tunes doesn't want to
> This is a good point, and may be a true advantage of
> Joy-style systems.
> But, I suspect that there may be clever ways to compile procedures
> written as purely-applicative expressions.
I'll grant that just because Joy's simplicity is obvious doesn't mean that
it's the only possible way. I'd like to hear some of your ideas. You
discuss some in an addendum to this post, and I've split that into a
separate reply (since this is too long anyhow).
> > practical experience
> > seems to indicate that people writing metaprograms are
> > comfortable using concatenative languages.
> Well... I don't really know much about Postscript, but I don't
> see how purely-applicative systems would have particular difficulty
> with metaprogramming. Of course, I don't know of any practical
> purely-applicative systems with flexible metaprogramming,
> since I don't know of any practical purely-applicative systems
> at all.
But we _do_ know of a few impractical purely-applicative systems, and lots
of practical (and impractical) applicative languages, and a couple of
combinative languages. Of our entire collection, the one with the most
extensive metaprogramming use is Postscript, a concatenative language, even
though there are so many more languages of the other types, many of which
explicitly attempt to be useful for metaprogramming. Most of the other
languages seldom even see metaprogramming; Lisp and its relatives use it for
minor tasks, but almost never for total program generation.
> > But don't let my skepticism slow you -- one problem I have is an
> > unwillingness to work on something which isn't new to me.
> > Innovation is my
> > goal, but it's not the only good goal -- you're working on
> > refinement, and that's also a noble goal.
> I guess it could be called "refinement", although there aren't really
> any particular systems that I'm starting with to refine. Anyway,
Sure you are -- applicative systems, the single most commonly used
programming paradigm; so common that most people aren't even aware of it.
> this kind of system does seem like an interesting thing to
> work on, whether it is new or not...
Right.
> Let's see... I mentioned that I'd give some reasons why programs
> written as purely-applicative expressions might be easier to
> reason about than similar ones written in a Joy-style system,
> so now I'll give a couple:
I'll clip most of this, although its really good stuff -- this email has
gotten too long. See the Joy website for examples of reasoning in Joy; it's
_very_ effective, quite brief, and entirely legible.
> 1) In a purely-applicative system of the kind I've described,
> it is always possible to replace a sub-expression with
> one of equivalent meaning, while this is not the case
> in the Joy system, because of Joy's heavy dependence on
> a quotation construct. To illustrate, suppose in Joy one
> had a program
Not true -- as you admit below, the quotation construct has little to do
with it. The problem is caused by being able to treat quotations as lists.
If that were not possible, or if it were restricted in certain ways, there
would be no problem even in this circumstance.
However, I'm not interested in handling that restriction; being able to
automate reasoning about programs in such a simple way is a powerful
feature.
> Such a complication does not occur in an analogous
> purely-applicative system. The procedure
It would cause a "problem" if the "analogous" applicative system were
capable of modifying its own programs.
> foo (plus 0)
> could be freely rewritten as
> foo I
> regardless of the structure of "foo", if the system had
> realized that the expression "plus 0" means the same
> thing as "I".
And the exactly analogous expression "0 plus foo" could be freely rewritten
as "foo" in Joy. Don't confuse parentheses with brackets.
Note that there wasn't even a need for an 'id' placeholder.
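In the toy stack model this is directly observable (hypothetical Python; `foo` is an arbitrary stand-in word, not anything from Joy): "0 plus" is the identity on stacks, so it can be erased from any program without looking inside `foo` at all.

```python
zero = lambda s: s + [0]                    # the literal 0 pushes itself
plus = lambda s: s[:-2] + [s[-2] + s[-1]]   # +
foo  = lambda s: s + [s[-1] * 10]           # some arbitrary word

with_redex = lambda s: foo(plus(zero(s)))   # "0 plus foo"
for stack in ([7], [1, 2], [0]):
    assert with_redex(stack) == foo(stack)  # same as plain "foo"
print('"0 plus foo" == "foo" on every stack tried')
```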
> This problem with Joy was mentioned in the manual
> (in the section "A rewriting system for Joy"); they presented
> some possible remedies, but none which they really liked.
Yes.
> Anyhow, this is not a fatal problem of Joy, only
> a slight impediment to reasoning about programs (since
Entirely correct. It _does_ cause an impediment; the interesting thing is
that it itself (treating programs as lists) is a form of automated reasoning
about programs. This suggests that the problem it causes _might_ be
fundamental, one of the classic Gödel incompleteness problems (as phrased
for concatenative languages, of course). In other words, reasoning about
programs may be easy; but reasoning about programs which are themselves
reasoning about programs is not easy.
I don't think it's that simple, though; there are a lot of list operations
which can be done which don't affect the rewriting rules -- for example,
concat and cons. The main problem comes when taking apart lists (cdr), or
doing other operations which can cause boundary conditions.
> 2) Many fundamental equivalences of Joy programs that one
> might like to show require restrictions, while
> similar equivalences of purely-applicative expressions
> can be taken unrestrained. Specifically, one might
> like to think upon seeing a Joy program of the form
This is _really_ heavily unfair. Those things aren't "fundamental
equivalences"; they're rewriting rules. Rewriting rules of that nature are
a thing used by syntax-based languages, and you honestly can't expect them
to apply to a non-syntax based one. The fact that some of them do anyhow is
almost amazing.
The actual rewriting rules Joy has are very simple; they state things like:
if "t == a b c" and "d e a b c f g == z y x" then "d t f g == z y x". In
other words, direct text replacement is reasonable.
> The analogous equivalence in a purely-applicative system
> is that
> @@@C f x y
> can be rewritten as
> @@f y x
> This can be accepted without restraint for all expressions
> "f", "x", and "y" of the system.
That's because combinators in your system have syntactic effect. In order
to do that, I'd have to define:
C == dip [swap] i;
[y] [x] [f] C == [x] [y] f
Note that this imitates the heavily syntactic approach of your system by
using the only syntactic feature of Joy (quotation). This makes Joy have
all of the drawbacks of your system while still keeping all of its own
drawbacks -- probably not a good idea ;-).
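The applicative rule itself is, as you say, unrestricted, and trivially checkable -- a Python sketch, with application written as ordinary calls and `minus` as a sample curried function of my own:

```python
C = lambda f: lambda x: lambda y: f(y)(x)   # C f x y = f y x

minus = lambda a: lambda b: a - b
# "C f x y" rewrites to "f y x" for any f, x, y -- no restrictions:
assert C(minus)(2)(10) == minus(10)(2) == 8
print(C(minus)(2)(10))   # 8
```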
> But, overall, the purely applicative approach still seems to me
> to be the best way to found a system. But, if you can see
> any awful problems with the approach, I'd still like to
> hear, of course.
The nastiest one I've seen is the lack of ability to *do* things in the
language. You have to add another language on top.
I also suspect that even the strength of your functional language,
functions, will be very hard to read and maintain without automated help.
APL functions don't use many combinators, but APL programmers recommend
deleting the program and rewriting it based on the comments rather than
trying to understand the program. Forth programmers don't do that.
> (at this point, I'm going to begin rambling on some
> thoughts about *implementing* the system)
This looks like a good point at which to split this message. I'll reply
later, hopefully in a helpful rather than argumentative tone.
-Billy