[Fwd: low-level approach]
Tue Jan 15 13:05:02 2002
If you didn't intend this to be private, please forward this message to the
list -- I'll leave in all your context. (You replied to me rather than the
the group.) I do that all the time. :-)
From: Kyle Lahnakoski [mailto:firstname.lastname@example.org]
>Billy Tanksley wrote:
>> Perhaps we're using different definitions. "Application" is
>> how C, Scheme,
>> Haskell, FORTRAN, and almost all existing languages call
>> functions. There's
>> no reason to curse about it, or if there was, it's relevant to all
>> applicative languages.
>My apologies. I thought applicative programming was the use of Haskell
Understandable -- I had a fit working with those as well.
>> >I do not see any semantic advantages in Haskell that can not be
>> >found in other (simpler) languages. Haskell is an
>> >excellent example of
>> >a language that can be harder to read than Perl!
>> Surely you're speaking of a completely different language
>> than the one I
>> know! Haskell is almost bitterly clean and uncluttered.
>Monads are not simple or clean. I guess that is a subjective comment.
No -- you're absolutely right. They're ugly. But monads are not all of
Haskell. You claim that Haskell has no advantages over any other language,
when in fact Haskell merely has ONE disadvantage. There's a HUGE, critical
difference there! We need to learn from Haskell.
>> >Forth is much more rational, but I disagree with any stack, list or
>> >"concatenative" concepts, so it may be some time before I give
>> >Forth any respect. ;-)
>> I hear you, but I don't understand. Stacks and
>> concatenativity go together
>> (I've never found a concatenative language which didn't use
>> a stack as its
>> main element), but lists seem unrelated.
>It is all about needlessly ordered data structures.
>The whole idea of order-is-bad is still unrefined in my mind, therefore
>I am in the state where I believe it so, but have little empirical
>evidence to convince others.
The idea of names-are-bad is very clear and precise: it's presented every
time you look at a language which attempts to use the lambda calculus in a
functional way, and fails because computing requires imperative actions.
I'll present order-is-good later.
Order is good, so long as the compiler can override it after analysis.
Names are bad, because they have nothing to do with the compiler; they
merely add another job.
Of course, I'm exaggerating. Order isn't absolutely good; I like some unordered
things too. Names aren't absolutely bad (APL is the exception, not the rule,
and it proves the rule by helping the user use names). My point is that we
can't afford such a dogmatic view.
>> >If you are well versed in these two languages maybe you can see some
>> >aspects that are useful. My apologies, I have tried to
>> >look, and I see
>> >only academic novelty. I note some points you have made below.
>> Forth is no academic novelty whatsoever; nor is Postscript. Yet
>> Postscript is the most commonly used metaprogramming
>> language anywhere; odds
>> are your computer generates a custom (simple) Postscript
>> program every time
>> you print a page. That's not a mere academic novelty.
>> If anything, the novelty is that the academics know NOTHING
>> of Forth and Postscript.
>I did not mean to say that meta programming, or postfix
>notation, was an
>academic novelty. I meant to say that Forth's particular concatenative
>style appears strictly academic to me.
Then why on earth isn't Forth used in any college classes? It's used ONLY
by engineers, almost none of them CS people. How could you call that
academic? I really am honestly confused here.
>The links you referenced me to
>only supported my point; the academics love combinators for
That ... that wasn't the point of the paper. I don't know what to say.
>I see your point that the combinators certainly have the potential to
>minimize code length, but that is not expressiveness. If strict code
>length defined expressiveness then zipped source code would be
>considered as having superior expressiveness. For a language to be
>expressive it must be able to optimize the balance between code length
>and human readability.
Blink. Wow. You have an AMAZING ability to miss the point. Please forgive
me for saying that. Let me try to express my point... It's going to be
hard, because I don't understand how you could be missing something that
seems so obvious to me.
First, a very short rebuttal: code length has absolutely nothing to do with
this. I don't know where you got that from, since combinators by your own
declaration add words to code, not subtract them (this isn't strictly true,
but it's a good upper bound).
What combinators _do_ is make dataflow clear, and computing is all about
data flow and modification. Combinators allow the programmer to express
that. Anyone who's ever used them recognises the feel of programming with
them: ideas just flow out, and often don't need to be corrected; changes
have only local effect, unless certain dangerous words are present (those
with deep stack-effect), and those become obvious with a little practice
or watchfulness. Compare this to naming-based code: you don't know what
things any specific change is going to affect, because the effects follow
the scope of the variables affected by the change.
>> >But in general this
>> >concatenative property is not too useful, maybe even a
>> >hindrance. For
>> >example, the concatenative aspect of Forth forces the existence of
>> >unusual twiddling operators, much like those found for stack based
>> >machines (dup, swap, ...). These twiddlers have nothing to
>> >do with the
>> >work being done, and everything to do about the format the
>> >program is in.
>> Untrue on all points. Read about combinatory programming
>If you believe that removing named variables is a good thing, then I
>guess I see why you believe my statements are untrue on all points. We
>are having a fundamental difference of opinion: I believe in named
>variables, you believe in minimal code length.
I don't give a flying fig about program length. I don't know where you came
up with that.
I *definitely* do not "believe" in named variables; in fact, although I use
them in languages where their use is required or when they make my code
better, I emphatically recognise the problems they cause, both for
modification and for program analysis.
Also, your statements are untrue not because I believe they're untrue, but
because the twiddling operators (I like that name, BTW) are a crucial part
of the work of the program: they express the dataflow.
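To make that concrete, here's a small sketch of my own (not from the original discussion): model a Forth-style stack as a Haskell list, and each word as a function from stack to stack. Concatenating words is then just function composition, and the twiddlers are ordinary words that reroute the dataflow.

```haskell
-- Sketch (my illustration): a Forth-style stack modelled as a list,
-- with each word a function from stack to stack.  Concatenation is
-- function composition, and the "twiddlers" (dup, swap, ...) are
-- ordinary words that rearrange the dataflow.
type Stack = [Int]

lit :: Int -> Stack -> Stack
lit n s = n : s

dup, swap, add, mul :: Stack -> Stack
dup  (x : s)     = x : x : s
swap (x : y : s) = y : x : s
add  (x : y : s) = (y + x) : s
mul  (x : y : s) = (y * x) : s

-- The Forth phrase "dup *" squares the top of the stack.  Composition
-- runs right to left, so the words appear in reverse reading order.
square :: Stack -> Stack
square = mul . dup

main :: IO ()
main = print (square (lit 3 []))  -- [9]
```

Here `dup` is doing real work: without it, the single copy of 3 could not feed both inputs of `mul`.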
>I believe named variables are the right thing because our human minds
>are well suited to mapping names to concepts. From the human
>perspective named variable require little mental force.
Constant names require little force. Variable names are a HUGE concept,
which take a LOT of processing. Context switches, such as what happens
whenever you "step in" to a function call, are even worse: all of a sudden
all the variables you were used to are gone, and a totally new set takes
their place.
>Minimizing code length is quite the opposite,
I won't harp about this again, but WOW. Straw Man.
>the discipline needed to hold complex
>dataflow design in the mind is much greater.
I must point out that complex dataflow is complex regardless of whether it's
expressed by variables or by explicit dataflow notations. And with
variables, the dataflow MUST be held entirely in your mind; it's not
possible to write it in the program. With dataflow, the flow only has to be
thought of once, and from then on you can forget it -- dataflow
modifications can be made locally.
>> to see how those 'twiddlers' are significant to all programming methods, not
>> just stack-based ones.
>I read the pages, but I missed the part that referred to other methods.
>> Once the programmer's been forced to look at the dataflow, the program will
>> be written with the dataflow in mind, and the result will be a much better
>> program.
>This statement of yours exhibits the force the human mind must endure to
>build a Forth program.
And yet people who write Forth programs talk about how easy and fun it is.
Perhaps you misunderstood my statement, then.
>> Furthermore, because the programmer is thinking about the
>> dataflow, the program will be organised so that the most urgent items are at
>> the top of the stack, and the less urgent items are underneath. In other
>> words, the stack is sorted by urgency. From there register allocation is
>> trivial: just assign the topmost stack items to registers, the next ones to
>> a fast cache, the next ones to a slow cache, the next ones to memory, and
>> the very lowest may be paged out (if needed). The implications for
>> optimization are huge, because all this information is stored in a very
>> shallow way, while it's almost impenetrable to applicative languages.
>All this optimization you mention must be done by the human programmer.
>This optimization should be done by the compiler. If you remove the
>necessity to specify how a program will run, you are left with much less
>to specify and an easier development time in general.
This optimization MUST be done by the programmer: he's the only one who
knows what his task will need next. The compiler can make guesses after
undertaking extensive analysis, but will never better the programmer; the
compiler is better used as a domain expert on the specific optimizations
needed for the target machine (alignment, caching, number of registers, and
so on). After all, ordering the operations is an obvious and trivial part
of designing the algorithm.
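The "stack sorted by urgency" scheme quoted above can be sketched very simply. This is my own toy rendering, not anything from the thread, and the tier sizes (4 registers, cache sizes, and so on) are made-up placeholders for whatever the target machine actually has:

```haskell
-- Sketch (my invention, following the quoted register-allocation idea):
-- assign each stack slot to a storage tier purely by its depth.  The
-- cutoff numbers are hypothetical; a real compiler would take them
-- from the target machine description.
data Tier = Register | FastCache | SlowCache | Memory | Paged
  deriving (Eq, Show)

tierFor :: Int -> Tier
tierFor depth
  | depth < 4   = Register   -- topmost items live in registers
  | depth < 16  = FastCache
  | depth < 64  = SlowCache
  | depth < 256 = Memory
  | otherwise   = Paged      -- the very lowest may be paged out

-- Allocate a whole stack: position 0 is the top of the stack.
allocate :: [a] -> [Tier]
allocate stack = map tierFor [0 .. length stack - 1]

main :: IO ()
main = print (allocate (replicate 5 ()))
```

The point of the sketch is how little information the allocator needs: depth alone decides placement, because the programmer already sorted the stack by urgency.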
Some operations don't have to be ordered: but these are either unordered
due to the structure of the algorithm (consider many of APL's array
operations), and primitives can be provided which operate on arrays
potentially out-of-order; or they're unordered due to dataflow and side
effects, both of which are obvious in a concatenative language to even a
VERY simple compiler.
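A toy illustration of that first kind of unorderedness (mine, not from the thread): with an APL-style whole-array primitive, the per-element applications carry no dependence on one another, so an implementation is free to evaluate them in any order.

```haskell
-- Sketch (my illustration): an APL-style whole-array primitive.  The
-- per-element applications of (* k) have no data dependence on each
-- other, so an implementation may evaluate them in any order, or in
-- parallel, without changing the result.
scale :: Num a => a -> [a] -> [a]
scale k = map (* k)

main :: IO ()
main = print (scale 2 [1, 2, 3, 4])  -- [2,4,6,8]
```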
>> >On the other hand parse tree of any other language is just as easy
>> >to split into sub tokens. Although the price of a parser is needed to
>> >get that parse tree, I think it is a small price to pay for the extra
>> >textual flexibility.
>> Again, wrong. Even after you've done the parse you have less textual
>> flexibility. You also lack dataflow information.
>As above, dataflow information does not have to be specified because the
>compiler can provide it. Why specify more than you must?
Because the dataflow is intrinsic to your algorithm: you ALWAYS specify it,
even when you're not making it explicit.
You're also missing what I said above: a parse tree doesn't give you any
more textual flexibility than a tokenized concatenative language already
has (sometimes a lot less), and it's harder to get and apply. This is why
it's taken so long to come out with decent refactoring tools.
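A small sketch of that flexibility claim (my own example, not from the thread): in a concatenative program, every contiguous run of words is itself a well-formed program, so any cut point yields two pieces that compose back to the whole. A parse tree only offers that at subtree boundaries.

```haskell
-- Sketch (my illustration): any cut of a concatenative program gives
-- two complete programs.  Words are functions from stack to stack.
type Stack = [Int]

lit :: Int -> Stack -> Stack
lit n s = n : s

add, mul :: Stack -> Stack
add (x : y : s) = (y + x) : s
mul (x : y : s) = (y * x) : s

-- The program "2 3 + 4 *"; composition runs right to left.
prog :: Stack -> Stack
prog = mul . lit 4 . add . lit 3 . lit 2

-- Cut it anywhere: "2 3 +" and "4 *" are both well-formed programs,
-- and they compose back to the original.
front, back :: Stack -> Stack
front = add . lit 3 . lit 2
back  = mul . lit 4

main :: IO ()
main = print (back (front []) == prog [])  -- True
```

This is the "textual flexibility" being argued for: refactoring is cutting and naming an arbitrary run of words, with no tree surgery required.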