low-level approach

Kyle Lahnakoski kyle@arcavia.com
Mon Jan 14 05:32:10 2002


Brian P Templeton wrote:
> 
> Kyle Lahnakoski <kyle@arcavia.com> writes:
> > For instance, I doubt that Lisp is a superior language to Java or even
> > C. I would rather accept that it has certain aspects that make it
> > superior.  The separation of a language into its semantic aspects
> > allows the TUNES community to see the benefits of these aspects and be
> > able to construct a better language that contains them all.
> >
> If Lisp (what dialect? Common Lisp? Emacs Lisp? Dylan? Scheme? EuLisp?
> ISLISP? XLisp? XLisp-Stat? ...) has ``certain aspects that make it
> superior [to Java or C]'', then it is /ipso facto/ superior to Java or
> C!

Please choose any version of Lisp you want and tell us about an aspect
of it that makes it better than Java, or C, or any other language of
your own choosing.


> However, you later seem to indicate that you mean that no language is
> syntactically superior to another - but you fail to define `semantic'.
> Where does syntax end and semantics begin? By the time you mention
> that Lisp doesn't use traditional-style infix syntax (where a few
> operators are infix and function calls are prefix), but rather uses
> prefix syntax, does that demonstrate that Lisp isn't as strongly
> biased toward certain operations? When you explain that Lisp programs
> are not simply streams of characters, but rather print representations
> of objects that are read back in and evaluated, and therefore Lisp
> programs may be considered sequences of objects, does that imply that
> metaprogramming is easier with Lisp than the other languages? If you
> want to discuss semantic aspects of languages, then define `semantic'!

Choose your own definition and provide it here.  If you believe that
infix, prefix, or postfix notation is in some way better, then explain
why.  If you can provide a measurable difference between
Lisp-programs-seen-as-streams-of-characters and
Lisp-programs-seen-as-print-representations-of-objects, then please do
so.

When it comes right down to it, all Turing-complete languages are
identical in expressive power (e.g. you can implement a compiler in
language A that handles language B).  We only have our intuition to
understand what semantics really are.  Now, if you believe that my
compiler example is contrived, I will say that I am surprised how often
some C programmers use lex and yacc to build small language parsers.
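
To make that point concrete, here is a minimal sketch of the kind of
small language I mean, hand-rolled in C rather than generated by lex and
yacc.  The grammar (single-digit operands, +, *, and parentheses, no
whitespace) is my own toy choice, used only to show language A (C)
interpreting language B.

/* Toy recursive-descent evaluator: a hypothetical "small language"
   hosted in C, for illustration only. */
#include <stdio.h>

static const char *p;          /* cursor into the program text */

static int expr(void);         /* forward declaration */

static int factor(void)        /* factor := digit | '(' expr ')' */
{
    if (*p == '(') {
        ++p;                   /* consume '(' */
        int v = expr();
        ++p;                   /* consume ')' */
        return v;
    }
    return *p++ - '0';         /* single-digit literal */
}

static int term(void)          /* term := factor { '*' factor } */
{
    int v = factor();
    while (*p == '*') { ++p; v *= factor(); }
    return v;
}

static int expr(void)          /* expr := term { '+' term } */
{
    int v = term();
    while (*p == '+') { ++p; v += term(); }
    return v;
}

int main(void)
{
    const char *program = "(1+2)*3+4";     /* a "program" in the toy language */
    p = program;
    printf("%s = %d\n", program, expr());  /* prints (1+2)*3+4 = 13 */
    return 0;
}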

Below is my attempt to define semantic equivalence, but it only
manages to show that all Turing-complete languages are semantically
equivalent.


> (assuming you're talking about Common Lisp here)
> >       All compound objects are treated as lists.
> How do you define a compound object? Something other than a symbol,
> number, or character, perhaps? Then you're incorrect.

How am I incorrect?  Lists of lists are still lists.  A simple example
would be appreciated.  Records and vectors, from what I have seen, are
implemented using lists.


> >       Function references, using quotation, are available.
> Huh? Functions are first-level objects in CL.

Forgive my poor nomenclature: function references, via a symbol or an
explicit symbol list (e.g. a function definition), are how Lisp is able
to make functions first-level objects.


> What? Lisp doesn't `naturally' handle strings. Lisp, to me, seems to
> naturally handle sequences (lists, arrays, etc), dictionary-type
> objects (alists, hashtables), and functions.

Good, take that one step further and see that strings are sequences of
characters.  Lisp has many sequence-manipulation functions, and
therefore string-manipulation functions.


> > The last, when combined with the second may be something significant.
> > It allows for dynamic specification of functions.  But it has been my
> > experience that most dynamic function specifications are simply
> > generic programming with partial evaluation.  C does this easily 
> > (albeit inefficiently).
> >
> I haven't yet seen lambda in C, but I may have missed some language
> feature that makes it possible.

Generic function specification with (apparent) partial evaluation can be
done using the usual procedure definition in C.

We both agree that lambda is more powerful than the C function pointer
in the extreme case of pure meta-programming.  But in the simpler cases
(where lambda is used more often), C and its function pointers can
perform as well as Lisp.
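
For what it is worth, here is a minimal sketch of the kind of simple
case I mean: a C function pointer plus an explicit environment argument
standing in for the closed-over variable.  The names (map_ints,
add_offset) are hypothetical, chosen only for illustration.

#include <stdio.h>

/* Generic operation: apply fn(env, x) to every element in place. */
static void map_ints(int *xs, int n, int (*fn)(void *env, int x), void *env)
{
    int i;
    for (i = 0; i < n; ++i)
        xs[i] = fn(env, xs[i]);
}

/* The "captured" value is passed explicitly instead of being closed over. */
static int add_offset(void *env, int x)
{
    return x + *(int *)env;
}

int main(void)
{
    int xs[] = { 1, 2, 3 };
    int offset = 10;                    /* plays the role of the closed-over value */
    map_ints(xs, 3, add_offset, &offset);
    printf("%d %d %d\n", xs[0], xs[1], xs[2]);   /* prints 11 12 13 */
    return 0;
}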


My post was a request for Lisp examples contrary to my statements.  I
already anticipated your dismay, but abstract descriptions of language
superiority are only useful between people who already agree.  Since I
do not agree, all this high-level talk is for nothing; we will never
reach a consensus without something objective and measurable, namely
concrete examples.

It may well be that this discussion is for naught because there is no
semantic difference between languages.  This would help us understand
why the bulk of programmers stick with C, C++, Basic, or Java.  It could
very well be that language preference depends wholly on the number and
quality of the tools that support each language.




//MY ATTEMPT AT DEFINING SEMANTIC EQUIVALENCE

We will define two programs A and B to be semantically equivalent if
there exists a modified PDA (call it an "IO-PDA") that converts program
A into program B.  The modification is a simple addition to the
transition function so that it can write output.

<TECHNICAL>
Formally, we have a definition of an IO-PDA

	M = (Q, S, Z, d, q0, z0, F)
where 

     Q   is a finite set of states, 
     S   is the input/output alphabet, 
     Z   is the stack alphabet, 
     d   is the transition function, 
     q0  is the initial state (a member of Q), 
     z0  is the stack start symbol (a member of Z), 
     F   is the set of final states (a subset of Q). 

All the definitions are the same as for a regular PDA, except for d.

Let S' = union(S, {null}).
Let Z' = union(Z, {null}).
Let Ai be the ith symbol of input A
Let Bi be the ith symbol of output B

     d: Q x S' x Z' -> Q x S' x Z'

	(q', s', z') = d(q, s, z)
     
For any state, input symbol, and stack symbol, d defines a destination
state, an output symbol, and a new stack symbol.  The machine is "run" by
passing the triple (q0, A0, z0) to d and receiving its result.  The
result determines the next current state (q1), the output symbol, and
the symbol to be pushed on the stack.  Each subsequent step passes the
current state, the next input symbol, and the symbol popped off the top
of the stack to d, and so on.

Some d-transitions do not read anything from input and look like:
	(q', s', z') = d(q, null, z)
Some d-transitions do not pop any value off the stack and look like:
	(q', s', z') = d(q, s, null)
Some d-transitions do not write any output and look like:
	(q', null, z') = d(q, s, z)
Some d-transitions do not push a value onto the stack and look like:
	(q', s', null) = d(q, s, z)


Because of the functional nature of the IO-PDA, we can take any string A
composed from alphabet S and define M(A) to be the output string
produced on input A.
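
As a sanity check, here is a minimal sketch of such a machine written
directly in C, with the transition function folded into the control flow
rather than tabulated.  The machine itself is my own toy example (it
echoes its input and then appends the reversal, using the stack), chosen
only to show the states, input, output, and stack working together.

#include <stdio.h>

enum state { READ, DUMP };             /* Q = {READ, DUMP}, q0 = READ */

static void run_iopda(const char *input)
{
    char stack[128];                   /* the PDA stack */
    int top = 0;
    enum state q = READ;

    while (1) {
        if (q == READ) {
            if (*input) {              /* d(READ, s, null) = (READ, s, s): output s, push s */
                putchar(*input);
                stack[top++] = *input++;
            } else {
                q = DUMP;              /* d(READ, null, null) = (DUMP, null, null) */
            }
        } else {
            if (top > 0)
                putchar(stack[--top]); /* d(DUMP, null, z) = (DUMP, z, null): output popped z */
            else
                break;                 /* stack empty: halt */
        }
    }
    putchar('\n');
}

int main(void)
{
    run_iopda("abc");                  /* prints abccba */
    return 0;
}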

</TECHNICAL>

Anyway, we can define two languages X and Y to be semantically
equivalent if there exists an IO-PDA, M, such that for any program A in
X there exists a program B in Y such that B=M(A).

We can see that infix, prefix and postfix notations are syntactically
different, but semantically equivalent.
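
To make that claim concrete, here is a sketch of the translation from
(fully parenthesized) infix to postfix, done with one pass over the
input and one stack -- that is, with exactly the resources an IO-PDA
has.  The restriction to fully parenthesized, single-digit expressions
is mine, to keep the transitions simple.

#include <stdio.h>

static void infix_to_postfix(const char *in, char *out)
{
    char stack[64];            /* the PDA stack: holds pending operators */
    int top = 0;

    for (; *in; ++in) {
        if (*in >= '0' && *in <= '9')
            *out++ = *in;              /* operands are written immediately */
        else if (*in == '+' || *in == '*')
            stack[top++] = *in;        /* operators wait on the stack */
        else if (*in == ')')
            *out++ = stack[--top];     /* ')' releases the pending operator */
        /* '(' produces no output and pushes nothing in this simple scheme */
    }
    *out = '\0';
}

int main(void)
{
    char out[64];
    infix_to_postfix("((1+2)*(3+4))", out);
    printf("%s\n", out);               /* prints 12+34+* */
    return 0;
}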

This also means that if a few library functions can improve ... and it
is here that we realize that all Turing-complete languages are
semantically equivalent under this definition.