On design processes

Matthew Tuck matty@box.net.au
Mon, 07 Dec 1998 20:36:59 +1030


Hans-Dieter Dreier wrote:

> It's not philosophical? Or not important? What do you mean?

Philosophical, sorry.

> "Make it right the first time." That's what they teach us in all those
> books about quality management. It's important for commercial
> processes.

That assumes you know everything about what you're doing at first.  The
prototyping approach is for when you don't!

> > You attract programmers by your code.

> How about a vision? Reinforced with an architecture that shows it to
> be feasible?

I guess that sort of information is strewn around the web pages and not
centralised.  A document would be useful.  As for intelligent editors,
there are a lot of issues to resolve.  It is obviously possible, but the
details are important.

> That's a most fascinating document. The conclusions sound sensible.
> Alas, it doesn't say anything about the technical aspects (there has
> to be at least some form of coordination - how is that done?).

There are some other essays on the web pages "open source links"
section.

> A patch can always be patched in afterwards. That's no problem. I
> rather meant the central assumptions (or lack of) that form the basis
> of the system. IMHO memory layout is such a thing: No objects,
> non-nested objects or nested objects - that may have severe
> implications on what will be possible later and how easy that will be;
> crucial changes in memory layout might force a complete rewrite.

That is true; for this and other reasons, modularisation is essential. 
It could be useful to start writing some summaries of what has been
discussed.
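
To make the modularisation point concrete around memory layout, here is
a small C++ sketch of what I mean (Heap, ObjectRef and FlatHeap are
invented names for illustration, not existing Ultra code): if everything
outside the memory module goes through an interface, a change of object
layout replaces one subclass instead of forcing a rewrite.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical: object references are plain indices into the heap.
typedef std::size_t ObjectRef;

// All other modules talk to object memory only through this interface,
// so the layout underneath can change without touching the callers.
struct Heap {
    virtual ObjectRef allocate(std::size_t slots) = 0;
    virtual ObjectRef getSlot(ObjectRef obj, std::size_t i) const = 0;
    virtual void setSlot(ObjectRef obj, std::size_t i, ObjectRef v) = 0;
    virtual ~Heap() {}
};

// One possible layout: non-nested objects in a single flat table.
// A nested layout would be a second subclass; client code is unchanged.
struct FlatHeap : Heap {
    std::vector<std::vector<ObjectRef> > objects;

    ObjectRef allocate(std::size_t slots) {
        objects.push_back(std::vector<ObjectRef>(slots, 0));
        return objects.size() - 1;
    }
    ObjectRef getSlot(ObjectRef obj, std::size_t i) const {
        return objects[obj][i];
    }
    void setSlot(ObjectRef obj, std::size_t i, ObjectRef v) {
        objects[obj][i] = v;
    }
};

int main() {
    FlatHeap heap;
    ObjectRef a = heap.allocate(2);          // an object with two slots
    ObjectRef b = heap.allocate(1);
    heap.setSlot(a, 0, b);                   // a's first slot refers to b
    std::cout << heap.getSlot(a, 0) << "\n"; // prints 1 (b's index)
}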

> How about a table-driven (to be extendable) parser without any frills
> attached and an assembler-like syntax to directly produce objects and
> crude code for a most simplistic stack machine? So that it can output
> "Hello world" to stdout?   There might even exist some code somewhere > that can serve as a starting point.

I was thinking more in terms of interpreting programs stored as trees;
I think that would be easier to start with.  The slim binaries proposal
actually seems to indicate that this sort of program form can be fairly
efficient.

I don't know that using a low-level syntax is a good idea.  What I
would do is write some things in high-level code and embed their
implementation in the interpreter initially.
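
To give a rough idea of what interpreting trees could look like, here
is a small C++ sketch (Node, eval and the helper names are invented for
illustration, not a committed design):

#include <iostream>

enum Kind { Num, Add, Mul };

// A program is stored directly as a tree of these nodes.
struct Node {
    Kind kind;
    int value;           // used when kind == Num
    Node *left, *right;  // used when kind == Add or Mul
};

// Small builders; nodes are leaked, which is fine for a throwaway sketch.
Node *num(int v) { return new Node{Num, v, 0, 0}; }
Node *op(Kind k, Node *l, Node *r) { return new Node{k, 0, l, r}; }

// The interpreter just walks the tree; no byte code or machine code.
int eval(const Node *n) {
    switch (n->kind) {
    case Num: return n->value;
    case Add: return eval(n->left) + eval(n->right);
    case Mul: return eval(n->left) * eval(n->right);
    }
    return 0; // not reached
}

int main() {
    // (2 + 3) * 4, built directly as a tree; the editor would build
    // trees like this instead of producing text for a parser.
    Node *program = op(Mul, op(Add, num(2), num(3)), num(4));
    std::cout << eval(program) << "\n"; // prints 20
}

An "if" or a method call would just be further node kinds.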

> You're more involved in open source development than I am, but if you
> tell me some URLs I will see what I can find. Otherwise I'll sketch a
> class layout for such a thing, but I definitely want a discussion
> before putting too much effort in it - I've got no spare time to spend
> on code just for the trashbin; it should at least be good for a
> temporary module.

Well, I wouldn't say I'm involved in open source development - this is
my first foray - but I have been reading a bit.  I think we all really
have to go searching for information as much as possible.

> Examples: Have a syntax specified by tables (which are one level
> higher than compiled code) rather than hard-coded. Maybe at some later
> time have those tables filled by processing some (BNF-like ?) syntax
> description rather than by initialisation of the tables by hand.
> Generally speaking: pile architecture on architecture and use the
> highest one as far as possible.

This is a possible idea.  However, when you build parsers you have to
be careful with your grammar, since each kind of parser can only
process certain grammars.  LR parsers are supposedly fairly general and
are well suited to the table-driven approach.  Most parser generators
use LR parsing, so using a parser generator might still be a better
idea.  If we're using C++, then bison is the usual choice.  It could
build the source parse tree directly.
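
For example, here is roughly what a bison grammar that builds the parse
tree directly might look like (Node and mknode are made-up helpers, not
part of bison, and a yylex would still have to be supplied):

%{
#include <stdio.h>
#include <stdlib.h>

/* A made-up tree node type; bison only sees it through the %union. */
struct Node {
    const char *kind;
    struct Node *left, *right;
};

static struct Node *mknode(const char *kind,
                           struct Node *l, struct Node *r)
{
    struct Node *n = malloc(sizeof *n);
    n->kind = kind; n->left = l; n->right = r;
    return n;
}

int yylex(void);
void yyerror(const char *msg) { fprintf(stderr, "%s\n", msg); }
%}

%union { struct Node *node; }
%token NUMBER
%type <node> expr
%left '+'
%left '*'

%%
/* The C actions in braces build tree nodes rather than emitting code. */
expr : expr '+' expr  { $$ = mknode("add", $1, $3); }
     | expr '*' expr  { $$ = mknode("mul", $1, $3); }
     | NUMBER         { $$ = mknode("num", 0, 0); }
     ;
%%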

> There are objects that describe the behaviour of other objects to the
> editor, the compiler and the debugger, rather than through some
> hard-coded special treatment (only where possible, of course). So that
> the language can eventually be extended by adding those objects rather
> than changing the code of the editor & compiler. This is another
> example of piling architectures (Ultra objects on top of Ultra code).

You'd be surprised how far hardcoding can be reduced.  For example,
in Smalltalk, conditionals are done via method calls, such as

object ifTrue: [doThis]

You can then consider the "if" statement to be a shorthand for the
method call.  This is especially important to me, since I believe in
multiple implementations, and this allows multiple implementations of
booleans via messages such as ifTrue:.  Interestingly enough, there is
no hardcoding other than the if shorthand.
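
If we ended up in C++, the same idea might look like this rough sketch
(Boolean, True and False are invented names, not a design commitment):
each boolean implementation supplies its own ifTrue, and the "if"
shorthand would expand to that call.

#include <functional>
#include <iostream>

// The "if" shorthand would expand to a call of ifTrue on one of these.
struct Boolean {
    virtual void ifTrue(std::function<void()> body) const = 0;
    virtual ~Boolean() {}
};

struct True : Boolean {
    void ifTrue(std::function<void()> body) const override { body(); }
};

struct False : Boolean {
    void ifTrue(std::function<void()>) const override { /* do nothing */ }
};

int main() {
    True yes;
    False no;
    yes.ifTrue([] { std::cout << "taken\n"; });  // prints "taken"
    no.ifTrue([] { std::cout << "skipped\n"; }); // prints nothing
}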

I'm not too sure what you mean by describing other objects.  Did you
mean the capabilities you were referring to before, about allowing the
editor to adapt to objects more intelligently via active method calls?

> More likely the compiler kernel (I don't think the VM kernel has much
> to do with source code). Parsing will be a great deal simpler;
> sensible error treatment as well because many errors will be caught at
> edit time or won't happen at all. I'm really fed up with the sort of
> error messages some C++ compilers generate (but to be fair, that's
> also due to the syntax of C++).
> Anyway, it's not for now - there has to be a suitable editor framework
> first.

Yes, one of the advantages is moving error detection as close as
possible to where the errors are made.  This is very important for
quick development.  Most of the parsing work doesn't really get
removed, just moved into the editor, at least with textual editing.  In
fact, with incremental compilation, you're really merging the entire
editor and the entire compiler.  One thing I learnt about at uni was
recursive descent parsers.  They sometimes emit strange messages, since
they have code like:

if have("if") then if_statement
elseif have("for") then for_statement
elseif ...
else musthave("while")

As a consequence, if the next token is "procedure", the parser says
"while" expected, whereas it really expected any statement.
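
A sketch of how the messages could be made honest - have, error and the
statement parsers below are invented stand-ins for the pseudocode
above, written in C++ for concreteness:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical token stream and helpers, mirroring the pseudocode.
std::vector<std::string> tokens = {"procedure", "p"};
std::size_t pos = 0;

bool have(const std::string &t) {
    if (pos < tokens.size() && tokens[pos] == t) { ++pos; return true; }
    return false;
}

void error(const std::string &msg) {
    std::cerr << "error: " << msg << " before \"" << tokens[pos] << "\"\n";
}

void if_statement() { /* ... */ }
void for_statement() { /* ... */ }
void while_statement() { /* ... */ }

// Dispatch on every statement starter; if none matches, report
// "statement expected" rather than naming only the last alternative.
void statement() {
    if      (have("if"))    if_statement();
    else if (have("for"))   for_statement();
    else if (have("while")) while_statement();
    else    error("statement expected");
}

int main() {
    statement(); // prints: error: statement expected before "procedure"
}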

Don't get me wrong - parsing in intelligent editors is very tricky.  It
is not well understood, and in that sense we will be treading on shaky
ground.  But it's a worthwhile cause.  The trick is to keep most of the
flexibility of text while gaining intelligence; it is a matter of
tipping the scales.

> Depends on what they generate. If they transform a syntax into
> function calls directly, I'd rather not have them - too inflexible. We
> want to be able to supply different syntaxes at runtime, and they
> should be definable in Ultra.
> If comp.gen's produce tables, to be processed by some (syntax
> independent) kernel, that would be better.

They can produce anything you want; for example, in bison there are
places to put C code.  They're more like a framework.

To produce code at runtime using bison, we would need to be able to
interface with C.

> But I actually thought of being dependent on the Java VM, for example -
> something that is beyond our control, but can cripple a design because
> it simply may not support things we deem necessary. Java wasn't
> designed to be a platform for things like Ultra, after all. It's
> somehow like programming everything in, say, Cobol and then have to
> switch languages in mid-project because we'd realize that Cobol doesn't
> support the things we want. (Of course, it's not nearly like that, but
> maybe you get the point).

Yes, I was thinking about that, but I never said we should restrict
ourselves to the JVM long term.  I think we should leave the back-end
open for whatever target you want.  Some libraries would not work under
certain platforms - that's a problem for cross-platform compatibility -
but we shouldn't necessarily have to write for the lowest common
denominator.  For example, any Applet library would only work under the
JVM.

-- 
     Matthew Tuck - Software Developer & All-Round Nice Guy
                              ***
       Check out the Ultra programming language project!
              http://www.box.net.au/~matty/ultra/