Steps
Matthew Tuck
matty@box.net.au
Mon, 01 Mar 1999 22:00:36 +1030
Hans-Dieter.Dreier@materna.de wrote:
> Maybe someone should look out for GUI and CUI packages that are simplest
> to use, compare them and then we will see. As to how intelligent an editor
> you can write using a CUI, what does "intelligence" have to do with the UI
> chosen?
A lot of intelligence has to do with user feedback, and hence with the interface.
> OTOH, if we use a GUI, we might be constrained by its API (callback
> routines, memory objects that do not fit well into our environment, to
> name a few).
Well we would be using the Ultra GUI capability if we wrote it in Ultra.
> IMO the editor should not be the first step, since in the beginning most
> programming will be in C++ anyway. I think it is not a good idea to try to
> center development around the editor - that might bias a lot of design
> decisions away from general solutions, towards GUI support (which is quite
> important, to be sure, but not for _general language design_).
I didn't say centre development around the editor - language development
will continue. Once we have a bootstrapping capability, I would rewrite
the compiler and VM in Ultra, followed by beginning development on the
editor. The compiler and VM would need file facilities, and the editor
would need UI facilities.
> I wouldn't either, but probably for different reasons: I would (at
> least try to) use a Windows TreeControl, try to re-feature that so that it
> supports items of variable height, try to use Windows RichEdit controls
> for the items, and that's it. But alas, then it's tied to Micro$oft. I
> wouldn't like that.
Is the reason you want to go to CUI because of the extra effort of GUI?
If so, would the necessity go away if we could initially do the Ultra
GUI capabilities with an existing GUI toolkit?
> By "translation", do you mean porting a CUI application to a GUI
> environment? That has been tried and I always found the results not really
> convincing. If the CUI were designed with that translation already in
> mind, that might make a difference, however.
No, I was referring to when we change the language and translating our
old programs to work with the new APIs to avoid legacy shackling.
>> If you don't want a feature, you don't use it.
> But you have it in your header files and libraries, and maybe have to link
> it as well.
Yeah, it's going to be there, but it should be possible to optimise it out.
> GUI stuff IMO tends not to be as modular as one could wish.
How do you mean not modular? What examples are you familiar with?
> Sorry, I didn't express myself clearly. By "bootstrapping" I meant to
> produce *executable* code - not generated C source code.
They're the same thing. If you have C (or Java) code you can make
executable code on all platforms. That's the quickest way for us to do
that. The point is we have a reasonably fast implementation to base
bootstrapping on. It's not a permanent solution.
> IOW, the Ultra source code of the whole system goes in, and the executable
> that is just compiling itself comes out, without C compiler or linker.
> Maybe I'm biased by my Centura experience - they do it exactly like this,
> and I loved it.
Sure, it is best if we can translate to native code, but this has to be
done on many platforms. We don't have the people or the knowledge to do
this for a while. Although I imagine someone here would know a few
things about machine code for some processor and the OS interfaces.
> You may write it "monolithic" (that word smacks of dinosaurs and Microsoft
> Word, I know, which is quite intended here) and add "features" to that
> monolith afterwards. IMO that is what you're ending up with if you produce
> C code (no offense intended :)
I can't see it would be monolithic. Just as we would have an AST-to-C
converter, we could write an AST-to-native converter. That converter is
only a fraction of a normal compiler.
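To illustrate why such a converter is only a fraction of a compiler: for
expressions it is just a recursive emitter. This is a purely illustrative
C++ toy; the names `Node` and `emitC` are mine, not anything from the
actual project.

```cpp
#include <cassert>
#include <string>

// Illustrative AST: leaves are integer literals, interior nodes are
// binary operators. The real Ultra AST would be richer, of course.
struct Node {
    char op;                  // '+' or '*', or 0 for a literal
    int value;                // payload when op == 0
    const Node *left, *right;
};

// Emit a C expression for the tree. A full AST-to-C converter also
// handles statements, declarations and a runtime, but the core is
// this small.
std::string emitC(const Node &n) {
    if (n.op == 0)
        return std::to_string(n.value);
    return "(" + emitC(*n.left) + " " + n.op + " " + emitC(*n.right) + ")";
}
```

The point is that all the hard analysis lives in the front end; the back
end just walks a tree it already trusts.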
> Or you might compose it from componentware (COM objects, for example),
> small building blocks that fit together nevertheless, because they were
> written to common standards. They are written and refined in small steps
> and you get a working environment almost instantly - even before the
> language design has been done. The top language level ("glue") would be
> some scripting language.
I have to admit to not really understanding how COM and stuff works;
would you be able to explain it?
> The building blocks are VMs, as you might have guessed, and the "glue"
> language is Ultra (or object loader, for a start). The MCs (and VM and MM)
> are the only parts that are written in C++ (Some day in Ultra, hopefully).
> The compiler will produce ASTs or threaded code right from the start, not
> C code. This is much easier to do and the result can be executed
> immediately, with no intervening steps.
Yes, I agree. Perhaps I should illustrate my own steps.
1a. Write a text-to-AST compiler.
1b. Write a simple AST VM.
2a. Write an AST-to-C generator.
2b. Include the library features necessary for bootstrapping.
3a. Rewrite the compiler in Ultra.
3b. Rewrite the VM in Ultra.
3c. Include the library features necessary for the GUI.
4a. Write the editor.
4b? Write AST-to-native generators.
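To make step 1b concrete: a minimal AST VM can be little more than a
recursive walk over whatever the step 1a compiler produces. Toy C++
only; `Ast` and `eval` are illustrative names, not project code.

```cpp
#include <cassert>

// Same toy AST shape as a text-to-AST compiler (step 1a) might emit.
struct Ast {
    char op;                  // '+' or '*', or 0 for an integer literal
    int value;                // payload when op == 0
    const Ast *left, *right;
};

// Step 1b: the "VM" is just a tree walk. No flattening, no bytecode;
// whatever the compiler emits is executed directly.
int eval(const Ast &n) {
    switch (n.op) {
        case 0:   return n.value;
        case '+': return eval(*n.left) + eval(*n.right);
        case '*': return eval(*n.left) * eval(*n.right);
        default:  return 0;   // a real VM would signal an error here
    }
}
```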
> The quickest way: Perhaps in the short run (though I wouldn't take that
> for granted), but we would be tied to the to-C route for an awfully long time after
> that.
I don't see it like that. Portable code ensures we can have people
writing stuff on all platforms. Nuisances like having to have a C
compiler will only encourage us to do better.
> It also means that the language must be (almost) fully designed, and the
> compiler be ready. This will take its time.
I don't intend to stop language design anytime in the next few years.
See my legacy shackling posts.
> >I think we're coming from different backgrounds here. You're proposing
> >a minimal editor, while I'm proposing a minimal interpreter.
>
> I don't agree here! If you take a look at the VM input code sketched in
> the original posting, you will see that it is *designed* to be interpreted
> by a *minimalistic* VM. I'd be surprised if it has more than 10 lines of
> code, including comments.
10 lines? Are you serious?
> The editor is much, much bigger - and it is not intended to be minimal (at
> least not as far as functionality is concerned). If I'd want a minimal
> editor, I'd rather use a text editor off the shelf.
I meant at first.
> (I really wish you could check out Centura -
> that's my guiding star as far as editing is concerned).
Is Centura commercial? Available as a demo? I'd certainly look at it
if I could.
> But it need not necessarily be *the* new stuff - I got some more ideas,
> and others too, certainly...
I intend to start up an intelligent editing catalogue shortly.
> Maybe *my* main motivation is to stay in control of memory layout - I
> admit that. It's just a feeling: The whole thing might get screwed up if
> we cannot properly control how memory layout is done.
I don't see it as that. But if we see that this is preventing a new
language feature then it's time for a next generation VM.
> Maybe, but that would mean very limited system run time if there were no
> way to reclaim significant amounts of memory other than GC. BTW, in an
> environment that can be designed as GC friendly as one might wish, writing
> a GC should be really easy.
Writing a simple GC should be easy. Writing an efficient GC is much
harder. Writing no GC is the easiest.
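To make the simple-versus-efficient point concrete, here is a toy mark
phase over a fixed object table. Everything a real collector has to
worry about (root discovery, actually freeing memory, pause times) is
exactly what's missing. Illustrative C++ only, with made-up names.

```cpp
#include <cassert>
#include <vector>

// Toy heap: objects reference each other by index; -1 means "no child".
struct Obj {
    int child = -1;
    bool marked = false;
};

// Mark everything reachable from the roots, then "sweep" by counting
// the live objects (a real sweep would free the unmarked ones).
int collect(std::vector<Obj> &heap, const std::vector<int> &roots) {
    for (int r : roots)
        for (int i = r; i != -1 && !heap[i].marked; i = heap[i].child)
            heap[i].marked = true;
    int live = 0;
    for (const Obj &o : heap)
        if (o.marked) ++live;
    return live;
}
```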
>> I would not bother working too much on the memory system until we're
>> ready to bootstrap. Once we have an interpreter, people can start
>> writing a compiler in Ultra, followed by a bootstrap.
> I don't agree here. We need a memory system to get the interpreter (VM)
> running.
I didn't say we didn't need some sort of memory system. But the simplest
sort of memory system is one that wraps around the language's system,
and the idea is to get an original implementation done as quickly as
possible.
>> A conservative GC would make our programming a lot easier. The problem
>> would disappear after bootstrapping.
>
> True. It also would make the outcome more unpredictable. It *is* possible
> to break a C++ GC if you do the wrong thing. I'm not convinced, but I also
> haven't worked with GC in C++ yet. It would be interesting to hear some
> opinions of people who have experience in this area.
Well I'm not experienced in it either, but I've heard people who know
more than me say it's not a major problem. You might want to look at
the garbage collection FAQ at http://www.iecc.com/gclist/GC-faq.html.
There's also a GC mailing list. To subscribe you can send a "subscribe
gclist" to "majordomo@iecc.com".
> Another point is that we would be stuck with that C++ GC even after
> bootstrapping (if I understand that right), since it is C++ code that gets
> compiled to machine code. I'd think that a GC built for C++ cannot be as
> fast as one that has been designed right into the memory system.
Once we bootstrap and have some native converters it will go away.
> >What I suggested was the optimiser converting several
> >language objects into one VM object. As far as the VM is concerned
> >there is one object per memory allocation block.
> >
> >By "memory allocation block" I'm assuming you mean a block of memory
> >suballocated from a large block allocated from C, or under my proposal,
> >something actually allocated from C.
>
> Sorry, I'll try to make things clearer:
Ok.
>> Yes, it would be done with C code. So the point is not to have to
>> recompile code to change the initial value of the objects. What sort of
>> things would you want to change? Are you just looking for a sort of INI
>> file? If so, that would be better than implementing a whole persistence
>> system to be thrown away.
> Maybe you got the intention of a persistence system wrong: It is designed
> for efficiency, to get to highest possible throughput and (secondarily)
> small files. Therefore it stores its data in binary form, preferably as a
> memory image.
> Since it is not editable, it cannot be used to change a test setup.
Maybe I misread your message, but I was referring to your "object
assembly code".
> I see. Well, since classes should be objects, there is no distinction
> between them: They are all stored in objects and can therefore all be
> manipulated using the same devices. Handling different *contents* may
> demand different tools, however.
Ok, so you want your assembly to allow code entry.
>> Why bother flattening it? I personally think executing an AST would be an
>> interesting way of doing things.
> Pure execution speed. If it is laid out so that it can be executed
> linearly, that is fastest. It is also very compact, since less pointers
> are involved. But *how* the tree is *implemented* should make no
> difference to a higher-level tool where speed is not of premier
> importance, since tree classes will hide all those nasty little details
> from their clients.
That may be true, but my primary concern was really that an AST VM would
be quicker to implement, since there would be no conversion from the AST
coming out of the compiler.
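To illustrate the speed argument being made for flattening: execution of
the flattened form is one linear scan over an array, with no tree
pointers chased. This toy postfix stack machine is my own illustration,
not the layout from the original posting.

```cpp
#include <cassert>
#include <vector>

// Flattened code: non-negative entries push a literal; the sentinels
// below pop two values and apply an operator. Execution is a single
// linear pass, which is the whole point of flattening.
enum Op { ADD = -1, MUL = -2 };

int run(const std::vector<int> &code) {
    std::vector<int> stack;
    for (int c : code) {
        if (c >= 0) { stack.push_back(c); continue; }
        int b = stack.back(); stack.pop_back();
        int a = stack.back(); stack.pop_back();
        stack.push_back(c == ADD ? a + b : a * b);
    }
    return stack.back();
}
```

Compare this with the recursive AST walk: same result, but the AST walk
needs no conversion step, which was my point about implementation speed.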
> >How is this executing one object?
> If by "one object"
You made the statement "execute an object", I was just trying to
determine meaning. =)
> you mean the "*" operator, for instance, it would be
> done by the code that recognises the NULL behind the Multiply. That would
> fetch the topmost stack item, cast that to a MC and call the code pointed
> to by some member variable (which is actually payload of the referenced MC
> object).
Ok so "executing an object" means performing an operation.
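If I've understood the quoted design, a toy version might look like the
following. The names `MC`, `code` and `value` are my guesses at your
classes, not the real thing: the payload of an MC object is a code
pointer, so "executing" it means calling through that pointer.

```cpp
#include <cassert>
#include <vector>

struct MC;
using Code = void (*)(std::vector<MC *> &stack);

// Illustrative MC object: its payload is a code pointer plus a value
// slot used by literals and results.
struct MC {
    Code code;   // the routine this object performs
    int value;   // extra payload
};

// A Multiply MC: consumes the two literal MCs below it on the stack.
void multiply(std::vector<MC *> &stack) {
    MC *b = stack.back(); stack.pop_back();
    MC *a = stack.back(); stack.pop_back();
    static MC result{nullptr, 0};
    result.value = a->value * b->value;
    stack.push_back(&result);
}

// "Performing an operation": pop the topmost item and run its code.
void execute(std::vector<MC *> &stack) {
    MC *top = stack.back();
    stack.pop_back();
    top->code(stack);
}
```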
> You got it! If you look at the ordering I gave in the original posting,
> you can see that the compiler comes much later.
To tell you the truth I can't see the compiler in there at all, other
than maybe "parser skeleton".
> Where do you want to store it in ?
Well it would just dump an AST wherever you told it to.
> How do you want to test & try ?
Well, I figured on implementing them in parallel: the compiler could be
tested with an AST dumper and by the VM checking its output, and the VM
could be tested with the ASTs from the compiler.
> Of course design is parallel, but implementation is another issue.
I can't see any reason why my way wouldn't work. The compiler is
probably logically completed before the VM. Earlier stages of a
compiler are usually completed first, since you need their output right
to test the next stage.
> I wouldn't like a parser generator. It would involve yet another step in
> the pipeline from input to executable code, yet another tool, a more
> complicated build process. And what for? If you take a look at the code I
> submitted as a parser skeleton, it should become clear that it ought to be
> pretty fast - like VM - no need for a hard-coded parser here - at least
> not yet. And since it relies on components (MCs), my comparison concerning
> "monoliths" applies here as well...
Perhaps you're misinterpreting me here. Parser generators generate
parsers, and hence are not a part of the compiler. They're tools so you
can change the parser easily.
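Either way, the artefact is just parsing code. A hand-written
recursive-descent fragment for sums of single digits is only a few
lines; this is purely my illustration, neither your skeleton nor any
generator's output.

```cpp
#include <cassert>
#include <string>

// Parse "d+d+d..." where d is a single digit, returning the sum.
// pos is advanced past whatever was consumed, so a caller can keep
// parsing from where this production stopped.
int parseSum(const std::string &src, size_t &pos) {
    int total = src[pos++] - '0';            // expect a digit
    while (pos < src.size() && src[pos] == '+') {
        ++pos;                               // consume '+'
        total += src[pos++] - '0';           // digit after '+'
    }
    return total;
}
```

A generator's value is in regenerating this sort of code automatically
when the grammar changes, which is the "tool, not compiler part" point.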
> I'm not sure whether I understand right. Do you propose different
> languages to be used in parallel, to support the different tasks that are
> to be done in one project?
Yes. It's a tree structure. Check out my "Translational Hierarchy
Framework" message. You must have missed it. =)
> I think I wouldn't do it that way. Although I'm a fan of using components
> for functionality, that doesn't apply to the user interface. The user
> interface must be as clearly laid out and uniform (yeah, monolithic) as
> possible. Having to use different tools that have different interfaces and
> that inevitably do not work together well makes me sick.
Well, it's not going to be "just one Ultra", but the idea is to make it
easier. If it doesn't, you wouldn't use it. Ideally the programmer
could specify the translation hierarchy.
> Nice idea. I didn't see it that way. Hey, it would be possible to write an
> extendable programmable video game for the kids, not just point-and-shoot.
> That is what I always looked for but never found.
Frameworks like this are already quite possible. So are wizards and
stuff. So is allowing the programmer to extend their development
environment. I'm really just putting them all together.
> Both. While developing, you got all in memory, as objects. You do not
> start "manpages" or "winhelp" to get at your docu. It's already present as
> outline items. It's searchable as well, and all the links are there
> (provided by the structure of the source). It may be true that many
> existing systems have complete integration of the programmer's reference,
> but certainly not all of them.
Any system that claims to have a decent debugger can show the source.
> Look at VC++, which does a nice job as far as documentation is concerned.
> Even they have problems: You mark the keyword "IUnknown" and hit F1. Up
> pops not the section you wanted to get to, but rather a dialog which
> prompts you to select among a lot of alternatives, for each class that has
> such a member. Because it doesn't know a thing about the *context*. You
> even get Java stuff when you are actually using C++. If you try a keyword
> that you defined yourself, you get no hits at all. If help were really
> integrated, you would get to the right place instead. If there is any docu
> for that item, of course.
Can you explain a bit more what you mean by help for an item here? Are
you referring to language help or program help?
> Linking on the fly is possible, of course. But I didn't mean *link*, I
> said *compile*. That makes a difference. You can only link pre-packaged
> things, but you compile *source code* (i.e. plain text), hence you have a
> lot greater expressive power.
True but the way Java works, actually taking potentially foreign code,
is useful. For example, any view is foreign to the editor, and should
only integrate through a specific interface.
--
Matthew Tuck - Software Developer & All-Round Nice Guy
mailto:matty@box.net.au (ICQ #8125618)
Check out the Ultra programming language project!
http://www.box.net.au/~matty/ultra/