Programming project

Tanton Gibbs thgibbs@hotmail.com
Wed, 02 Sep 1998 14:26:03 PDT


>So here we're getting back to making like constructs look alike?  Does
>Perl allow do until and repeat while?
  Yes, it does.

>Ada has the same thing, although it looks to me from your code that Perl
>uses open form construction, while Ada uses closed form construction.

Just for the record, I like closed form construction.  I think it is 
neater and more readable than open form.


>My real qualm with the way this is done in Ada (and in Perl?), is that
>the "exit if" form is only for the exit statement.  WHY?  

I don't know about Ada, but I know Perl allows if, unless, while, and 
until as modifiers after any simple statement.  So, you could say:
print "X\n" if $y;
$i++ while $i < 10;
warn "no C\n" unless $c;

I'm not really fond of this, because there are already structures that 
do the same thing, but it reads more like English, and I can see some 
(if little) advantage in allowing it.

> both
>object oriented and imperative languages are quite complete given
>adequate primitives.  
  I meant that the design and analysis of systems based on the 
object-oriented approach is still under development, not the language 
concepts themselves.  The waterfall method, etc., of structured 
programming has been around long enough to be practical and easy to 
follow, whereas many of the prototyping/rapid-prototyping approaches of 
OOP are relatively new, and many programmers are scared of or 
unaccustomed to them.


>I'm not sure how this prevents unused submodules from being compiled in
>though.  When you use import it is invariably to import a whole raft of
>submodules/classes whatever (using * in Java), otherwise there would be
>way too many imports.  

I don't mind importing submodules, but I think that, in the end, the 
functions that are used, along with the data they use, should be 
extracted to a common object file, which would greatly reduce compiled 
size.  In other words, if a function is not used, its compiled code is 
not included.  I realize this may be a pipe dream, but it is worth a 
shot.
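
Something like the following is what I have in mind (a toy sketch in 
C++; the names are made up, and whether the linker really strips the 
unused code is exactly the part I'm calling a pipe dream):

// module.cpp
int used_helper( int x )   { return x + 1; }  // called from main(), so it stays
int unused_helper( int x ) { return x * 2; }  // never called; ideally stripped

// main.cpp
int used_helper( int x );
int main() { return used_helper( 41 ); }      // the final binary should carry
                                              // only used_helper()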

>I've talked about free methods, but are there any other ways you think
>OOP restricts you?

Well, the design considerations of the OOP paradigm are completely 
different, and one does not easily go from one to the other.  Although 
it may not sound like it, I'm a big fan of the OOP paradigm and I write 
most of my programs in an OO fashion, but when someone hands me an 
iterative document sheet, I usually can write the program structurally 
faster than I can redesign the project in an OO fashion and then code 
it.  Sometimes, speed outweighs quality.

>> >Thirdly, that value (1 in the previous case), might not be
>> >statically ...etc...

I'll address this issue in another letter.  C++ does it by using 
binders...I'll research that solution more.

>"I think a true class is overkill in this situation" reflects you are
>worried about one of two things.
>
>(a) The syntactic inefficiency of class definition.  You can use the
>free method shorthand.
>(b) The inefficiency of class implementation.  All of this can be
>optimised out if unnecessary, and should already be happening in modern
>OOP compilers!  Not just in this situation, but all situations where
>classes are being used.  Pity it's not to my knowledge.

I am worried about (a), but I don't necessarily want a shorthand.  If I 
want a free method, then all I want to think about is defining a free 
method, not a shorthand notation.


>I know namespaces are a fairly recent C++ feature, but I'm not familiar
>with them yet.  Would you mind briefly explaining them.  Are they any
>different to singular objects?

A namespace is just a collection of objects and functions in a newly 
declared scope, so they don't clutter the global namespace.  Namespaces 
are very similar to, if not exactly like, singular objects.
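
A quick sketch of what I mean (the names are just for illustration):

namespace Geometry
{
    const double pi = 3.14159265358979;
    double area( double radius ) { return pi * radius * radius; }
}

// Callers write Geometry::pi and Geometry::area( 2.0 ), so neither
// name clutters the global scope.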

>> I don't believe that I have ever seen this proposal for strings.  The
>> reason being is such:  How do you do the underlying implementation of
>> the string interface?  I would guess through a vector of characters or
>> char* thereby we are back to the C way...my personal favorite :)
>
>Actually, no.
>
>The string interface doesn't have an implementation.  The interface is
>implemented as an abstract class and the implementations as concrete
>classes, but they are not shown that way to the programmer.  There are
>two ways of viewing this sort of scheme - what I think you're referring
>to here, having a class B where implementation conversions go A->B->C,
>or going directly A->C.  This is done with the "Builder" design pattern.

I'm just wondering how the string class is finally implemented: using an 
array of characters, or what?  Whatever the final implementation is, we 
should support it as a kind of string too.
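
To make the question concrete, here is roughly what I picture: an 
abstract interface with one concrete class behind it that finally falls 
back on an array of characters (all the names are invented for the sake 
of the example):

#include <cstring>

class String                              // the abstract interface the user sees
{
public:
    virtual ~String() {}
    virtual unsigned length() const = 0;
    virtual char     charAt( unsigned i ) const = 0;
};

class CharArrayString : public String     // one possible concrete implementation
{
public:
    CharArrayString( const char* s )
        : len( std::strlen( s ) ), data( new char[ len + 1 ] )
    { std::strcpy( data, s ); }
    ~CharArrayString() { delete [] data; }

    unsigned length() const             { return len; }
    char     charAt( unsigned i ) const { return data[ i ]; }

private:
    unsigned len;
    char*    data;                        // in the end, still characters in an array
};

If that is where the chain bottoms out, then the char-array version 
should itself be available as one of the kinds of string.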

>The alternative is to place this information in the class itself.  Now
>I'll admit that placing a preferred implementation in a class (which,
>remember is abstract), before the implementation is even declared is not
>perfect, but it seems to me to be the best idea.

I agree, placing a preferred implementation in the interface is a good 
idea.  It is almost like a default template parameter in C++, for 
example:
template< class T = int >
class Stack{ ... };

This makes a stack class that will normally use ints but can be changed 
to use anything.  The interface could be used in much the same way:
class String( Abstract, default = StaticString )
{ ... };

etc...***please ignore the syntax, I'm trying to communicate the idea in 
a more visual way***
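
Just to show how the default kicks in on the C++ side (a minimal usage 
sketch):

template< class T = int >
class Stack { /* ... */ };

Stack<>       s1;   // no argument given, so the default (int) is used
Stack<double> s2;   // the default can still be overridden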

>> I don't know, I haven't looked at it, but I wouldn't imagine optimizing
>> at the compiled level anyway, naturally an intermediate language should
>> be formed, either a trinary or quatranary stack language to easily
>> optimize similar statements and then an AST to handle data flow and
>> larger structures.
>
>Yes, but I think to a degree you have to optimise at all levels
>(peephole opti on low level code).  The only problem is if the low level
>opti reveals another opti that could only be done at the higher level. 
>For these reasons the opti should be done at the lower level as well if
>possible, but might not be as easy to do.
>
>Having two representations of our program might make this a problem.
>Or instead of in sequence, you could maintain them in unison.  That
>would be interesting!
>
>You might have realised I'm in favour of optimisation almost at any
>cost.  I figure that the oft touted tradeoff of run time vs.
>compile time is not really as significant as some people would have you
>believe.  When developing you're not too concerned with run time
>efficiency, but you compile a lot, so you're concerned with compile time
>efficiency.  While when compiling your final build, you're not too
>concerned with compile time efficiency, since the final build is
>important, but are extremely concerned with run time efficiency.  So
>different optimisation options are necessary for different times!  A 2%
>increase in run time for a 1000% increase in compile time IS worthwhile
>for a final build (wherein we'd bring in some of the more rare
>optimisations).

I agree with this completely.

>
>This of course means that profiling could be potentially misleading - it
>must always be done with the final build options on.  

In fact, this is what is spawning a new generation of 
compilers/profilers that do runtime optimizations based on program runs.

>
>I'm not too familiar with the triple/quad representations.  I know what
>they are, but how do they compare to ASTs in terms of ease of use in
>discovering and performing optimisations?

They differ in their applicability.  Triples and quads are more 
concerned with finding memory savings, eliminating temporary variables, 
and deciding when to schedule nearby operations (for example, certain 
operations on the Pentium can be paired and should be placed next to 
each other).  They are important, yet different from ASTs.
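
A rough sketch of the kind of quadruple record I have in mind (the 
names and fields are illustrative only):

#include <string>

enum Op { ADD, MUL, SUB, DIV };

struct Quad                       // one quadruple: result = arg1 op arg2
{
    Op          op;
    std::string arg1, arg2;       // operand names or temporaries
    std::string result;           // destination temporary
};

// "a + b * c" might lower to the flat sequence
//     { MUL, "b",  "c",  "t1" }
//     { ADD, "a",  "t1", "t2" }
// and it is easy to scan such a list for redundant temporaries or for
// adjacent instructions that the Pentium could pair.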


Tanton
