[OT] We need a language

Alaric B Snell alaric@alaric-snell.com
Fri May 30 09:07:02 2003


>>>"True greatness is measured by how much freedom you give to others, not
>>>by how much you can coerce others to do what you want. "
>>>Larry Wall "Perl, the first postmodern computer language"
>>>http://www.wall.org/~larry/pm.html
>>
> 
> This doesn't change the fact that perl sucks :P
> 

I'd question any definition of greatness that, funnily enough, focuses 
on exactly the characteristics the author is famed for!

> I don't see why we can't have both. For example, on the static vs. dynamic
> typing argument, we could have the implementation attempt to infer type, but
> if it failed, output a warning and dynamically check types; an option to
> enforce (even explicit!) static typing could be available.

At last! Somebody who agrees with me, seeking to unify both camps rather 
than pitch into the fight! :-)
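
As a rough sketch of that kind of spectrum, in TypeScript syntax (picked 
only because it already spans both ends; everything below is 
illustrative, not a design for the hypothetical language):

  // Inference succeeds: `total` is statically known to be a number.
  const total = 6 * 7;

  // Inference fails (the data comes from outside), so fall back to
  // dynamic behaviour - `any` defers all checking to run time, and a
  // compiler option could turn this fallback into a warning.
  function port(config: any) {
    return config.port ?? 8080;
  }

  // Explicit static typing, opted into where it is wanted.
  interface Config { port?: number; }
  function portStrict(config: Config): number {
    return config.port ?? 8080;
  }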

> Both approaches have their strengths and weaknesses - the obvious one is that
> the "artistic" tack allows more creativity and flexibility, and is therefore
> infinitely superior for prototyping. However, I think it's even a good idea
> (given a sufficiently capable development environment) to start a project in
> this "artistic" method, and perhaps, as more requirements/uses surface and a
> clear niche for the program emerges, switch gradually to a more statically
> verified method, ending up with a mature, solid, bug-free project with a
> minimum of work.

Yes, that's a consequence (as I see it) of the 'Sensible Defaults' 
thing. It also applies to information that affects performance; quite 
often there is a tradeoff between generality and performance in a 
compiler's implementation of something, and telling the compiler "This 
value will never be NULL" or "this operation will never fail" or "this 
variable will only ever take one of three values" allows it to cut out 
a lot of run-time checks. So if the 'sensible default' is the most 
general one with all the checks in, the developer can go back later and 
fine-tune things by adding more restrictions, or by tweaking "hints" to 
optimisation algorithms ('this branch will be taken more often than 
that one' and so on).
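
To sketch the "add restrictions later" step concretely (TypeScript 
syntax again, names made up; TypeScript enforces these restrictions at 
compile time rather than using them for optimisation, but the shape of 
the change is the same):

  // Most general form: anything may come in, so every restriction has
  // to be re-checked at run time inside the function.
  function area(shape: string | null, size: number): number {
    if (shape === null) throw new Error("no shape given");
    if (shape === "circle") return Math.PI * size * size;
    if (shape === "square") return size * size;
    if (shape === "triangle") return (Math.sqrt(3) / 4) * size * size;
    throw new Error("unknown shape: " + shape);
  }

  // After adding restrictions: never null, only three possible values.
  // The run-time checks above can be dropped, because the restriction
  // is now enforced at every call site instead.
  type Shape = "circle" | "square" | "triangle";
  function areaRestricted(shape: Shape, size: number): number {
    if (shape === "circle") return Math.PI * size * size;
    if (shape === "square") return size * size;
    return (Math.sqrt(3) / 4) * size * size;  // must be "triangle"
  }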

> This I think is far superior to the "engineering" approach because no matter
> how much planning is put into the design of a project, it will not be
> sufficient; flexibility is of the utmost importance early on.

A lot of the literature on software engineering incorrectly assumes 
total knowledge is available to begin with, so the designer produces a 
single monolithic system that precisely meets those needs...

I'm considering formalising and writing up an approach that's not based 
around splitting the work into modules, as is the norm - but splitting 
it into *interfaces*. The interfaces are what define a system; the 
modules just implement the interfaces. If you design good interfaces, 
rather than just writing a layer of functions on top of something and 
then exposing those (which tends to produce an interface that carries 
assumptions about the structure of the implementation beneath - thus 
constraining future evolution of that implementation), then you'll end 
up with a much more flexible and better-modularised system.
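
A tiny sketch of the shape of that, in TypeScript (all names made up 
for illustration): the interface states what callers may rely on, and 
the module behind it can be swapped without touching them.

  // The interface is the design artifact: it says what callers may
  // rely on, and nothing about how the data is actually stored.
  interface AddressBook {
    lookup(name: string): string | undefined;   // name -> email
    add(name: string, email: string): void;
  }

  // One module implementing the interface; a database-backed or
  // network-backed implementation could replace it with no change to
  // any caller.
  class InMemoryAddressBook implements AddressBook {
    private entries = new Map<string, string>();
    lookup(name: string): string | undefined {
      return this.entries.get(name);
    }
    add(name: string, email: string): void {
      this.entries.set(name, email);
    }
  }

  // Callers depend only on the interface.
  function invite(book: AddressBook, name: string): string {
    const email = book.lookup(name);
    return email ? "mail sent to " + email : "no address for " + name;
  }

The point being that AddressBook is designed from the callers' side, 
not derived from whatever InMemoryAddressBook happened to expose.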

IMHO.

> The important thing here is to give the programmer freedom, so I agree with
> Larry Wall in that respect. But freedom implies not the absence of rules, but
> rather their optional presence.

Yes! :-)

> -Jeff Cutsinger

ABS