Philosophical musings: bondage and discipline, and magic
Wed, 15 Sep 1999 02:34:54 -0600
I've heard Perl developers refer to some languages as "bondage and
discipline" languages. This term refers to the stringent requirements
some languages place on the programmer through their emphasis on
orthogonality. For every task, there's (supposed to be) exactly one way
to do it, and if you make even a small mistake, your program won't compile.
Perl advocates scoff at B&D languages, chanting their "There's More Than
One Way To Do It" slogan. But I disagree. Thomas Jefferson would be
rolling in his grave to hear me say this*, but I prefer bondage over
freedom for one compelling reason: it's safer.
Perl is a very flexible language. You can store both integers and
strings in the same variable, and Perl will automatically handle them
correctly. A B&D language like Java would never let you get away with
that. Perl will also let you use a variable that you haven't declared.
In fact, there are no declarations. A B&D language forces you to
declare every variable you use. What a pain! ...Right?
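Java's strictness can be sketched in a minimal example (the class and variable names here are mine, purely for illustration):

```java
public class Declarations {
    public static void main(String[] args) {
        int count = 42;        // every variable must be declared with a type
        String name = "Perl";  // a string needs its own, separately typed variable
        // count = name;       // would not compile: incompatible types
        // total = 1;          // would not compile: 'total' was never declared
        System.out.println(count + " " + name);  // prints "42 Perl"
    }
}
```

The two commented-out lines are exactly the kind of mistake the compiler refuses up front, before the program ever runs.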
Well, no. I work with both Perl and Java (a B&D language) on a regular
basis. Except for those rare cases when I'm converting a string to an
integer (or vice-versa), Java's bondage isn't a hindrance. On the other
hand, Perl's loose requirements always are. When I make mistakes --
something that happens all the time -- Java catches most of them when I
compile. Perl, on the other hand, doesn't. It only catches them when I
run the program, and even then only if the problem spot actually gets
executed. This is especially troublesome for me because I'm developing
web applications and the error messages don't show up in the web
browser, so I'm not always aware of the errors.
Let's put the question of bondage and discipline aside for a moment and
talk about magic. Specifically, magical compilers. Magical compilers
are cool... they can automatically figure out what you're doing and take
the appropriate action. As a simple example of magic, most compilers
(ironically, B&D compilers) feature automatic coercion of types -- so if
you say a=b and 'a' is a float and 'b' is an integer, the compiler will
automatically coerce 'b' from the 'integer' type to the 'float' type.
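In Java, for instance, that coercion looks like this (a minimal sketch; the variable names mirror the 'a' and 'b' above):

```java
public class Coercion {
    public static void main(String[] args) {
        int b = 7;
        double a = b;          // 'b' is implicitly widened from int to double
        System.out.println(a); // prints 7.0
    }
}
```

The widening happens silently, but it is well-defined and lossless, which is why even strict compilers allow it.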
That's such a simple example of magic that it doesn't really qualify.
REAL magic is jaw-dropping. You see it and you say, "My God! How did
it know to do that!?" Unfortunately, I can't think of any good examples
right now, because I'm tired and I avoid magic whenever possible.
What? I AVOID magic? From my description thus far, you might think I'm
a huge fan of magical compilers. But I'm not. Magic is a primitive
form of AI -- the compiler looks at the code and guesses what the
programmer wants. The problem is that it doesn't always guess right.
Actually, I'd go so far as to say that it usually guesses wrong.
The result? Major frustration. The point of magic is to make the
programmer's life easier by allowing him to leave out tedious
declarations and other code. But the actual result is that the
programmer spends far more time trying to figure out why his program
doesn't work the way he thinks it should. In the long run, he either
stops using magic or develops an intricate mental model of how the
compiler works.
Look at that last sentence again. In order to make magic work, you have
to develop an intricate mental model of how the compiler works. In
other words, magic makes languages HARDER to use, not easier, by
increasing the complexity of the language. The complexity increases
because all those magical elements interact in unpredictable ways. C++,
with its interactions between templates, operator overloading, and the
'const' keyword, is by far the worst offender in this regard that I know of.
So, what's my point? It comes down to this: When we program, we're
undertaking a complex mental endeavor. It's so complex that no real
program, not one, not even the hugely expensive and carefully-engineered
space shuttle software, is free of errors. We use programming languages
to reduce that complexity. When designing languages, we need to keep
this goal firmly in mind: The sole purpose of a programming language is
to make programming easier.
Some languages try to make programming easier by reducing the work
required to type in the code. Hence anti-B&D languages like Perl (which
don't require variable declarations) and magic languages like C++ (which
figure out things for you so you don't have to type them in).
But the amount of work required to type in the program is minuscule
compared to the work required to understand, debug, and maintain it.
So, in my opinion, the best languages of the future will require more,
not less, of the programmer:
1) They'll require as much information as possible to be declared up
front.
2) They'll require the programmer to explicitly specify every
operation, with a few commonly-used and well-defined exceptions.
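What an explicitly specified operation looks like can be sketched in Java, again with illustrative names of my own choosing. Converting a string to an integer -- the one conversion I mentioned above -- must be spelled out, and the compiler rejects any attempt to skip it:

```java
public class Explicit {
    public static void main(String[] args) {
        String text = "123";
        // int n = text;  // would not compile: the conversion must be explicit
        int n = Integer.parseInt(text);  // the programmer names the operation
        System.out.println(n + 1);       // prints 124
    }
}
```

The extra typing is trivial, and in exchange the compiler knows exactly what conversion was intended.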
But the above requirements will have significant advantages:
1) Most errors will be found by the compiler at compile time, not by the
user at run time.
2) Language semantics will be simple and clear, with no hidden magic.
As a result of these features, programs written with these languages
will be less buggy, easier to understand, and will take less time to
develop than their counterparts.
*I think Thomas Jefferson is the person who said "Those willing to
sacrifice freedom for security will get and deserve neither."