A few ideas
Dr. J. Van Sckalkwyk (External)
Fri, 5 Apr 1996 17:59:48 GMT+2
>> except that I think you're making things in the reverse order:
nice and clean definitions is up to the HLL,
while taking into account system considerations is up to the LLL.
A HLL is to define objects, heuristics, etc,
while the LLL implements representations, mechanisms.
If the HLL is top-down and the LLL bottom-up, they will meet in the middle.
If the HLL is bottom-up and the LLL top-down,
they will start in the middle of nowhere.
Well, perhaps I didn't understand what you mean.
Then please give back your examples, and explain how it differs
from either what the LLL or my Scheme-based metaprogramming
are heading for, what enhancements you see over them.
Please explain how architecture in/dependent your representation
would be, and what you expect to gain from it.
The way I see things is as follows:
1. The terms 'HLL' & 'LLL' are merely conveniences: whenever someone
is programming *anything* (s)he is merely tweaking vectors or
channelling flow of information in a sort of giant "vector space". We
are dealing with an integrated whole. Whether you say:
a = 1 OR MOV AX,1 is merely a matter of _perspective_.
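A minimal illustration of this "matter of perspective" point, assuming a C compiler targeting a register machine (the assembly shown in the comment is typical output, not a guarantee):

```c
#include <assert.h>

/* The same "tweak" of the giant vector space, seen at two levels:
 * in C we write an assignment; a compiler for a register machine
 * might emit something like "MOV AX, 1" for it.  The meaning --
 * store the value 1 somewhere -- is identical either way. */
static int set_to_one(void)
{
    int a;
    a = 1;      /* HLL view:  a = 1       */
    return a;   /* LLL view:  MOV AX, 1   */
}
```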
2. In order to achieve a stable programming environment (whatever the
so-called "level"), we need:
2.1 Independence from the underlying machine BUT
2.2 the ability to continually monitor performance (and even
to a degree predict performance)
2.3 Minimum complexity, and thus
2.4 Robust code.
3. If we intend to produce a system which works and is stable in a
variety of different environments, we need some common ground from
which to move. Below this "ground level" things might work
differently in different systems, but above it we would have
completely reproducible performance regardless of the system, with
the caveat that we would be able to use primitive instructions ON
this level to determine the feasibility (space and/or time
constraints) of any action, if required. This is my "SYMBolic
Assembly Language level" (let's call it "SYMBAL").
Now you might argue that one could simply use e.g. C (or Scheme, or
whatever) as our SYMBAL. Yes, we could. My _feeling_ is however that
we would end up doing so many ASM hacks (whatever the system) in order
to achieve the above goals, that it would be best to define things
afresh. We would also then not have the temptation to use features of
e.g. C just "because they are there", and we would not see things in
the light of the peculiarly skewed architecture of e.g. C. I am not
saying that we have to re-invent the wheel just for the hell of it,
but what I am saying is that the wheel *must* be re-invented if
all systems at present use square wheels (as I believe they do)!!
What are these square wheels? My partial list:
3.1 Featuritis. By this I mean bunging in a whole lot of "pretty"
features, without due consideration as to their impact. Apparently
innocuous "fundamental" architectural decisions (come on chaps, let's
have the following data types: char, int, UINT, long int, etc.) may have
a massive impact on performance and complexity e.g. in the extreme
case if we choose ten data types and have 100 functions that can each
accept two arguments, each argument being of any type [perhaps a
rather silly and extreme example] then we have 100*10*10 = 10 000
cases to account for; if we had just 3 data types we would have < 10%
of these to consider. A zillion different ways of e.g. accessing
files may give you a lot of pseudopower in the short term, but cripple
you in the long term due to costs of documenting all these frills, and
unwanted interaction between the frilly bits! We need to ask the
question: "What are the absolute minimum requirements for our SYMBAL,
in order to get full functionality?", and add NOTHING MORE. The answers we
get to this question may be very surprising!
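The arithmetic in that (admittedly extreme) example can be checked directly; a trivial sketch using the figures from the text:

```c
#include <assert.h>

/* Number of argument-type combinations a system must account for:
 * n_functions two-argument functions, each argument drawn from
 * n_types possible data types. */
static long type_combinations(long n_functions, long n_types)
{
    return n_functions * n_types * n_types;
}
```

With 10 types the 100 functions face 10 000 cases; with 3 types, only 900 -- under 10% of the burden, exactly as claimed.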
3.2 The microprocessor as slave-master.
Here, I mean that our ideas of how a system should function are subtly
(and sometimes, not-so-subtly) influenced or dictated by the processor
we use/are familiar with. It is not only the uP that has this effect:
cf. design cockups such as the 640K limit, and leaving out an inverting
buffer in the serial port of the original IBM-PC. A particular example
is the use of words of a particular width. The natural tendency is to
say "Hooray, we now have 32-bit registers, let's design everything
around a 32-bit word" (or whatever). Surely we should define the
"granularity" [or whatever] that we need in order to implement a real-
life system, and THEN use the available material optimally to
implement this? Sure, we may find that 32 bit words ARE cool (or 80
bit, or whatever), but I see this sort of thing as a fundamental
design requirement that is glossed over. It is NOT difficult to
implement 32 bit words on an 8 or 16 bit microprocessor, working from
the start; it _is_ rather tricky to go back and rewrite your whole
system (or live with a cripple) when you discover that your design
philosophy is fatally flawed. Or am I being silly?
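As a sketch of the claim that a wider word is not hard to build on narrower hardware, here is a hypothetical two-halves representation in C (a real port to an 8- or 16-bit micro would use the target's carry flag directly; the names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* A 32-bit word held as two 16-bit halves, as one might arrange it
 * on a 16-bit micro.  Addition propagates the carry out of the low
 * half into the high half by hand. */
typedef struct { uint16_t lo, hi; } word32;

static word32 add32(word32 a, word32 b)
{
    word32 r;
    uint32_t lo = (uint32_t)a.lo + b.lo;          /* may carry      */
    r.lo = (uint16_t)lo;
    r.hi = (uint16_t)(a.hi + b.hi + (lo >> 16));  /* add carry in   */
    return r;
}
```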
3.3 No/poor timing. (Timeout as opposed to timin')!
On most systems (especially those with virtual memory management) one
often has _no idea_ how long e.g. a particular disk access or memory
write is going to take. You may think that I am going completely over
the top in desiring this, but I think not: we have come to accept this
as a part of computer life, which is to my mind unacceptable. I think
that timing is _vital_, and will
become even more vital when/if we start using parallel systems more
extensively (more or less inevitable as we push the limit in terms
of cramming more transistors onto increasingly warm silicon)! I do
not mean that we need absolute precision in timing, but it's nice
to know whether something will take 1 microsecond or two seconds!
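A rough sketch of the kind of timing primitive meant here, assuming a hosted C environment and the standard `clock()` call (the helper names and the budget figure are made up for illustration):

```c
#include <assert.h>
#include <time.h>

/* Sketch: instead of hoping an operation is fast, measure it and
 * report whether it stayed within a declared budget.  A SYMBAL-level
 * primitive could expose exactly this kind of bound, so a caller
 * knows whether it is dealing with microseconds or seconds. */
static int within_budget(void (*op)(void), double budget_seconds)
{
    clock_t start = clock();
    op();
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    return elapsed <= budget_seconds;
}

/* A deliberately cheap operation for demonstration. */
static void cheap_op(void)
{
    volatile int x = 0;
    for (int i = 0; i < 1000; i++) x += i;
}
```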
3.4 Poor/no parallelism. (piss-poor parallelism)!
A vital part of any system that is intended to survive into the
21st century is seamless (integral) parallel processing. For now this
may only be simulated, but it must be there, robust and functional
(see the comment on timing above).
3.5 DISgraceful degradation!
My opposite of graceful degradation [!]. NO task should be able
to crash the whole system. Period. (Try tinkering above 2GB in Win95;
try even _looking_ at Win 3.11 a bit skew.)
But even more than that, I think the tendency is often to tack on
some sort of primitive "what do we do with the error stubs now that
we've written the REAL code" approach, rather than designing the
whole system around the exceptions/errors that can possibly occur!!
Particular attention should go to stack overflows, attempted writes
to forbidden areas, even loading the wrong bloody register with the
"wrong" value (anyone who has played around with ASM under Win3.11
will know why my teeth are ground down to stubs).
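One hedged sketch of what "designing around the errors" could mean at the API level: every primitive returns an explicit status, so the error paths exist from the first line rather than being stubbed in afterwards (the names and error set here are illustrative, not from any real system):

```c
#include <assert.h>
#include <stddef.h>

/* Every operation reports how it went; "forbidden area" and
 * "would overflow" are first-class outcomes, not afterthoughts. */
typedef enum { OK, ERR_RANGE, ERR_NULL } status;

static status safe_store(int *buf, size_t len, size_t i, int value)
{
    if (buf == NULL) return ERR_NULL;   /* write to forbidden area */
    if (i >= len)    return ERR_RANGE;  /* would overflow buffer   */
    buf[i] = value;
    return OK;
}
```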
3.6 No "quantized damage"
Related to disgraceful degradation, above. I believe we need to
define a primitive unit
of "damage quantization" : if a total cockup occurs, damage is
limited to that unit, come-what-may (i.e. all communication with
that unit is via data streams, so the worst that can happen is that
the stream is corrupted/stops flowing). [This doesn't of course
protect us from bloody silly stream handling in the communicating
process; this is closely related to the concept of building the system
around errors that might occur].
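One hedged sketch of such a damage-quantized boundary, using a deliberately trivial additive checksum (purely illustrative, not a proposal for the actual stream format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* If all communication with a unit is a byte stream, the receiver
 * can at least detect corruption at the boundary: the worst outcome
 * is "message rejected", never "system crashed". */
static uint8_t checksum(const uint8_t *msg, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++) sum += msg[i];
    return sum;
}

/* Accept a message only if its trailing checksum byte matches
 * the sum of the bytes before it. */
static int stream_accept(const uint8_t *msg, size_t len)
{
    if (len < 2) return 0;
    return checksum(msg, len - 1) == msg[len - 1];
}
```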
But I don't know enough about your Scheme [etc.]. Maybe it already does all of this.
[You will appreciate that _I_ am biased by my previous experiences,
especially under Win=Loss & Dos, so I would appreciate comment].
>> That we all agree to, I think, except perhaps for the term
"high-level weenies". Most people rightfully don't have time
to bother about the low-level,
and we the OS hackers are here to relieve them from that hassle.
What if motor-car hackers decided that every car-driver
should be able to repair their car and fine-tune the motor ?
Are standard car-drivers "weenies" ?
What if cordon-bleus required every one to cook as well as they ?
A major part of the problem is the (in my mind totally unnecessary)
distinction between "people who don't have time to bother about the
lower level" and "OS hackers". As I mentioned above, this is a matter
of _perspective_. I see no reason why, in our system, _anyone_ should
not pick up a language tool, and click on some code to view it as
one of: 1. native assembler, 2. SYMBAL, 3. tinyC, 4. miniPascal,
5. newFortran, 6. SCHEMing, or whatever!
[and it is SO STRUCTURED that in EACH CASE it is legible and clearly
annotated - perhaps I'm dreaming].
Do you call in a cordon-bleu chef every time you want to boil an egg?
Do you need a qualified mechanic to check your radiator water?
There is a difference between being "relieved from hassle" and being
a vegetative blob.
>> Are bad cooks "weenies" ?
Bye for now.