JVS & Fare blather on..

Dr. J. Van Schalkwyk (External) SCHALKW@odie.ee.wits.ac.za
Mon, 8 Apr 1996 13:27:22 GMT+2


Dear Fare

In reply:-

>>
By writing "MOV AX,1", you preclude such further optimization.
<< 
No. It merely gives us a common level on which to work. For 
example, some processors provide a test-bit-and-reset instruction, 
others don't. Does this mean that we must have three SYMBAL instructions
(TESTBIT, RESETBIT, TEST&RESET), or never use the last one?
Of course not. The SYMBAL coding on any machine is e.g.
        TEST  foo,1     (or whatever)
        RESET foo,1     
The device-dependent level below this implements it any 
way it wants: the important thing is that it gives us a
COMMON REPRESENTATIONAL LEVEL on which to work. The
lower level need not implement things on a 1:1 basis!
(It might, initially.)
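
Purely to illustrate what I mean by "the lower level implements it any
way it wants", here is a toy sketch in C (the function names and the
TSTRST/TST/CLRB mnemonics are invented for the example, not taken from
any real instruction set or existing tool): the common level always
emits the TEST/RESET pair, and the device-dependent back end either
fuses the pair into one native test-and-reset instruction, or expands
it into two plain instructions, depending on what the target offers.

        /* Hypothetical sketch: lowering a common-level
           TEST foo,1 / RESET foo,1 pair onto two different targets. */
        #include <stdio.h>

        enum sym_op { SYMBAL_TEST, SYMBAL_RESET };

        struct sym_insn {
            enum sym_op op;        /* common-level operation          */
            const char *operand;   /* symbolic operand, e.g. "foo"    */
            int bit;               /* bit number being tested/cleared */
        };

        /* Device-dependent layer: fuse the pair where the target has a
           test-and-reset instruction, otherwise emit two instructions. */
        void lower_pair(const struct sym_insn *test,
                        const struct sym_insn *reset,
                        int target_has_test_and_reset)
        {
            if (target_has_test_and_reset) {
                printf("TSTRST %s,%d\n", test->operand, test->bit);
            } else {
                printf("TST    %s,%d\n", test->operand, test->bit);
                printf("CLRB   %s,%d\n", reset->operand, reset->bit);
            }
        }

        int main(void)
        {
            struct sym_insn t = { SYMBAL_TEST,  "foo", 1 };
            struct sym_insn r = { SYMBAL_RESET, "foo", 1 };
            lower_pair(&t, &r, 1);   /* target WITH test-and-reset    */
            lower_pair(&t, &r, 0);   /* target WITHOUT it             */
            return 0;
        }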

>>
[I assume "monitoring" means being able to tweak and modify things,
not just looking at how long it takes for the program to run].
<< 
No. Not immediately. I want 
(1) To be able to monitor and, within limits, predict performance
(in the final analysis: "is it worth running this program on
this machine, or should I just go and have a nice refreshing
cup of tea"), and
(2) Yes, it would be nice to be able to say "this strategy just
isn't efficient on this machine, let's try the following.." But
this is a secondary goal.


>>
Here is the next computing revolution.
The internet is the medium of fast (humanwise) world-wide 
communication.
So it vehiculates[?] information, not noise,
which is what consistency means;
we *need* such a metaprogramming environment.
<<
Hmm. Every time I potter around the internet I cringe. Many 
participants can't even spell, let alone communicate rational
thought. If this is the next revolution, include me out. My
"bricks of shit" metaphor is, I think, appropriate.


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Me>>
If we intend to produce a system which works and is stable in a
variety of different environments, we need some common ground from
which to move.
You>>
This "ground", to be, can only be the HLL. Because people
[snip, snip]
Me>>
Now you might argue that one could simply use e.g. C (or Scheme, or
whatever) as our SYMBAL.
You>> I won't. Firstly, C is a stupid language to me,
Scheme an exclusively high-level one.
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Whoopsie. Do I detect a contradiction here?
Do you agree that there must be some common level,
or are we back to the same unresolved argument of a year ago?
If NOT, I feel I have nothing to contribute.
If the common level is e.g. Scheme, then you presumably have to
create a virtual machine on EACH system that 
(a) uses Scheme as its "native" language;
(b) can perform ANY & EVERY O/S task in the context of (a);
(c) needs to be re-developed from scratch for every new
machine that comes along. I do not know much about Scheme
or its primitives, but I bet you are going to shit if you
try this one on!


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Me>> completely reproducible performance regardless of the 
system, **with the caveat** ..
You>> You may enforce performance *warranties* (subject to hardware
failure), valid assuming the corresponding resources can be reserved
for you, but don't expect "reproducible performance".
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Perhaps I phrased that badly. I am not so consummately naive as
to believe that a 6502 will perform like a Pentium (hence the
caveat). What I meant was that the two virtual machines will be
functionally identical (do the same thing) but not (of course)
necessarily in the same time! That's why I want the system to 
give me some indication of how long things will take! I also
want reasonable certainty that the bloody program will run in
the available memory.. (or not)!


>>
This, I agree, should be an option for the programmer,
one that has strong support. However, it should still be an option,
because most people just don't care about warranties,
which never speed up code, and only slow down compilation.
Also, please note that on computers with several levels of
dynamically controlled caches from the ALU to persistent memory
(registers; first-, second- and soon third-level RAM caches; DRAM;
I/O; disk), the performance of a single instruction is meaningless
(it varies from half a CPU cycle to the several milliseconds needed
to fetch data through all these levels of cache).
<<
1. More options, hmm? Mango or litchi-flavoured?
2. Some people are concerned about "warranties" (at least, the
chaps who designed Denver airport are probably a bit more concerned
than they were a few years ago). Consistency of performance is
an animal that I seldom encounter in a computer environment. I
think that many people would agree with this sentiment. The 
question is not whether people care about warranties, but whether
they *should* care about them! You are prepared to squander
vast resources (a la Win 95) in creating your magnificent meta-
programming environment, with little concern for reliability.
Aargh!
3. I am well aware of the performance variation that you
mention (this is WHY I whinge so much about being able to
predict the bloody thing). A single cycle is often meaningless,
but that doesn't mean that you can't sit down and optimally
stuff instructions through your Pentium pipes, and get a
fairly reliable prediction of how things will perform.
What I want to do is make such performance assessment
integral and VISIBLE WHEN YOU NEED IT, CROSS-PLATFORM.
Even a wide range is better than NO IDEA of performance!
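
To make that concrete, here is a toy sketch in C with entirely invented
latency figures (nothing below is measured from any real machine): even
if a single load can cost anything from one cycle to a disk access, you
can still bound the cost of N loads between a best case and a worst
case, and even that crude bracket beats having no figure at all.

        /* Toy best/likely/worst-case bound for n memory loads, using
           hypothetical per-level latencies in CPU cycles. */
        #include <stdio.h>

        #define HIT_CYCLES        1        /* register/cache hit (invented) */
        #define MISS_TO_RAM      50        /* miss serviced from DRAM       */
        #define MISS_TO_DISK 5000000       /* miss serviced from disk       */

        void bound_loads(long n)
        {
            double best   = (double)n * HIT_CYCLES;    /* all hits        */
            double likely = (double)n * MISS_TO_RAM;   /* crude mid guess */
            double worst  = (double)n * MISS_TO_DISK;  /* all paged out   */
            printf("%ld loads: best %.0f, likely %.0f, worst %.0f cycles\n",
                   n, best, likely, worst);
        }

        int main(void)
        {
            bound_loads(100000L);
            return 0;
        }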

>>
Actually, performance control becomes very interesting in the context
of embedded chips, where timings are much more important, yet simple
to know and guarantee.
   Of course, only high-level symbolic tools, able to do complicated
code manipulations, can be used to "monitor performance" that precisely,
and then you can use them to program mostly at a high level.
<<
So you are quite happy to take e.g. a 50% performance hit (or more)
in implementing your magnificent metaprogramming monster, but not
(let us say) a 5% hit in achieving a (contextually appropriate)
assessment (or estimate) of performance?


>>
it is possible to develop extensions towards the low-level for Scheme,
so that you could control things down to the kth bit of any register
at some stage of CPU pipelining (now whether one would want to go
that far is another question).
<<
Woops. Let's get this straight. No "SYMBAL", and now you are
manipulating register bits reliably on a variety of machines?



>>
though statistical models may sometimes allow fair monitoring.
<<
Yep. Why not?


>> Are you a specialist of everything?
Do you think everyone should become a specialist of everything?
Do you think this is feasible?
Do you think this is useful?
[sermon deleted].
<<
I think that we have fundamentally different philosophies.
I doubt whether we will reconcile these! But, for the sheer
hell of it, here's my little sermon:

Yes. I'd certainly like to be a "specialist (or perhaps a
generalist) of everything"! I believe that true progress
often comes from people who sit on the borders of two or
several fields, rather than from people who progressively
submerge themselves in the tiny details of a subfield
of a subspecialty of a subdiscipline of..
I do not see computers as tools that will allow us to
isolate ourselves from (to use your metaphor) cooking
or motor car repair, but as agents that will allow us
to integrate and improve our skills, be they sex,
skydiving, computer programming or cooking! (I am not,
I hasten to add, suggesting that such activities be
performed simultaneously!)

I do not see this happening. What I DO see is fragmentation,
with ever-increasing complexity. I see (to paraphrase
your statement):

" technicians [such as those at MS] who arrange things 
so that normal people *need* know as little as possible",
by making things so unnecessarily fucking complex and
multi-layered, that they hardly know what the hell is
going on either. I also see this as a deliberate strategy
to gain control over people and their behaviour (Or
am I just paranoid?)

I see arbitrary "standards" that are designed by committees
of camels, that have no real-life relevance, and that
either go belly-up after a few years or, more often,
become extensively modified with pleats, frills and
ruffs, added by the dressmakers who move in after the

Oh yes. And I see a hell of a lot of lazy people who
are quite prepared to spend more time getting out of
doing a task than they would have spent getting
off their backsides and actually doing it! The "seconds
that technology saves them" are used up in watching
television.

In summary, computers should allow us to gain a
more comprehensive grasp of everything, not just
ONE thing.

[Here endeth the first lesson].

Cheers

JVS.


P.S. I am *horrified* that you, a Frenchman, can
extol the virtues of microwave cooking in preference
to conventional methods. Microwave ovens are great -
for heating things up. Period.