JVS & Fare blather on..

Francois-Rene Rideau rideau@ens.fr
Mon, 8 Apr 1996 17:14:05 +0200 (MET DST)


>> By writing "MOV AX,1", you preclude such further optimization.
> No. It merely gives us a common level on which to work. For
> example, some processors allow a test bit & reset, others
> don't. Does this mean that we must have three SYMBAL instructions
> (TESTBIT, RESETBIT, TEST&RESET), or never use the last one?
> Of course not. The SYMBAL coding on any machine is e.g.
>         TEST  foo,1     (or whatever)
>         RESET foo,1
> The device-dependent level below this implements this any
> way it wants: the important thing is that it gives us a
> lower level that need not implement things on a 1:1 basis!
> (It might, initially).
   Then what does this bring that low-level primitives in an HLL
wouldn't?
   Register architectures vary enormously from machine to machine:
there are varying numbers of registers, register windows or banks,
specialized registers; zero-, one-, two-, or three-register instructions
depending on the CPU and the instruction; sometimes several CPU modes
governing all that; sometimes only a stack architecture; and, given the
instruction set, instruction scheduling and timing may be completely
different across CPU models.
   You just can't do it right in a *portable* way,
unless you leave most things unspecified,
let the language adapt to the available instruction set,
choose the exact register-wise behavior,
and schedule instructions properly,
in which case you have exactly what I always proposed,
that is, low-level primitives in a high-level language.
And once you have these, you'll surely get much better speed improvements
by writing better algorithms and meta-algorithms (a.k.a. compiler tactics)
than by losing endless hours manually tweaking performance.

   Of course, the development environment may offer progressive refinements
toward the low level, beginning with specialization toward
CPUs with n-register ALU instructions for n=0..3,
then for particular CPU families like SPARC, MIPS, Alpha, PPC, 68K, x86,
then for individual models like the v7, v8, SuperSPARC, etc.
   Yes, there *should* eventually be such refinements,
but they just *can't* be portable!
Portability isn't for the low level. Believers in it have tried
many completely different things, from C to the Java VM to the TAOS VM;
but it is altogether a delusional dream.
The low level should strive for efficiency, and for that alone.

>> [I assume "monitoring" means being able to tweak and modify things,
>> not just looking at how long it takes for the program to run].
> No. Not immediately.
   Then monitoring is altogether useless.
Information exists only where there is feedback.
If you can't tweak things based on the monitor's output,
you gain nothing over simply running the unmonitored program;
you only lose time and nerve bandwidth.

> I want
> (1) To be able to monitor and within limits predict performance
> (in the final analysis: "is it worth running this program on
> this machine, or should I just go and have a nice refreshing
> cup of tea"), and
   Current profilers already do much more than that.

> (2) Yes, it would be nice to be able to say "this strategy just
> isn't efficient on this machine, let's try the following.." But
> this is a secondary goal.
   This is the one and only point of interest (an important one,
I grant) in performance monitoring.

>> Here is the next computing revolution.
>> The internet is the medium of fast (humanwise) world-wide 
>> communication.
>> So it conveys information, not noise,
>> which is what consistency means:
>> we *need* such a metaprogramming environment.
> Hmm. Every time I potter around the internet I cringe. Many
> participants can't even spell, let alone communicate rational
> thought. If this is the next revolution, include me out. My
> "bricks of shit" metaphor is, I think, appropriate.
   If you believe that the Internet achieves only
what its worst member achieve, then you are
"putting your finger in your own eye" as we say in France.
The Galilei, Descartes, Newton, Darwin, etc,
always have been a small portion of population.
Yet it is the ability to communicate better and faster
that made them able to build from each other's work,
instead of beginning everything from scratch everytime,
like our distant ancestors had to do.

>>> If we intend to produce a system which works and is stable in a
>>> variety of different environments, we need some common ground from
>>> which to move.
>> This "ground", to me, can only be the HLL. Because people
>> [snip, snip]
>>> Now you might argue that one could simply use e.g. C (or Scheme, or
>>> whatever) as our SYMBAL.
>> I won't. Firstly, C is a stupid language to me,
>> Scheme an exclusively high-level one.

> Whoopsie. Do I detect a contradiction here?
   No, you just misunderstand.

> Do you agree that there must be some common level
> or are we back to the same unresolved argument of a year ago?
   There must be common levels, ways to communicate, etc.
That's precisely what LLL and HLL compiler development are about.

> If the common level is e.g. Scheme,
   Here is your misunderstanding. I was precisely saying that Scheme
couldn't be the common way to do low-level programming,
because, as it exists and is standardized, it is
an exclusively high-level programming language
(surely I should have said "abstract" instead of "high-level").
Which doesn't mean it couldn't be extended with proper
low-level primitives.

> then you presumably have to
> create a virtual machine on EACH system that
> (a) uses Scheme as its "native" language;
> (b) can perform ANY & EVERY O/S task in the context of (a);
   Of course I'll have to do that (even if it's not Scheme) anyway.
This is exactly what "porting" means.

> (c) Needs to be re-developed from scratch for every new
> machine that comes along. I do not know much about Scheme
> or its primitives, but I bet you are going to shit if you
> try this one on!
   And why should it be done from scratch???
Sure, the more heterogeneous the platforms
(e.g. Alpha vs x86 vs a cluster of P32 chips),
the more there is to rewrite.
But everything that was written can be reused
in proportion to how high-level it was written.
Of course, the low-level optimized versions cannot be reused,
but they couldn't have been anyway.
Let portable high-level versions be portable,
and efficient low-level versions be efficient,
and let's not try to make one into the other!

>>> completely reproducible performance regardless of the 
>>> system, **with the caveat** ..
>> You may enforce performance *warranties*
>> (subject to hardware failure),
>> which will be valid assuming the corresponding resources
>> can be reserved for you,
>> but don't expect "reproducible performance".
> Perhaps I phrased that badly. I am not so consummately naive as
> to believe that a 6502 will perform like a Pentium (hence the
> caveat). What I meant was that the two virtual machines will be
> functionally identical (do the same thing) but not (of course)
> necessarily in the same time! That's why I want the system to
> give me some indication of how long things will take! I also
> want reasonable certainty that the bloody program will run in
> the available memory.. (or not)!
   "Give some indication of how long things will take"
means that all your VM implementations must follow the same model.
But you can't expect that:
a cluster of MISC CPUs, each with a small memory,
just can't efficiently emulate a von Neumann machine
with a large linear memory;
yet if you compile specifically for it,
it can be a hundred times faster.
I have already said what I think about VMs:
nice for portability, worth nothing for performance;
trying to base low-level programming on a VM is precisely what hampers
further progress in computer hardware, particularly parallelism.
   The system shouldn't give many performance warranties in any
portable way. However, by specifically limiting the range of
machines on which you run, you can indeed try to get some performance
warranties. But this needs no deep OS support, just the ability for the
OS to consistently attach such information to objects,
which is precisely the role of the HLL.

>> This, I agree, should be an option to the programmer,
>> that has strong support.
I repeat: strong support, not deep, built-in, OS-specific support.

>> However, it should still be an option,
>> because most people just don't care about warranties,
>> which never speed up code, and only slow down compilation.
>> Also, please note that on computers with several levels of
>> dynamically controlled caches between the ALU and persistent memory
>> (registers; first-, second-, and soon third-level RAM caches; DRAM;
>> I/O; disk),
>> the performance of a single instruction is meaningless (it varies
>> from half a CPU cycle to the several milliseconds needed to fetch
>> data through all these levels of cache).

> 1. More options, hmm? Mango or litchi-flavoured?
If those are the only options you know...

> 2. Some people are concerned about "warranties" (at least, the
> chaps who designed Denver airport are probably a bit more concerned
> than they were a few years ago). Consistency of performance is
> an animal that I seldom encounter in a computer environment.
> I think that many people would agree with this sentiment.
This is a shame indeed. But specific OS support is not what's needed,
still less in a pseudo-portable way.

> The question is not whether people care about warranties,
> but whether they *should* care about them!
   To most people, working in an interactive environment,
the best the computer can give is to actually do the job;
they just don't care about its promising to be finished soon,
because however long it takes, they'll have to wait,
so it had better not lose time computing how long it will take,
but use that time to actually do things faster.
   Those who care about warranties are those
who run non-interactive computers, which is very important indeed,
but that is not everybody, everywhere, every time.

> You are prepared to squander
> vast resources (a la Win 95) in creating your magnificent meta-
> programming environment, with little concern for reliability.
> Aargh!
   If you believe that, you can't have understood much about this project.

> 3. I am well aware of the performance variation that you
> mention (this is WHY I whinge so much about being able to
> predict the bloody thing. A single cycle is often meaningless,
> but that doesn't mean that you can't sit down and optimally
> stuff instructions through your pentium pipes, and get a
> fairly reliable prediction of how things will perform.
> What I want to do is make such performance assessment
> Even a wide range is better than NO IDEA of performance!
   I have always completely agreed with that
(and have been the only one to express agreement, by the way),
so why do you shout at me!?
   My point is, again, that portable performance
can all be done and warranted much better in the HLL,
and that LLL monitoring and tweaking is useful
only when you're ready to make *non-portable* assumptions.

>> Actually, performance control becomes very interesting in the context
>> of embedded chips, where timings are much more important, yet simpler,
>> to know and warrant.
>>    Of course, only high-level symbolic tools, able to do complicated
>> code manipulations, can be used to "monitor performance" that
>> precisely,
>> and then, you can use them to program mostly in high level.

> So you are quite happy to take eg a 50% performance hit (or more)
> in implementing your magnificent metaprogramming monster, but not
> (let us say) a 5% hit in achieving a (contextually appropriate)
> assessment (or estimate) of performance?
   I'm not *happy* to take *any* performance hit.
And I have always said that:
* I believe the metaprogramming "monster" is a pure performance *gain*
for interactive use;
* the system should be able to strip itself,
so that nothing in the OS is a required monolithic part,
certainly not the full metaprogramming environment;
* performance assessment should be *possible*,
but not required, as even those 5% are worth not losing;
* it is a VM architecture that would be a 50% performance hit
(99% on parallel architectures);
* "portable performance warranties" are mostly a dream,
and only an unconstrained HLL, not a VM,
could bring something resembling them.

>> it is possible to develop extensions towards the low-level for Scheme,
>> so that you could control things down to the kth bit of any register
>> at some stage of CPU pipelining (how far one would want to go
>> is another question).
> Woops. Let's get this straight. No "SYMBAL", and now you are
> manipulating register bits reliably on a variety of machines?
   Those extensions would be precisely a kind of SYMBAL.
Only they are not a globally valid VM model for the system,
but progressive refinements toward specific assemblers.
Using a portable HLL doesn't mean you can't make unportable assumptions;
it means you're free to control exactly how (un)portable you will be.

>> though statistical models may sometimes allow fair monitoring.
> Yep. Why not?
I never said otherwise; I was precisely proposing it.
Those models are something particularly high-level,
and an instruction-wise VM would bring nothing to them.

>> Are you a specialist in everything?
>> Do you think everyone should become a specialist in everything?
>> Do you think this is feasible?
>> Do you think this is useful?
>> [sermon deleted].

> I think that we have fundamentally different philosophies.
> I doubt whether we will reconcile these! But, for the sheer
> hell of it, here's my little sermon:
> Yes. I'd certainly like to be a "specialist (or perhaps a
> generalist) of everything"!
Yes, and I'd like to be God, worshipped by all humanity.
But it's altogether meaningless, of course.

> I believe that true progress
> often comes from people who sit on the borders of two, or
> several fields, rather than people who progressively
> submerge themselves in the tiny details of a subfield
> of a subspecialty of a sub discipline of..
Sure, inspiration comes only to open minds.
Still, progress is synonymous with specialization.
The individual human mind and life are limited,
incomparably more so than total human knowledge,
so you can't progress without specializing,
and without being relieved of lower tasks by technology.

> I do not see computers as tools that will allow us to
> isolate ourselves from (to use your metaphor) cooking
> or motor car repair,
Tools do not *isolate* anyone from others!!!
Quite the contrary: they are the means by which people help each other.
It was in prehistoric times, when everyone had to build their own tools
and find their own food, that people were isolated from each other.

> but as agents that will allow us
> to integrate and improve our skills, be they sex,
> skydiving, computer programming or cooking! (I am not,
> I hasten to add, suggesting that such activities be
> performed simultaneously!)
It's precisely because it relieves us of lower tasks
that technology allows us to improve our skills.

> I do not see this happening. What I DO see is fragmentation,
> with ever-increasing complexity.
Sure, you see the increasing noise,
but you don't see the increasing information beneath it,
which is what really counts.
I'm surely not working FOR noise,
but I'm not on a crusade against it either;
I'm FOR information.

> I see (to paraphrase your statement):
> " technicians [such as those at MS] who arrange things 
> so that normal people *need* know as little as possible",
Whom are you trying to fool with this MS comparison?
I was never an admirer of those technicians, and all TUNESers know it.
I was talking about the role of a technician,
and the existence of some who work against their duty
does not prove anything.
And even those MS guys provided the world with easy-to-use
WYSIWYG editors, databases, etc.
(though at the cost of better and cheaper software).
So even if MS is a harmful company,
it is part of a useful general progress of technology.

> by making things so unnecessarily fucking complex and
> multi-layered, that they hardly know what the hell is
> going on either. I also see this as a deliberate strategy
> to gain control over people and their behaviour (Or
> am I just paranoid?)
If you think that computers are worse than they were twenty years ago,
feel free to use a PDP-11.
As for me, I just ignore whatever useless hardware/software
complexity there is, and treat it as the overhead
of using better, faster computerware.
Surely my vacuum cleaner is sometimes uselessly complex,
but if I ignore the useless options, it is still much, much better
than any broomstick.

> I see arbitrary "standards" that are designed by committees
> of camels, that have no real-life relevance, and that
> either go belly-up after a few years, or, more often
> become extensively modified with pleats, frills and
> ruffs, added by the dressmakers who move in after the
> camels have left.
Sure, the world is not perfect. So what?
Will you, like Robespierre or Pol Pot,
mass-kill all unvirtuous people to make a new mankind?
Just ignore the noise, and contribute information.

> Oh yes. And I see a hell of a lot of lazy people who
> are quite prepared to spend more time getting out of
> doing a task than they would have spent getting
> off their backsides and actually doing it! The "seconds
> that technology saves them" are used up in watching
> television.
   If you despise technology, I invite you to live naked
and without tools, and try to survive for a few days.
Surely just living without electricity, or even just without
electric light, will be enough (special kudos to Thomas Edison).
Why not just *buy* a candle and light it?
Then why not *make* the candle?
Why not go extract the paraffin and other raw materials yourself?
It's so much easier, better, saner to be thus "natural"
than to use technology (which, as everyone knows, takes its
laws and materials from outside nature, with the help of black magic).

> In summary, computers should allow us to gain a
> more comprehensive grasp of everything, not just
> ONE thing.
That's precisely what I'm saying.
Don't force everyone to learn computer techniques;
make the computer a tool that helps them in whatever activities they have.

> [Here endeth the first lesson].
On with the second, please!

> P.S. I am *horrified* that you, a Frenchman, can
> extol the virtues of microwave cooking in preference
> to conventional methods. Microwave ovens are great -
> for heating things up. Period.
Why "in preference to"?
I love good cooking, and know it from long experience;
still, I recognize the extraordinary virtues of microwave ovens
as a very practical way to heat liquids and leftover dishes,
or occasionally frozen food, when time is lacking to prepare better food.
Time is lifebits, and good food is only *one way* to fill them.

--    ,        	                                ,           _ v    ~  ^  --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
--                                      '                   / .          --
Join the TUNES project for a computing system based on computing freedom !
		   TUNES is a Useful, Not Expedient System
WWW page at URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"