Proposals
Jason Marshall
jason@george.localnet
Wed, 28 Jun 2000 16:16:52 -0700 (PDT)
> David Hilvert wrote: "I must say that I am somewhat
> skeptical of the practicality of generating all
> possible implementations of a specification.
> Perhaps you could clarify what you mean or go into
> more detail. I may have misread what you wrote."
>
> Apparently skeptics abound.<g> While I think I
> have addressed this in another response, let me
> restate it here. The only time that this occurs is
> when translating from the lowest (raw material)
> level of the machine-independent specifications
> into their machine-dependent ones: machine code
> generation.
What you are describing is known as a 'brute force' method. All
combinations are tested, looking for the optimal (or sometimes,
the only) solution. Perhaps you are familiar with Distributed.net?
Using computing power equivalent to 60,000 reasonably new
desktop machines, they are brute-force testing a piece of
information only 8 bytes long against another piece of
information only a couple of kilobytes long, looking for the
single solution that is the correct one.
They've been running their calculation for 32 months.
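To give a sense of scale, here's a toy brute-force keysearch sketch
in Python (my own illustration; nothing like it appears in this
thread). The toy key is a single byte, so the loop finishes
instantly; the real RC5-64 search has to walk a 2^64 key space.

    # Toy brute-force keysearch (hypothetical example, not Distributed.net's
    # actual client). The key here is one byte, so the search is trivial;
    # the real RC5-64 effort must cover 2**64 possible keys.
    SECRET_KEY = 0x5A
    PLAINTEXT = b"known plaintext"
    CIPHERTEXT = bytes(b ^ SECRET_KEY for b in PLAINTEXT)

    def brute_force(ciphertext, known_plaintext):
        # Try every candidate key until decryption matches the known plaintext.
        for key in range(256):
            if bytes(b ^ key for b in ciphertext) == known_plaintext:
                return key
        return None

    print(hex(brute_force(CIPHERTEXT, PLAINTEXT)))  # 0x5a
    print(f"Full RC5-64 key space: {2**64:,} keys")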
> In truth that's what occurs in every compiler and
> interpreter today except that their generated code
> comes from a discovery process outside the system.
No, it isn't. Read on.
> The point is that their discovery process is not
> exhaustive (even though for some in the profession
> it has evolved over quite a period of years). What
> they discover then is more related to chance, to
> art, than to science.
There's a reason why they do this. They use rules of thumb,
otherwise known as heuristics. For every good decision these
rules of thumb eliminate, they eliminate thousands of bad
decisions. Even then, they apply these heuristics only to small
blocks of code, and it still takes that long. Since the blocks
are small, any gross algorithmic efficiency errors made by
the application programmer will merely be spackled over, not
removed entirely.
Are you familiar with the concepts surrounding the calculation
of the Order of Complexity of an algorithm? It is my suspicion
that you are not, and that this is the primary source of the
difficulty in getting you onto the same page with the folks
who have challenged your statements.
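To make the scaling point concrete, here's a back-of-the-envelope
sketch in Python (my own numbers, purely illustrative): a heuristic
pass touches each instruction a constant number of times, roughly
O(n), while exhaustively enumerating every candidate sequence of
length n drawn from an instruction set of size k is O(k^n).

    # Illustrative growth comparison (assumed figures, not measurements).
    k = 100   # assume roughly 100 usable instruction choices per slot
    for n in (5, 10, 20, 40):
        heuristic = n           # O(n) work units for a rule-of-thumb pass
        exhaustive = k ** n     # O(k**n) candidate sequences to test
        print(f"block of {n:2d} instructions: heuristic ~{heuristic}, "
              f"exhaustive ~{exhaustive:.3e} candidates")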
> IBM, for example, has its compilers (C/C++, Cobol,
> PL/I) translate the source into a common
> "intermediate", machine-independent code.
It is my understanding that virtually all modern compilers
perform this transformation.
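For illustration only (this is not IBM's intermediate code, just a
generic three-address style I'm assuming for the example), here's a
minimal Python sketch of lowering one source statement into a
machine-independent intermediate form:

    # Minimal sketch: lower "a = b + c * d" into three-address intermediate
    # code. Purely illustrative; real compiler IRs carry far more detail.
    temp_count = 0

    def new_temp():
        global temp_count
        temp_count += 1
        return f"t{temp_count}"

    def lower(dest, expr):
        # expr is either a variable name or a tuple (op, lhs, rhs)
        if isinstance(expr, str):
            return expr, []
        op, lhs, rhs = expr
        l, code_l = lower(None, lhs)
        r, code_r = lower(None, rhs)
        t = dest or new_temp()
        return t, code_l + code_r + [(op, l, r, t)]

    _, code = lower("a", ("+", "b", ("*", "c", "d")))
    for op, l, r, t in code:
        print(f"{t} := {l} {op} {r}")
    # prints:
    #   t1 := c * d
    #   a := b + t1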
> That they don't have an exhaustive true/false
> process accounts for changes that do occur in their
> optimizing choices over time.
Some bad heuristics are replaced with better ones, and
vice versa. *nods*
> The question I would
> raise is "Why is the current system, which is so
> expensive in terms of money and time, regarded as
> 'practical'?"
Because the order of complexity of the alternative makes
people quake in their boots, laugh nervously, or roll around
on the floor giggling insanely and pointing in your general
direction? It's a mindbogglingly large calculation.
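For a rough sense of how large (again, my own back-of-the-envelope
figures): even enumerating every possible byte sequence of a modest
length dwarfs the 2^64 key space that Distributed.net has been
grinding through for 32 months.

    # Rough scale of "generate all possible implementations" (assumed figures).
    KEYSEARCH_SPACE = 2 ** 64          # the RC5-64 space mentioned above
    for nbytes in (8, 16, 32):
        candidates = 2 ** (8 * nbytes) # every possible code block of that size
        ratio = candidates / KEYSEARCH_SPACE
        print(f"{nbytes:2d}-byte code block: {candidates:.3e} candidates "
              f"(~{ratio:.1e} x the RC5-64 search)")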
> It will take science a while to accept that the
> world is made of objects and not processes.
I thought the particle/wave debate died years
ago, with the dualists winning?
Have fun,
Jason Marshall