CORE CODE!
Raul Deluth Miller
rockwell@nova.umd.edu
Mon, 19 Dec 1994 12:41:17 -0500
J. Van Sckalkwyk (on memory regions):
The basic idea is that if you have a "variable" (eg fred) it has an
attached "value" or "meaning". The value is a 32 bit number that
can be _any_ one of the primitive data types. All references to
"fred" in our interpreted language are simply pointers to the
location in memory that contains this value.
Run-time polymorphism is a very useful productivity tool. However, it
carries a non-trivial execution-time penalty, and I don't think we
want to force all applications to incur that kind of penalty.
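For concreteness, here's a rough C sketch of what such a tagged
"value" and the dispatch it forces on every operation could look like.
This is purely illustrative (the names Value, Tag, and add_values are
mine, not anything proposed above); the point is that every add pays
for a tag check:

    #include <stdio.h>

    /* A "variable" like fred would be a pointer to one of these cells:
       a tag plus a 32-bit payload that can hold any primitive type.   */
    typedef enum { TAG_INT, TAG_FLOAT } Tag;

    typedef struct {
        Tag tag;
        union { int i; float f; } as;
    } Value;

    /* Every operation has to inspect the tags at run time --
       this branching is the execution-time penalty in question. */
    static Value add_values(Value a, Value b)
    {
        Value r;
        if (a.tag == TAG_INT && b.tag == TAG_INT) {
            r.tag = TAG_INT;
            r.as.i = a.as.i + b.as.i;
        } else {
            float x = (a.tag == TAG_INT) ? (float)a.as.i : a.as.f;
            float y = (b.tag == TAG_INT) ? (float)b.as.i : b.as.f;
            r.tag = TAG_FLOAT;
            r.as.f = x + y;
        }
        return r;
    }

    int main(void)
    {
        Value fred, pi, sum;
        fred.tag = TAG_INT;   fred.as.i = 2;
        pi.tag   = TAG_FLOAT; pi.as.f   = 3.14f;
        sum = add_values(fred, pi);
        printf("%f\n", sum.as.f);   /* prints 5.140000 */
        return 0;
    }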
> Moreover, I'd tend to prefer an independent stack for each primitive
> data type, and in addition to basic instructions (dup, add, ...) for
> each stack, you need a complete set of movement instructions (e.g.
> copy from integer stack to fp stack, copy from program stream to
> character stack (do we want to bother with a character stack?)), as well
> as a set of instructions for manipulating arbitrary data structures
> which mix data types (e.g. "memory").
This may have merit, but sounds awfully complex to me. Do you think
that your approach (outlined above) will have major performance
benefits?
Yes. :-)
Seriously though, I think that a "stack" is a good alternative to a
"bank of registers". Today, I'm thinking along the lines of the Forth
model (two integer stacks, and an fp stack), but I can't seem to get
away from the fact that for certain kinds of data manipulation it's
really nice to have bit-level control over a large region of memory
(the RS/6000 instruction set seems to have some of the facilities I
think would be desirable, but I'm still struggling to see the larger
picture).
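To make the stack model concrete, here's a minimal sketch of separate
typed stacks in C, plus one "movement" instruction that copies from
the integer stack to the fp stack. The names, sizes, and instruction
set here are my own invention for illustration, not a proposal;
Forth's return stack would just be a third array alongside these two:

    #include <stdio.h>

    #define STACK_DEPTH 64

    /* One array per primitive type. */
    static long   dstack[STACK_DEPTH];  static int dsp = 0;  /* integer stack */
    static double fstack[STACK_DEPTH];  static int fsp = 0;  /* fp stack      */

    static void   dpush(long x)   { dstack[dsp++] = x; }
    static long   dpop(void)      { return dstack[--dsp]; }
    static void   fpush(double x) { fstack[fsp++] = x; }
    static double fpop(void)      { return fstack[--fsp]; }

    /* Basic per-stack instructions. */
    static void d_add(void) { long b = dpop();   dpush(dpop() + b); }
    static void f_add(void) { double b = fpop(); fpush(fpop() + b); }

    /* A "movement" instruction: integer stack -> fp stack. */
    static void d_to_f(void) { fpush((double)dpop()); }

    int main(void)
    {
        dpush(3); dpush(4); d_add();   /* integer stack now holds 7 */
        d_to_f();                      /* move it to the fp stack   */
        fpush(0.5); f_add();
        printf("%g\n", fpop());        /* prints 7.5 */
        return 0;
    }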
And then there's graphics. Graphics applications have not yet
achieved "representation stability" -- it's still hard to map between
the presentation capabilities of graphics hardware and the sorts of
general purpose computation represented in the advancing state of the
art for graphics. It's a trivial problem for the case of a single
hardware platform with a handful of graphics adaptors.
Graphics issues seem to be:
Color selection. This has been a very hard problem since long before
computer graphics; Pantone survives by selling references for color
selection. Physics gives us some reproducible means for selecting
colors, but these aren't all that available as general purpose
computer hardware.
Display geometry. We reduce the complexity here if we have a
"rectangular" pixel map (as opposed to vector-graphics or some other
such alternative). There's still the matter of x-scale, y-scale, and
the mapping between memory addresses and pixel position. This can
interact with color selection if, for example, you have 4 bit pixels
or some other configuration where there's not a one-to-one
correspondence between pixels and memory addresses.
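As an illustration of that last point: at 4 bits per pixel, a pixel
position splits into a byte address plus a nibble select. The packing
order, stride, and names below are assumptions of mine, not a
description of any particular adaptor:

    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH  320
    #define HEIGHT 200
    #define STRIDE (WIDTH / 2)   /* bytes per scan line at 4 bits/pixel */

    static uint8_t framebuffer[STRIDE * HEIGHT];

    /* Two pixels share each byte, so (x, y) maps to a byte address
       plus a choice of high or low nibble. */
    static void put_pixel(int x, int y, uint8_t color /* 0..15 */)
    {
        uint8_t *p = &framebuffer[y * STRIDE + x / 2];
        if (x & 1)
            *p = (uint8_t)((*p & 0xF0) | (color & 0x0F));  /* odd x: low nibble   */
        else
            *p = (uint8_t)((*p & 0x0F) | (color << 4));    /* even x: high nibble */
    }

    int main(void)
    {
        put_pixel(10, 5, 12);
        printf("%02X\n", framebuffer[5 * STRIDE + 5]);   /* prints C0 */
        return 0;
    }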
Timing. Real-time graphics is older than the personal computer but is
still bleeding edge. Basically, race conditions and differential
updates seem to be hard for people to grapple with.
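One common way around that race is to draw each frame into an
off-screen back buffer and then publish it in a single step. This is a
generic sketch, not tied to any hardware; wait_for_vblank is a stub
standing in for whatever the adaptor actually provides:

    #include <stdint.h>
    #include <string.h>

    #define FB_BYTES (320 * 200)   /* one byte per pixel, for simplicity */

    static uint8_t front[FB_BYTES];   /* what the display scans out   */
    static uint8_t back[FB_BYTES];    /* what the program draws into  */

    static void wait_for_vblank(void) { /* hardware-specific; stubbed */ }

    static void present(void)
    {
        wait_for_vblank();              /* don't race the display refresh  */
        memcpy(front, back, FB_BYTES);  /* publish the whole frame at once */
    }

    int main(void)
    {
        memset(back, 0x0F, FB_BYTES);   /* draw a frame off-screen... */
        present();                      /* ...then make it visible    */
        return 0;
    }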
It's easy to design a system which doesn't communicate with any
interesting hardware. It's easy to design a system which interacts
with one specific instance of interesting hardware. It's tough to design
an efficient system for interacting with many kinds of interesting
hardware. Which is why we invent computer programming languages -- to
allow us to map general solutions onto a variety of hardware.
--
Raul D. Miller N=:((*/pq)&|)@ NB. public e, y, n=:*/pq
<rockwell@nova.umd.edu> P=:*N/@:# NB. */-.,e e.&factors t=:*/<:pq
1=t|e*d NB. (,-:<:)pq is four large primes, e medium
x-:d P,:y=:e P,:x NB. (d P,:y)-:D P*:N^:(i.#D)y [. D=:|.@#.d