Timecapsule to the past...

Tom Novelli tcn@clarityconnect.com
Wed, 14 Jul 1999 19:15:48 -0400


On Sun, Jul 11, 1999 at 05:34:00PM -0400, Srstanek@aol.com wrote:
> 
> >  1. I would have stuffed a million dollars in the time capsule and sent
> >  it to IBM and told them: DO NOT USE THE X86 CHIP!!!!!!
> 
> Why? The x86 chip is very good. It is crappy programmers that make crappy 
> programs.

Sure, when the 8086 came out in 1978, being able to address 1 MB was
pretty good.. and 64k segments weren't much of a limitation at the time;
DEC's LSI-11, for example, could only address 64k *total*.. and its
registers were similar to the 8086's (which was probably based on older DEC
machines).  Once you got the hang of it, it was all right.
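
For anyone who never fought with real mode, here's a quick C sketch of the
segment arithmetic (the shift-by-4 rule is the standard 8086 one; the
example addresses are just illustrations):

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode 8086: physical = segment * 16 + offset, which yields a
     * 20-bit (1 MB) address space built from overlapping 64k segments. */
    static uint32_t phys_addr(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Two different seg:off pairs can name the same physical byte. */
        printf("%05X\n", (unsigned)phys_addr(0xB800, 0x0000)); /* B8000 */
        printf("%05X\n", (unsigned)phys_addr(0xB000, 0x8000)); /* B8000 again */
        return 0;
    }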

Then there was the Motorola 68000, released in 1979.  The IBM PC was
already in development by then, and it was almost done (it was released in
August 1981).  The 68k was fully 32-bit internally, and it had a nice,
uniform instruction and register set (it was more RISC than most "RISC"
machines).  It was probably pretty expensive in the early 80's, and it was
also overkill.  One of the first major computers to use it was the
Macintosh, in 1984 (remember how expensive they were?).

The 80286 was a half-assed improvement, but the 386 put Intel at the same
level as Motorola.. it could compete with the 68030.. it's not as nice, but
it's nice enough.  Compared to the other hardware used in IBM PCs and
clones, it rocks.  Right now, let's see.. you can get a 100 MHz 68060, or a
700+ MHz 80x86 or clone, for about the same price, I believe.

One more thing...

Contrary to popular opinion, "use more bits" ain't the answer!
Multi-precision arithmetic is simple to do on any computer.  An 8-bit 6800
can do any math a 64-bit Alpha can do; it'll just take longer.  The Alpha is
just doing more of it in parallel.. but this is a pretty crude way to obtain
parallelism; it's inflexible.  The Alpha allows 48 address bits, so it can
address a 256-terabyte linear memory area, while the 6800's direct
addressing mode only reaches a 256-byte chunk or "page".. now, there were
plenty of 6800 machines with more than 256 bytes of RAM... you could say
they used "multi-precision addressing".  The 6800 has an advantage: memory
references use less memory!  That's nothing profound.. I'm just saying we
need to get used to variable-length references, at least in general-purpose
computers.  It'll need some major redesign of hardware and software, though.
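
To make that concrete, here's what 64-bit addition looks like when you do
it the way an 8-bit CPU has to: one byte at a time, carrying by hand (on
the 6800 this is literally a chain of ADC, add-with-carry, instructions;
the C below is just a sketch of the same idea):

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 64-bit numbers stored as arrays of 8-bit "limbs",
     * least significant byte first -- the software equivalent of
     * chaining ADC (add with carry) on an 8-bit CPU. */
    static void add64(const uint8_t a[8], const uint8_t b[8], uint8_t sum[8])
    {
        unsigned carry = 0;
        for (int i = 0; i < 8; i++) {
            unsigned t = a[i] + b[i] + carry;
            sum[i] = (uint8_t)t;   /* keep the low 8 bits */
            carry = t >> 8;        /* becomes the next ADC's carry-in */
        }
    }

    int main(void)
    {
        uint8_t a[8] = { 0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0 }; /* 2^32 - 1 */
        uint8_t b[8] = { 0x01, 0, 0, 0, 0, 0, 0, 0 };          /* 1 */
        uint8_t s[8];
        add64(a, b, s);
        for (int i = 7; i >= 0; i--)
            printf("%02X", s[i]);  /* prints 0000000100000000 = 2^32 */
        printf("\n");
        return 0;
    }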

> >  4. No assumptions about the video hardware at all except the INT 10
> >  interface should be made. Additional video cards would be possible
> >  because they would simply reconfigure themselves to use the next int
> >  number up. =) ( Goodbye 640 k barrier!)
> 
> The 640k barrier has been gone since ~1984 or so (286), and has been easy to 
> bypass since 1986/1987 (386).
> 
> >  most programs of that era were written in ASM; today they are written
> >  in C; in the future we can't tell yet, but our software must use
> >  interfaces that are generic enough to use them... Instead of a call
> >  based system that uses a C calling convention, we should use a generic
> >  message passing system that would give a textual message that would
> >  then be interpreted by the server.
> 
> That's incredibly slow. The C calling convention is quite fast and very 
> simple. In fact, switching to text would make programs very hard to 
> write. Why switch when all the effects are negative?

Anyone who thinks text messages are a good way to communicate should try a
Linux (*nix?) game called RealTimeBattle.  Sure, lots of TCP/IP programs use
text messages, but there are better ways to bridge the platform barrier.  I
like binary formats, as long as they're not proprietary.
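
Just to show what the overhead argument is about, here's a toy comparison
(the function names and message format are made up for illustration): a
plain C call next to the same request encoded as text that a "server" has
to parse before it can do any real work:

    #include <stdio.h>
    #include <string.h>

    /* Direct call: the C calling convention hands over two ints in
     * registers or on the stack -- a few instructions total. */
    static int add_direct(int a, int b) { return a + b; }

    /* Text message: the same request as a string the server has to
     * tokenize and convert before it can even start the real work. */
    static int add_text(const char *msg)
    {
        char op[16];
        int a, b;
        if (sscanf(msg, "%15s %d %d", op, &a, &b) != 3 ||
            strcmp(op, "ADD") != 0)
            return 0;  /* parse-error handling the direct call never needs */
        return a + b;
    }

    int main(void)
    {
        printf("%d\n", add_direct(2, 3));    /* 5 */
        printf("%d\n", add_text("ADD 2 3")); /* 5, after parsing */
        return 0;
    }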

-- 
Tom Novelli <tcn@tunes.org>