Massimo Dentico m.dentico@teseo.it
Sat, 11 Mar 2000 14:55:56 +0100

jan coombs wrote:
> Massimo Dentico wrote:
> >
> > cLIeNUX user wrote:
> > >
> > > Paralyn@mediaone.net...
> > > >
> > > >cLIeNUX user <r@your_host.com> wrote
> > > >> My understanding is that virtual address spaces for processes are
> > > >> typically provided by an MMU, and must be done in hardware
> > > >
> > > >For performance reasons, it usually is.  For protection of machine language
> > > >programs, I can't think of a good way to do it in software.
> > Sorry, Rick, but I disagree completely. Hardware protection is not
> > cheap. It's expensive in terms of:
> > 1) chip design,
> > 2) hardware resources (transistors) that one could otherwise
> > utilize for other functions,
> > 3) overhead at run-time (somewhat large, depending on the
> > hardware/software combination).
> The hardware is expensive for full functionality: an adder
> the width of the address bus to relocate addresses and
> similarly sized comparators to check the range of addresses
> being mapped. What I'd like to know is whether an MMU with
> minimal functionality would be useful, for example:
> the mapping window size option is restricted to a simple log
> series, for example 2,3,4,6,8... Windows are aligned on
> natural boundaries for both true and buffer addresses. Under
> these restrictions, the adder for relocation is not needed,
> and much of the comparator logic is not required.
> Would this make a useful compromise?

Probably, yes. I'm not a hardware expert, so I base my three
assumptions on what is well known about hardware. A useful answer
would require a detailed analysis of the interactions between this
hardware architecture and software, but also a comparison with
traditional architectures. That is beyond my actual knowledge of
the hardware.

However, my conjecture is that in moving from hardware-based
protection to software-based (proof-based) safety you gain big
advantages; this in the general context where more and more features
have moved from hardware to software (RISC, MISC, programmable logic
like FPGAs).

Let me cite myself:

> > They [formal methods] don't impose a run-time overhead and they
> > could assure a higher safety level (**in theory, any software
> > property that can be proven**).

[I have added the emphasis now.] The big challenge is to turn the
theory into common practice. Proof-Carrying Code (PCC) is a step
toward this goal. Interested readers could have a look at the work
of Peter Lee and George C. Necula, for example Necula's thesis with
the meaningful title "Compiling with proofs":

- http://raw.cs.berkeley.edu/Thesis/thesis.pdf
- http://raw.cs.berkeley.edu/Thesis/thesis.ps

or other related papers:

- http://www-nt.cs.berkeley.edu/home/necula/public_html/papers.html

In any case, I don't disdain your approach: on the contrary, simple
but specialized features in hardware, if well selected, could make
the difference in terms of performance without requiring complicated
and expensive hardware (general-purpose processors vs. specialized
co-processors). I think that this form of specialization is
complementary to the simplification of the hardware (moving features
from hardware to software).

Massimo Dentico