Security (was: a no-kernel system)

Francois-Rene Rideau rideau@clipper
Wed, 28 Dec 94 4:05:56 MET


>>    Raul raised the security problem. I say that the no-kernel concept
>> allows far more security than any kernel.
> 
> It doesn't make sense to say that nothing allows more security than
> something.
   Yes, it does: it means something that people are more secure in a democratic
country than in a totalitarian one. In a free country (which does *not*
mean anarchy), people abide by *laws*; in a totalitarian country, people
*obey* some centralized power. In the first, they must respect each
other; in the second, they will all try to corrupt the power so that it is
kind to them. In the first, people voluntarily, knowingly, and under their
own control delegate part of their freedom to someone else, and stand ready
to reclaim it if ever the trustee proves unworthy. In the second, people
must suffer attacks from bandits organized into mafias, and are subject to
generalized crashes whenever an agent of the centralized power is
corrupted.
   In totalitarian countries, either things are not done, or they are done
by bandits who grabbed the power (see chmod +s under unix, and direct I/O
port access in MS-DOS and even linux). In free countries, nobody need grab
power to have things done: everyone requests just the power s/he actually
needs, and no more.


> There is no way you can weed out all the crashable programs.  A simple
> example is a program waiting for an event that will never happen.
> Another example is faulty hardware, when a bit in a "safe" program
> becomes corrupt (or you use FDIV to lookup pointer values).  The ONLY way
> around this is hardware isolation of programs and recovery mechanisms.
> This is a necessity if we want persistent systems.

   Of course there is! That's just what program proof is all about!
Sometimes you can't prove everything you need about a program; but you
may simply refuse to run any program that wasn't proven correct with
respect to some crashing criterion.
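
   To give a rough idea of such a gate in modern OCaml syntax (all names
here are invented for the example, and the checker is a mere stub standing
in for a real proof verifier): the loader agrees to execute code only when
it comes with a certificate it can verify, and verifying the certificate
is decidable even when finding the proof is not.

    (* Hypothetical sketch of a "run only if proven" gate.  The types and
       the stub checker are illustrative only; a real verifier would check
       an actual proof object against a non-crashing criterion. *)
    type program = Prog of string         (* opaque compiled code *)
    type certificate = Cert of string     (* proof object supplied with it *)

    (* Decidable check: the certificate either validates or it doesn't.
       Stubbed out here. *)
    let certificate_valid (Prog _) (Cert c) = c <> ""

    let run_if_proven prog cert execute =
      if certificate_valid prog cert
      then Ok (execute prog)              (* proof checked: safe to run *)
      else Error "rejected: no proof of the crashing criterion"
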
   For example, if you allow only strongly-typed programs, then you
know that no object faking is possible. If you ensure local scoping,
then you know that no object can "pirate" another one to read its
private variables. And both of these checks are decidable! For more
complex programs, you may require a program proof before running the
program; verifying a proof is also a decidable algorithm. You may
ask for just a proof that the system won't crash, which is much
easier to obtain than a proof that the system will work as actually
expected by the user (unless you added a crash clause for the case where
the system doesn't work as expected, in which case *you* are responsible).
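
   A small OCaml sketch of the first two decidable guarantees (the Account
module and its operations are made up for the example): the abstract type
means no client can forge an object of that type, and the locally scoped
representation means no client can reach the private field except through
the exported operations.

    (* Minimal sketch, assuming an OCaml-like module system. *)
    module Account : sig
      type t                              (* abstract: cannot be forged outside *)
      val create  : int -> t
      val deposit : t -> int -> t
      val balance : t -> int
    end = struct
      type t = { balance : int }          (* representation is locally scoped *)
      let create b = { balance = b }
      let deposit a n = { balance = a.balance + n }
      let balance a = a.balance
    end

    (* Clients may only use the exported interface:
         let a = Account.create 100 in Account.balance (Account.deposit a 42)
       Building or inspecting the record directly, e.g.
         { balance = 1_000_000 }   or   a.balance
       is rejected at compile time, which is exactly the decidable check. *)
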
   As for faulty hardware, no OS will ever correct it. If you made the
system believe that the hardware behaved in some way when it actually
didn't, then *you* are responsible, not the system. If you trust floating
point operations, then *you* are responsible. Personally, I would never
accept a program that relied on floating-point hardware to ensure system
consistency; but if you have exact hardware specifications, and you trust
them, nobody prevents you from doing it; *you* are responsible. If the
hardware does not behave as the software *relies* on it to, then you can't
prevent crashes (see for example i386/i486 differences: if you trust that
you are running on an i486, and use i486-specific instructions/behavior,
then you're responsible for a crash if you are actually running on an
i386). If you distrust some hardware feature, then just do not rely on it;
easy. Really, you're just overstating the FDIV bug.
   That's always the same motto: do not make decisions for others when
they are the ones responsible. Provide advice, provide software modules,
but let people decide for themselves. If you don't, they'll choose anyway,
and I hope they won't choose you.

   Now, as for program isolation, I agree it is the only remaining
solution *when no proof is available*. But it carries tremendous overhead,
which should be avoided whenever possible. We want fine grain; we want
*any* object to be a system object; otherwise systems will have to be
built on top of ours to manipulate them, and people won't really use our
system, which will be nothing but overhead. Do you imagine each integer
being isolated from the others, with messages being passed just to add two
integers? Be serious. Program isolation is for when a program is not
trusted by the system, e.g. when using a unix/dos/whatever binary
emulation, and/or when some user wants the system to accept something that
it refuses to believe. In the usual cases, a combination of compile-time
and run-time proof-checking is *far* cheaper and more efficient.
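
   To make the grain argument concrete, here is a toy OCaml contrast (the
"adder server" below merely simulates a protection boundary with a message
type; a real boundary would cross an address space, which costs far more
still):

    (* Fine grain: the type system already guarantees these are integers,
       so the addition compiles down to a single machine operation. *)
    let direct_sum a b = a + b

    (* Isolation style: even adding two integers means packaging a request,
       handing it to a separate "server", and unpacking the reply. *)
    type request = Add of int * int
    type reply   = Sum of int

    let adder_server (Add (a, b)) = Sum (a + b)

    let isolated_sum a b =
      match adder_server (Add (a, b)) with
      | Sum s -> s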