Security (was: a no-kernel system)

Francois-Rene Rideau rideau@clipper
Tue, 3 Jan 95 18:32:49 MET


> How did we start talking about countries?
   Well, *I* did start that comparison. The Darwinist/liberalist idea
about the relation between freedom and selection of the fittest holds
in any context where a system evolves dynamically under some "fitness"
criterion deciding which traits survive.


>>> There is no way you can weed out all the crashable programs..
>>    Of course there is ! That's just what program proof is all about !
>> Sometimes you can't prove all you need about a program; but you
>> may simply refuse to run a program that wasn't proven correct with
>> respect to some crashing criterion.
>>    As for faulty hardware, no OS will ever correct it.
> We can reduce the likelihood of an individual component fault causing a 
> system fault.  See symmetric fault-tolerant CPU designs, or memory error 
> recovery hardware.  These greatly decrease hardware failures.
   Either you trust individuals, or you don't; or you may trust them
"statistically"; but if you do not trust your hardware at all, there is no
point in using it: a fault may well occur while the OS itself is executing.
No OS I know of is resilient to a CPU/motherboard hardware fault. If you know
of such a thing, please show me. I do know of "coded processors" that are 50
times slower and less powerful than usual ones but are resilient to
individual bit modifications due to external causes (cosmic rays et al.);
but the people who program them do trust the hardware !
   As for software faults, proving software correctness *does* eliminate
all possible faults (or else you have just proven logic itself
inconsistent !).
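
   To make "proven correct with respect to some crashing criterion" concrete,
here is a minimal sketch in a proof-assistant notation (Lean-style, purely
illustrative; the name safeDiv is made up and not tied to any system discussed
here): the type of the function demands evidence that the divisor is non-zero,
so the division-by-zero "crash" is ruled out before the program is ever run.

-- Hypothetical illustration: the "no crash" criterion is carried in the
-- type.  A caller must supply a proof that the divisor is non-zero
-- before the checker accepts the program at all.
def safeDiv (a b : Nat) (h : b ≠ 0) : Nat := a / b

-- Accepted: the obligation `2 ≠ 0` is discharged by `decide`.
example : safeDiv 10 2 (by decide) = 5 := rfl

-- A call dividing by zero would need a proof of `0 ≠ 0`, which does not
-- exist, so such a program is rejected before it can ever crash.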


>> If you made the
>> system believe that the hardware was such, whereas it wasn't, then *you*
>> are responsible, not the system. If you trust floating point operations,
>> then *you* are responsible.
> 
> What are you trying to say?  You must agree _nothing_ is perfect.  Every 
> computer will fault eventually, every one!  We must be able to recognize 
> this.
   I'm trying to say that if one does not trust one's "add register values"
instruction anymore, then one should just throw out one's computer, and buy
a brand new one; and if one doesn't trust anyone at all throughout the world,
then one oughta be in a psychiatric hospital, or commit suicide.
   Computers are *finite* machines, so yes, we can expect perfection from
their finite combinatorics. As for cosmic rays and such, no OS will ever
eliminate them (though I know of no problem that could be blamed upon them);
and if you want a cosmic-ray resilient program, then you should use a
cosmic-ray resilient processor.


> And do what we can to deal with this.  On an old PC, you toggled 
> the power switch.  Can we do better?  Then what will it be?
   Toggling power is not a software fault, and won't cause erroneous
software behaviour. Recovery from a power-off is the very basis of a
persistent system; an OS like Grasshopper already achieves this, though
at a page-level grain.
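
   As a rough sketch of what page-grained persistence means (plain Unix
mmap/msync here, my own illustration, not Grasshopper's actual mechanism):
the program's state lives in a file-backed mapping, and flushing whole pages
to stable storage is what lets that state survive a power-off.

/* Hypothetical illustration of page-grained persistence with plain
 * Unix primitives (not Grasshopper's mechanism): state lives in a
 * file-backed mapping and survives a power cycle once its pages have
 * been flushed to stable storage. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STORE_SIZE 4096                 /* one page of persistent state */

int main(void)
{
    int fd = open("store.img", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, STORE_SIZE) < 0)
        return 1;

    /* Writes through `state' are writes to (cached) pages of store.img. */
    char *state = mmap(NULL, STORE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (state == MAP_FAILED)
        return 1;

    printf("state from the previous run: \"%s\"\n", state);
    strcpy(state, "hello from the last run");

    /* Flush the dirty page; from here on, the new state survives
     * a power-off. */
    msync(state, STORE_SIZE, MS_SYNC);

    munmap(state, STORE_SIZE);
    close(fd);
    return 0;
}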


> Please step into reality here.
   I'm firmly in reality, and unless you make me doubt logic itself, I'll
stay confident that correctness proofs are the best way of achieving complete
system security at development-time expense, while program isolation will
only offer a little security at great run-time (and maintenance-time) expense.


>>    Now, as for program isolation, I agree that's the only remaining
>> solution *when no proof is available*. But it means incredible overhead,
>> that should be avoided when possible.
> 
> Please illuminate me on this incredible overhead.  My little 8086 
> programs can hum along in protected mode on my 486.
   Just see how each single system call is treated: first a switch to
protected mode, then back to DOS real mode (or Linux user mode) to complete
the call, then back to V86 mode through the ring-0 monitor, each time a fault
occurs. Small programs that do not use hardware capabilities are not much
slowed; but they could very well be compiled in protected mode, presumably
using a safe compiler that would produce safe code. And if you really do
heavy hardware I/O, you'll see that any truly protective and fully emulating
system will dramatically *slow* everything down (see the Linux DOS emulator);
so yes, there is an incredible overhead.
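
   For a rough feel of what a single protection-domain crossing costs (a toy
measurement of my own, Linux-specific, and the numbers vary wildly with
hardware), compare a do-nothing function call with a do-nothing system call:

/* Hypothetical micro-benchmark: a do-nothing function call versus a
 * do-nothing system call (getpid), to get an order-of-magnitude feel
 * for the cost of crossing into the kernel and back.  Linux-specific;
 * syscall(SYS_getpid) bypasses any library-level caching of the pid. */
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/time.h>
#include <unistd.h>

#define N 1000000L

static long dummy(void) { return 42; }  /* never leaves user mode */

static double seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    volatile long sink = 0;
    double t0, t1, t2;
    long i;

    t0 = seconds();
    for (i = 0; i < N; i++)
        sink += dummy();                /* plain call (maybe even inlined) */
    t1 = seconds();
    for (i = 0; i < N; i++)
        sink += syscall(SYS_getpid);    /* user -> kernel -> user each time */
    t2 = seconds();

    printf("function call: %.3f us each\n", (t1 - t0) * 1e6 / N);
    printf("system call:   %.3f us each\n", (t2 - t1) * 1e6 / N);
    return 0;
}

   And that is only the cheapest possible crossing; emulating faulting I/O
instructions through a monitor multiplies the cost further.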
   As for dynamically self-modifying code, I already admitted that if such
code could not be proven correct, yes, program isolation would be needed
(though I wouldn't use V86 mode for that).

   But even for normal protected-mode operation, a system like Linux
introduces monstrous space and time overhead, because the virtual
address mapping is completely changed at each context switch, thus defeating
any cache technique and refilling the cache at 8 MHz. Even when not
switching tasks, interrupt management involves *slow* priority switching.
   And access-rights management in a kernel-based system is also an
incredible overhead, while user-level kernel extensions involve a considerable
number of context switches each time a call-back is needed (see a user-level
virtual memory unit that calls sys_mmap in a signal handler -- ugh).
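
   To illustrate the call-back dance I mean by a user-level virtual memory
unit, here is a minimal Linux-flavoured sketch of my own (not taken from any
system above): a SIGSEGV handler supplies missing pages by calling mmap, at
the price of a full trip through the kernel for every single fault.

/* Hypothetical sketch of a "user-level virtual memory unit": every
 * first touch of a page in the managed region traps into the kernel,
 * which delivers SIGSEGV to this handler, which calls mmap (another
 * system call) to supply the page -- one full kernel/user round trip
 * per fault. */
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE        4096UL
#define REGION_SIZE (16 * PAGE)

static char *region;                    /* reserved but inaccessible */

static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    uintptr_t addr = (uintptr_t)si->si_addr;
    (void)sig; (void)ctx;

    /* Faults outside our managed region are genuine errors. */
    if (addr < (uintptr_t)region || addr >= (uintptr_t)region + REGION_SIZE)
        _exit(1);

    /* Supply the missing page by calling mmap from the handler. */
    if (mmap((void *)(addr & ~(PAGE - 1)), PAGE,
             PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
        _exit(1);
}

int main(void)
{
    struct sigaction sa;
    unsigned long i;

    /* Reserve address space with no access rights at all: every first
     * touch of a page in it will bounce through fault_handler above. */
    region = mmap(NULL, REGION_SIZE, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;

    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* Each first touch costs: fault -> kernel -> signal delivery ->
     * handler -> mmap (a further system call) -> back to this loop. */
    for (i = 0; i < REGION_SIZE; i += PAGE)
        region[i] = 1;

    printf("demand-mapped %lu pages from user level\n", REGION_SIZE / PAGE);
    return 0;
}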