An approach for implementing TUNES
June Kerby
magister@ns4.tecinfo.net
Sat, 10 Apr 1999 21:05:38 -0500 (CDT)
> > Hmm... As I said before, specifying a PC, although far from trivial, seems
> > fairly practical to me.
> Well, you have convincing arguments that it is not only possible
> but actually feasible. The difficult part is now to find the proof system
> infrastructure in which to do it, that we can later integrate to Tunes...
Yes... would it be possible to do it in Coq or ACL2? (As you probably
already know, I have not used either of these.)
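To make that concrete for myself, here is roughly the shape I would
expect a machine specification and a correctness proof to take in such
a system. I have written it in Lean-style syntax rather than Coq or
ACL2, and every name in it is made up for illustration:

  -- A toy machine whose whole state is one natural-number register.
  structure Machine where
    reg : Nat

  -- The only instruction: increment the register.
  def step (m : Machine) : Machine :=
    ⟨m.reg + 1⟩

  -- A tiny "program": run the instruction n times.
  def run : Nat → Machine → Machine
    | 0,     m => m
    | n + 1, m => step (run n m)

  -- A correctness theorem: n steps add exactly n to the register.
  theorem run_adds (n : Nat) (m : Machine) :
      (run n m).reg = m.reg + n := by
    induction n with
    | zero => rfl
    | succ n ih => simp only [run, step, ih]; omega

A real PC model would of course be enormously bigger, but I would hope
the statements and proofs keep this general shape.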
>
> > I guess I've actually been talking about a subset of a
> > PC specification the whole time... We don't need specification of those
> > model-dependent-registers or whatever they're called on the new processors,
> > or any other freaky features that you can find on most hard disks, video
> > cards, etc..
> The problem is, you might have to, for some purposes, though not for others.
> For instance, proving the correctness of the persistent store requires
> a model of the hard-disk system (well specified with SCSI; not so with IDE).
It certainly does, but that does not mean we have to specify useless
extensions to the IDE standard. We only have to specify enough so that a
working TUNES system with a persistent store can be found by the logic system.
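For instance, for the persistent store I would expect a disk model
like the following to be enough: it says nothing at all about vendor
extensions, only that reading a sector gives back the last thing
written there. (A sketch in Lean-style syntax again; every name in it
is made up.)

  -- A deliberately partial disk model: sectors map to blocks, and
  -- only the two operations a persistent store needs are specified.
  -- Vendor extensions simply do not appear in the model.
  abbrev Sector := Nat
  abbrev Block  := List UInt8
  abbrev Disk   := Sector → Block

  def writeSector (d : Disk) (s : Sector) (b : Block) : Disk :=
    fun s' => if s' = s then b else d s'

  def readSector (d : Disk) (s : Sector) : Block :=
    d s

  -- Reading a sector after writing it returns what was written ...
  theorem read_after_write (d : Disk) (s : Sector) (b : Block) :
      readSector (writeSector d s b) s = b := by
    simp [readSector, writeSector]

  -- ... and writing leaves every other sector alone.
  theorem read_other (d : Disk) (s s' : Sector) (b : Block)
      (h : s' ≠ s) : readSector (writeSector d s b) s' = d s' := by
    simp [readSector, writeSector, h]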
> Proving the correctness of drawing routines requires a model of the video
> (beware that things are often not exactly the same when done by the video
> hardware, and that discrepancies should "leak" into a security breach
> when misspecifying what happens with video or other inexact I/O).
That is true... If we specify that operation FOO only puts a pixel on
the screen, when it actually makes the monitor blow up, then anyone who
has the security privilege to put a pixel on the screen will be able to
perform operation FOO and thus make the monitor blow up. Fortunately,
though, I don't think many operations like FOO exist (operations that
normally do innocent things, but on some incompatible variants cause
destructive activity). The safe thing would be to specify only
operations that work pretty much the same on all machines. In the last
example, had we simply not specified FOO at all, then it could not have
been proven that FOO was within the user's security privileges (unless
of course he was a "root"-like user who was permitted to do anything),
and so he could not have performed it.
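In other words, leaving FOO unspecified makes it unprovable, and hence
unperformable. A tiny sketch of what I mean (Lean-style syntax, all
names made up):

  -- Operations are only executable when the specification pins down
  -- their effect; FOO is deliberately left unspecified.
  inductive Op where
    | putPixel (x y : Nat)
    | foo                 -- destructive on some incompatible variants

  inductive Effect where
    | drawsPixel (x y : Nat)

  def spec : Op → Option Effect
    | .putPixel x y => some (.drawsPixel x y)
    | .foo          => none

  -- A (non-root) user may run an operation only if its effect is
  -- specified, so that it can be checked against his privileges.
  def allowed (op : Op) : Bool :=
    (spec op).isSome

  example : allowed (.putPixel 3 4) = true  := rfl
  example : allowed .foo            = false := rfl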
>
> > Wouldn't it be possible for a proof system to _generate_ a
> > compiler using the definition of the language and the definition of a PC
> > subset?
> Yes and no. There is no canonical compiler, even less a canonical way
> to get an *efficient* "optimizing" compiler; however, there are ways
> to specify compiler-generating "tactics" that are known to be correct
> independently from the parameters chosen to "tweak" the compiler efficiency.
> I believe the Proteus project at UNC had this in mind.
Hmm... I'm going to have to take a look at that...
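In the meantime, here is how I picture a "tactic" being correct
independently of its tuning parameters: the correctness theorem simply
quantifies over them. (A toy-language sketch in Lean-style syntax;
every name is made up.)

  -- A toy source language and its meaning.
  inductive Expr where
    | const (n : Nat)
    | add (a b : Expr)

  def eval : Expr → Nat
    | .const n => n
    | .add a b => eval a + eval b

  -- An optimization pass: constant folding.
  def foldConsts : Expr → Expr
    | .const n => .const n
    | .add a b =>
      match foldConsts a, foldConsts b with
      | .const x, .const y => .const (x + y)
      | a', b'             => .add a' b'

  theorem foldConsts_correct (e : Expr) :
      eval (foldConsts e) = eval e := by
    induction e with
    | const n => rfl
    | add a b iha ihb =>
      simp only [foldConsts]
      split <;> simp_all [eval]

  -- The "tweak" decides whether to run the pass; correctness holds
  -- for *every* value of the tweak, so efficiency choices can never
  -- break the compiler.
  def compile (fold : Bool) (e : Expr) : Expr :=
    if fold then foldConsts e else e

  theorem compile_correct (fold : Bool) (e : Expr) :
      eval (compile fold e) = eval e := by
    cases fold <;> simp [compile, foldConsts_correct]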
>
> > I don't quite understand what you mean by a "level of abstraction".
> > Do you mean at various levels of detail?
> Yes, but by saying "abstraction level", I imply some consistent
> structuration among the properties ("details") that you consider
> at any given moment.
I get what you're saying... In the last example, the physical-particle
level was one level of abstraction, while the instruction-sequence
level was another (higher) one...
>
> > It does seem like generalizing (increasing the
> > level of abstraction?) would make proof easier.
> Every proof is made easiest by choosing the highest level of abstraction
> that fits. Actually, the interesting theorem is often the one that proves
> the correctness of the change in point of view (or successive changes)
> from the one in which the problem is stated until the one in which it
> becomes trivial.
I think I kinda get what you're saying. I'd like to see an example of
such a proof.
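Maybe a miniature version would be something like this: numbers stored
as little-endian bit lists (the low-level view), with increment defined
directly on the bits. The interesting theorem is the one relating the
two views; once it is proved, low-level questions become trivial
arithmetic. (Lean-style syntax, made-up names:)

  -- Low level: a number as a little-endian list of bits.
  def toNat : List Bool → Nat
    | []      => 0
    | b :: bs => (if b then 1 else 0) + 2 * toNat bs

  -- Increment, working directly on the bit representation.
  def inc : List Bool → List Bool
    | []          => [true]
    | false :: bs => true :: bs
    | true  :: bs => false :: inc bs

  -- The change-of-viewpoint theorem: inc at the bit level is +1 at
  -- the arithmetic level.
  theorem toNat_inc (bs : List Bool) :
      toNat (inc bs) = toNat bs + 1 := by
    induction bs with
    | nil => rfl
    | cons b bs ih => cases b <;> simp [inc, toNat, ih] <;> omega

  -- Payoff: a fact about the bit-level code, proved trivially by
  -- moving to the high level.
  example (bs : List Bool) :
      toNat (inc (inc bs)) = toNat bs + 2 := by
    simp only [toNat_inc]; omega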
>
> > It is easier to reason
> > about things when you have less detail to worry about; so it is best to
> > find a pattern in the detail and describe in simpler but more abstract
> > terms. Is that what you mean? I don't understand how that's the same as
> > reflection though (self-modification).
> That's exactly what I mean, and precisely what reflection is all about:
> choosing the right abstraction level at the right moment,
> by fully deconstructing (reifying the semantics of) an object
> to interface it with low-level metaprograms when details are needed,
> and by quotienting it (forgetting the irrelevant) when details
> are no more needed.
>
Again, I think I might understand what you're saying, but a simple
example would be nice.
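Here is my guess at a miniature version (Lean-style syntax, made-up
names): an arithmetic function is reified into a syntax tree so that a
low-level metaprogram can manipulate its details, and the "quotienting"
is the theorem saying the meaning is unchanged, so the details can
safely be forgotten again afterwards.

  -- Reified, detail-rich view: syntax trees for Nat → Nat functions.
  inductive Code where
    | var                 -- the input
    | const (n : Nat)
    | add (a b : Code)

  -- Forgetting the details: interpret the syntax back to a function.
  def denote : Code → Nat → Nat
    | .var,     x => x
    | .const n, _ => n
    | .add a b, x => denote a x + denote b x

  -- A metaprogram that needs the reified view: it rewrites the tree
  -- (here, swapping every addition's operands).
  def swapAdds : Code → Code
    | .add a b => .add (swapAdds b) (swapAdds a)
    | c        => c

  -- The theorem that licenses the round trip: tinkering at the
  -- syntactic level did not change the meaning, so we may forget
  -- the syntax again.
  theorem denote_swapAdds (c : Code) (x : Nat) :
      denote (swapAdds c) x = denote c x := by
    induction c with
    | var => rfl
    | const n => rfl
    | add a b iha ihb =>
      simp [swapAdds, denote, iha, ihb, Nat.add_comm]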
> >> So in practice, it looks like we'll have to begin with empty
> >> specifications and lots of axioms, and slowly specify things.
> > Yes, I guess that is the equivalent of writing TUNES in assembly or C,
> > making lots of vague assumptions (axioms).
> Exactly. Of course, when we begin massively specifying things,
> we may have to completely reimplement "dirty" parts of the system
> so as to be able to achieve provable code.
>
Yes, we would. But I am still wondering if we can escape writing any
"dirty" code at all, by finding or writing a logic system to find a
TUNES, given the TUNES definition and the definition of our target.
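That is, I picture the bootstrap like this: at first the behavior of
each "dirty" routine is just an axiom; proofs built on top of it go
through today, and later the axiom gets replaced by a theorem about a
clean reimplementation, without disturbing anything above it.
(Lean-style syntax, made-up names:)

  -- Stage 1: a "dirty" primitive whose behavior is assumed, not
  -- proved.
  opaque diskSync : List Nat → List Nat

  axiom diskSync_preserves_length :
    ∀ bs : List Nat, (diskSync bs).length = bs.length

  -- Proofs built on the axiom already go through today.
  theorem sync_twice_preserves_length (bs : List Nat) :
      (diskSync (diskSync bs)).length = bs.length := by
    simp [diskSync_preserves_length]

  -- Stage 2, later: reimplement the dirty part in specified code and
  -- discharge the axiom as a theorem; clients above need not change.
  def diskSyncClean (bs : List Nat) : List Nat := bs   -- trivial model

  theorem diskSyncClean_preserves_length (bs : List Nat) :
      (diskSyncClean bs).length = bs.length := rfl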
Thanks again for your comments...
- Brent Kerby ("Iepos")