OS design...

Francois-Rene Rideau fare@tunes.org
Wed, 21 Oct 1998 03:44:46 +0200


>>>,>: David Jeske
>>: Fare

>>> If I compile a C program to LISP, how do I resolve all external
>>> dependencies without running the code? Run-time errors are the enemy.

>> If your C program uses libraries, either those libraries must have
>> equivalents built into the C->LISP translator, or their source must
>> be available and transformed first.
> 
> Yes, but that doesn't answer the above question. How much static
> analysis can one perform on a lisp block?

Just as much as on the C source.
The fact that the LISP was produced from a static program guarantees
that dynamic constructs won't be used, and that static analysis
will be just as successful and accurate.
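
For instance (a purely hypothetical sketch; the translator and its
output conventions are invented for illustration):

    ;; C source: int add(int a, int b) { return a + b; }
    ;; Possible output of a C->LISP translator:
    (defun c-add (a b)
      (declare (type (signed-byte 32) a b))
      (the (signed-byte 32) (+ a b)))

The declarations carry over the C types, and no EVAL or INTERN appears,
so a code walker learns exactly as much as it would from the C itself.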

> To elaborate, imagine I have a C program, and it imports a shared
> library, and I compile this into the LISP machine. Is it possible to
> statically analyze the LISP code and figure out what shared lib this
> program needs?

There is absolutely no reason why this analysis should be easier or harder,
or more or less useful, than it is in C.
Obviously, you'll have to do something corresponding to the C linking stage
as part of the translation from C to LISP.
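
For instance (the form name below is invented; an actual translator
would define its own convention):

    ;; Hypothetical record of the program's link-time dependency,
    ;; emitted by the translator as ordinary, statically visible source:
    (require-foreign-library "libm.so.5"
      :symbols '(sin cos sqrt))

A trivial code walker can then list the shared libraries a program
needs just by scanning for such forms.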


>>> Source is fragile. Source, because of its role in the development
>>> process, is something which can easily fail to compile based on tons
>>> of different interdependent parts.
>>
>> So what? That's what "stable releases" and stable version branches
>> [are] all about. [...]
>
> My experience with source distribution is that it doesn't handle any of
> these very well.
>
I fail to see how binary distribution helps in any way, here.
"Qui peut le plus peut le moins".
You can always extract a binary from the source!
A big problem is that UNIX (and DOS/Win/Mac) sources
use low-level languages with lots of non-portable dependencies
and quirks due to binary compatibility with proprietary "standards",
so that it's hard to isolate the actual semantics of programs.
Use of a standard portable high-level language solves that problem.


>>> Having software 'fail' is not useful to the user. Thus, distributing
>>> it in something close to a traditional 'source' form is not a good
>>> idea.
>> Source code encourages bug fixes and isn't incompatible with stability.
>> See RedHat Linux vs MS Windows.
>
> I'm not arguing against OpenSource. I'm arguing against using source
> as the 'executable software distribution model'.
>
Well, if you think hard enough, you'll see that there's not much difference,
actually. The difference lies in having standard hooks to compile the source,
which is precisely what a package system provides.


>>> Distributing source only works well in the old-school monolithic
>>> software UNIX model.
>> I don't think RedHat Linux is monolithic. Hundreds of RPMs.
>
> On the contrary, I think RedHat Linux is the perfect example of
> monolithic.

Consider every RPM as a module in the system that you may install or remove.
RPMs are the grain of modularity of a RedHat system. Not quite monolithic.

> Every redhat RPM is hard-coded to work in RedHat's idea of Linux. [...]
> [paths, dependencies, versions].

That's not lack of modularity. That's called *ground* modularity,
as opposed to higher-order modularity. TUNES will have the latter.
Also, the low-levelness of C prevents any semantics-based module model.
TUNES may make it much better, with a high-level language.
For instance, even when a new version is incompatible with old ones,
we may describe module dependencies in an extensional, semantic way,
rather than a purely intensional (though versioned) way, as under RPM; also,
we may automatically produce *semantic* adapters from one version to another.
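
A sketch of what such an adapter could look like in LISP (all names
are invented for the example):

    ;; Hypothetical new interface (version 2) of a module:
    (defun open-file/v2 (name &key (direction :input))
      (open name :direction direction))

    ;; Automatically derived adapter presenting the old (version 1)
    ;; interface, so unmodified clients keep working:
    (defun open-file/v1 (name mode)
      (open-file/v2 name :direction
                    (if (string= mode "r") :input :output)))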

> I personally can't get any use out of RPMs, because I refuse to delete
> old versions of an app when installing new apps.
When there's little or no dependency between the set of modules to be upgraded
and the rest of the installed software base, RPM is quite up to the job.
Similarly when upgrading to bug-fix versions (with the same "major" version
number). Just don't blindly replace bash 1.x with bash 2.x!


> The only tools I've seen in the last 8-10 years
> which have improved the manageability of UNIX are: (1) gnu configure,
> (2) encap installation systems (which RPM and the other package
> people seem to have ignored completely).
I don't know about encap.
One thing that RPM does is provide a uniform way to install software:
you just run rpm --recompile *.src.rpm; whereas not all packages use
GNU configure, and even GNU configure behaves (slightly) differently
among packages.


>>> This is not something which should be
>>> happening under the end user [...]
>> The end-user uses pre-packaged co-released things.
>> Again, see SLS, Slackware, RedHat, SuSE, Debian, etc.
>
> I don't understand.. above I was refuting your assertion that source
> is a viable distribution model for runnable software. Now you're
> giving examples of packages which distribute binaries. Did I
> misunderstand your original statements? Are you saying that you
> wouldn't choose to distribute runnable software in source form?
>
I consider source code as the distribution model for RedHat, SuSE, and Debian
at least: it's a matter of rpm --recompile or its equivalent.
Binaries are just "cached" precompiled versions, given as a convenience;
actually, under Linux, binaries are made necessary
by the instability of low-level building tools.
Certainly, even under TUNES, binaries, or even system-images,
may be distributed, too, for the convenience of people installing systems.
But the "standard" way to distribute software should be source code,
like it is under RedHat, though with a finer grain, and high-level semantics.


> I should have said "good VM". Certainly when you go from source to VM
> you compile code from one language to the other. My assertion was that
> two of the qualities which are better in "good VM" than in source for
> distributing software are: (1) robust descriptions of external
> dependencies (i.e. versions, etc) (2) rigidity of the internal
> structure of the program, by nature of it being bound into a single
> image.
>
> I agree with your point above that source, as a distribution form,
> ensures the semantic integrity of the original program.

Then your "good" VM is only some stripped compressed source.
I much prefer a format that allows but doesn't require stripping.
As for describing "external" dependencies, it's again something
to be part of the source. Current technology, when it handles it,
adds it to the source through clumsy version control systems,
using yet another ad-hoc special-purpose language.
TUNES shall add seamless support for versioning in the one unifying language.
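
To illustrate (the syntax below is invented; TUNES defines no such
form yet):

    ;; Version metadata as ordinary source, analyzable like the rest
    ;; of the program, instead of living in a side file:
    (defmodule http-client
      (:version "1.2")
      (:requires (sockets :version (>= "2.0"))))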


>> It's a matter of being able to uniquely identify "external" objects,
>> using some ultimately *global* naming of objects (extensions)
>> and projects (intensions). It's also a matter of being able to track
>> versions/releases of projects, as above.
>
> My point was that in today's "source" model, none of these things are
> done, at all. In fact, they are done extremely poorly even in
> executable targets. However, at least "good VMs" and executable
> formats make an attempt.

That's a problem with the languages in use;
VMs, which can only remove information from sources, never add it,
are no solution. Advanced languages, unlike C/C++, do have builtin
module systems that are better than an uncontrolled flat global namespace
(CommonLISP, OCAML, etc). Some of them even standardize versioning
(BETA, Dylan). None, however, has a builtin notion of global ID.
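
For instance, CommonLISP's package system (a real, standard facility)
gives each module a controlled namespace:

    (defpackage :matrix
      (:use :common-lisp)
      (:export #:transpose))       ; only exported names are public
    (in-package :matrix)
    (defun transpose (m)           ; internal names stay internal
      (apply #'mapcar #'list m))   ; transpose a list-of-lists matrix

Nothing comparable exists in C, where every extern symbol lands in
one flat global namespace.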


## Faré | VN: Ðang-Vû Bân   | Join the TUNES project!  http://www.tunes.org/ ##
## FR: François-René Rideau |    TUNES is a Useful, Not Expedient System     ##
## Reflection&Cybernethics  | Project for a Free Reflective Computing System ##
Anyone who goes to a psychiatrist ought to have his head examined.
                -- Samuel Goldwyn