Tue, 20 Oct 1998 17:29:13 -0700
On Wed, Oct 21, 1998 at 12:54:33AM +0200, Francois-Rene Rideau wrote:
> >: David Jeske
> > If I compile a C program to LISP, how I do resolve all external
> > dependencies without running the code? Run-time errors are the enemy.
> What the hell do you mean?
> If your C program uses libraries, either those libraries must have
> equivalents built into the C->LISP translator, or their source must
> be available and transformed first.
Yes, but that doesn't answer the above question. How much static
analysis can one perform on a Lisp block?
To elaborate: imagine I have a C program that imports a shared
library, and I compile it into the LISP machine. Is it possible to
statically analyze the resulting LISP code and figure out which shared
libraries this program depends on?
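As a toy illustration of the kind of static scan I mean (the `find-dependencies` function and the `(require 'name)` convention are invented for this sketch, not the output of any real C->LISP translator):

```python
import re

def find_dependencies(lisp_source):
    """Statically scan translated Lisp-style source for (require 'name)
    forms, without evaluating any of the code."""
    return sorted(set(re.findall(r"\(require\s+'([\w.-]+)\)", lisp_source)))

translated = """
(require 'libc-shim)      ; stand-in for the C runtime
(require 'libm-shim)      ; stand-in for -lm
(defun main () (sqrt 2.0))
"""

print(find_dependencies(translated))  # ['libc-shim', 'libm-shim']
```

The point is only that this works when dependencies are declared in a fixed, scannable form; if the translated code can compute module names at run time, no static scan can recover them, and you're back to run-time errors.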
> > Source is fragile. Source, because of it's role in the development
> > process, is something which can easily fail to compile based on tons
> > of different interdepdent parts.
> So what? That's what "stable releases" and stable version branches all about.
> See Linux version numbering, CVS branches, etc.
My experience with source distribution is that it doesn't handle any of
these cases very well.
> > Having software 'fail' is not useful to the user. Thus, distributing
> > it in something close to a traditional 'source' form is not a good
> > idea.
> Source code encourages bug fixes and isn't incompatible with stability.
> See RedHat Linux vs MS Windows.
I'm not arguing against OpenSource. I'm arguing against using source
as the 'executable software distribution model'.
> > Distributing source only works well in the old-school monolithic
> > software UNIX model.
> I don't think RedHat Linux is monolithic. Hundreds of RPMs.
On the contrary, I think RedHat Linux is the perfect example of
monolithic. Every RedHat RPM is hard-coded to work in RedHat's idea of
Linux. This isn't RedHat's fault; it's Linux's fault. However, the
fact remains that RedHat packages are basically designed to build one
big monolithic 'castle' where everything is fragile and connected
together by convention.
I personally can't get any use out of RPMs, because I refuse to delete
old versions of an app when installing new ones. Because of UNIX's
broken reliance on install location and physical paths, RPMs are
almost never relocatable. Thus, I have to install all '3rd party'
software from source. I throw a RedHat CD in, let it install its
monolithic castle, and then when I need tools, I have to install them
from source in a modular way... However, they end up just sitting on
top of the RedHat monolithic monster.
> > As software becomes more modular, it becomes very
> > difficult to get the right versions of all the libraries necessary to
> > compile a package from source.
> Which is what RPM is all about (also, make). Certainly it's not perfect.
> I hope we can do better in a fully reflective system, where we can develop
> system-wide consistent tools instead of multi-level kluges
> (rpm, make, configure, CVS, etc).
We can certainly do better, because in my experience RPM doesn't solve
any of my problems. The only tools I've seen in the last 8-10 years
which have improved the manageability of UNIX are: (1) GNU configure,
(2) encap installation systems (which RPM and the other package
people seem to have ignored completely).
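For anyone unfamiliar with the encap idea, here's a sketch of the layout (paths and names are illustrative): each package version gets its own private directory, and the public bin/ contains only symlinks, so installing 2.0 never clobbers 1.0.

```python
import os
import tempfile

# Encap-style layout sketch: <root>/encap/<pkg>-<ver>/bin holds the real
# files; <root>/bin holds only symlinks into one chosen version.
root = tempfile.mkdtemp()
for version in ("1.0", "2.0"):
    pkg_bin = os.path.join(root, "encap", "tool-" + version, "bin")
    os.makedirs(pkg_bin)
    with open(os.path.join(pkg_bin, "tool"), "w") as f:
        f.write("#!/bin/sh\necho " + version + "\n")

os.makedirs(os.path.join(root, "bin"))
link = os.path.join(root, "bin", "tool")
# Point the public name at 2.0; 1.0 stays on disk for instant rollback.
os.symlink(os.path.join(root, "encap", "tool-2.0", "bin", "tool"), link)

print(os.readlink(link))
```

Switching back to the old version is just retargeting one symlink, which is exactly the property hard-coded RPM install paths give up.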
> > This is not something which should be
> > happening under the end user [...]
> The end-user uses pre-packaged co-released things.
> Again, see SLS, Slackware, RedHat, SuSE, Debian, etc.
I don't understand. Above I was refuting your assertion that source
is a viable distribution model for runnable software. Now you're
giving examples of packages which distribute binaries. Did I
misunderstand your original statements? Are you saying that you
wouldn't choose to distribute runnable software in source form?
> > Those are (for me) the distinguishing facets which make a "VM"
> > different from "source".
> Uh? From source to a VM, you compile code from one language to the other,
> losing some of the original specification (semantics or lack thereof),
> and adapting to a given target. The lowest-level the VM, the more information
> is lost and added in the process. Distributing source ensures the semantic
> integrity of the original program, and prevents semantic noise from being
> introduced by the intermediate VM.
I should have said "good VM". Certainly when you go from source to VM
you compile code from one language to the other. My assertion was that
two of the qualities which are better in a "good VM" than in source for
distributing software are: (1) robust descriptions of external
dependencies (i.e. versions, etc.), and (2) rigidity of the internal
structure of the program, by nature of it being bound into a single
compiled unit.
I agree with your point above that source, as a distribution form,
ensures the semantic integrity of the original program.
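To make (1) concrete, a "good VM" module could carry an explicit, versioned dependency manifest resolved at package time. This is only a sketch; the field names and version scheme are invented, not any existing VM's format:

```python
# Hypothetical manifest a "good VM" module might embed, in contrast to
# source, where dependencies hide implicitly in #includes and Makefiles.
manifest = {
    "module": "imageviewer",
    "version": "1.2",
    "depends": [
        {"name": "zlib",   "min_version": "1.0", "max_version": "2.0"},
        {"name": "libpng", "min_version": "0.96"},
    ],
}

def satisfied(dep, installed):
    """Check one dependency entry against a table of installed versions."""
    v = installed.get(dep["name"])
    if v is None:
        return False
    ok = v >= tuple(int(x) for x in dep["min_version"].split("."))
    if "max_version" in dep:
        ok = ok and v < tuple(int(x) for x in dep["max_version"].split("."))
    return ok

installed = {"zlib": (1, 1, 3), "libpng": (0, 96)}
print(all(satisfied(d, installed) for d in manifest["depends"]))  # True
```

With this, an installer can report "needs libpng >= 0.96" *before* running anything, instead of failing at run time (or mid-compile) the way the source model does.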
> > For example, JavaVM is practically source form already,
> > you can run a simple 'decompiler' over it and have the source back.
> JVM sucks. It's a very bad way to transport source; plus it introduces
> lots of noise, and hypes lots of people into being even dumber.
> If you want to transport source-code, go to the real thing.
> See Juice technology.
I've seen Juice... and I wasn't heralding JavaVM as an appropriate
technology. I was merely providing an example of a VM which preserves
the semantic integrity of the original program. Juice is another good
example.
> > The difference is that the VM is specified in a rigid
> > form which has already been resolved against the externalities you
> > wrote into the source code. Those externalities are explicitly listed
> > in the VM representation. I don't think the JavaVM is good enough at
> > this, because I should be able to uniquely resolve a dependency with a
> > better method than just a "class name".
> What difference does that make? It's not a VM vs source issue at all.
> It's a matter of being able to uniquely identify "external" objects,
> using some ultimately *global* naming of objects (extensions)
> and projects (intensions). It's also a matter of being able to track
> versions/releases of projects, as above.
My point was that in today's "source" model, none of these things are
done at all. In fact, they are done extremely poorly even in
executable targets. However, at least "good VMs" and executable
formats make an attempt.
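To sketch what "a better method than just a class name" could look like (this is an illustration, not the JavaVM's actual linking mechanism), a dependency could be pinned to a hash of the exact module it was compiled against:

```python
import hashlib

def module_id(name, code_bytes):
    """Identify a module by name *plus* a hash of its exact contents,
    so one vendor's 'String' can't silently satisfy a dependency
    compiled against another vendor's 'String'."""
    return (name, hashlib.sha256(code_bytes).hexdigest()[:16])

registry = {}
v1 = b"class String { concat }"
registry[module_id("String", v1)] = v1

dep = module_id("String", v1)                 # recorded at compile time
imposter = module_id("String", b"class String { evil }")

print(dep in registry)       # True: the exact module is present
print(imposter in registry)  # False: same name, different contents
```

Name-only resolution, by contrast, would happily link against the imposter; that is the gap I'm claiming class-name lookup leaves open.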
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + firstname.lastname@example.org