Protocols (was: Oops)

Fare Rideau rideau@nef.ens.fr
Sun, 17 Aug 1997 22:55:51 +0200 (MET DST)


>>>: Fare
>>: Jecel
>: Alaric

[About inefficiency of human-readable protocols
for hardware message transmission]
>> The old PET computer used to talk to its disk drives using BASIC
Well, Commodore has never been considered to have remotely fast drives
(didn't the C64 have a 9600bps serial link to its drives?),
which is one reason why people who could afford it preferred the Apple][.
Maybe they considered that their disks were too slow anyway for
the protocol to be worth optimizing.

>> And Postscript was designed as a human
>> readable (and programmable) way to talk to printers.
>
> Sadly a bit complex, though :-(
>
It happens that (except when using Microsoft bloatware),
printing speed, not transmission speed, is what limits the printing process.
Still, people end up sending uuencodish compressed stuff
to PostScript printers! What a shame!
BTW, PostScript would have been fine had they designed it as a really
clean language instead of a kludge of an interpreter (PS1 lacked GC,
and used brain-damaged dynamic binding). See Functional-PostScript
for a better way to do this.
   To sum up: the semantics of PostScript is still too low-level,
while its encoding is uselessly "high-level".

>> I consider
>> both these efforts (including using Postscript for windowing -
>> NeWS and NeXTStep) to have had good results, even if they didn't
>> really "catch on".
I think the above reasons are enough for them to prove harmful standards
in the long run. Of course, another reason why they didn't "catch on"
was the proprietariness of the software. PS and NeWS uselessly added
several hundred dollars to the price of a machine! Go tell customers that
software designers are so stupid they require that you pay more so that
*they, the software people,* can program more easily! Note that I'm a customer, too!

> Indeed; and the MLM protocol wouldn't be used for EVERYTHING. Just
> tasks on the kind of level that humans are likely to be interested in.
If you remove "actual transmission of data in everyday life" from
things that humans are really interested in (except during debugging phases),
then I agree.

> Anything needing performance can go use byte streams and net blits
> for all I care... I was planning on making a /dynamic/ protocol stack for
> ARGON's IPC, you know!
Good.

>> When the ARPAnet was developed, there were so many different
>> machines that it was obvious that it would take a long time to
>> develop clients/servers for all of them for any new protocol
>> that was proposed. So they always did things so a person could
>> replace the client or server software when needed to get things
>> going. That is why you can use telnet to manually interact with
>> SMTP, FTP (most of it), POP3, NNTP, HTTP and so on. Newer
>> protocols have come out of the OSI effort, and have the nasty
>> tendency to have binary encoding that requires complex software
>> to make it work. A little more efficient? Yes. But I hope
>> that Tunes takes into account these older experiences I have
>> mentioned here.
> Commit yourself, Fare... what kind of network protocol DO you want? :--)

Here is how I see things:
when you have to define protocols,
you begin with the semantics that you want to achieve
(e.g. some kind of lambda-calculus for arbitrary computations),
and you let the system infer an automatic encoding for it
(e.g. the direct specialization of a default standard universal encoding).
Once your semantics is debugged, you can modify the underlying encoding
and optimize it for speed of operation, after having profiled your code's use
and defined an abstract model of speed (taking into account parsing,
encoding, compression, transmission, decompression, decoding, unparsing,
and whatever "steps" there might be).
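   As a minimal sketch of this semantics-first approach (in OCaml; the
message type and the wire vocabulary below are purely illustrative),
the high-level semantics is just a datatype, and the default encoding
is the kind of generic, human-readable format a system could derive
mechanically from its shape:

    (* High-level semantics: what messages *mean*, not how they travel. *)
    type message =
      | Ping
      | Eval of string          (* an expression to evaluate remotely *)
      | Result of int

    (* A default universal encoding, mechanically derivable from the
       shape of the type; good enough to debug the semantics with. *)
    let encode : message -> string = function
      | Ping     -> "(ping)"
      | Eval e   -> Printf.sprintf "(eval %S)" e
      | Result n -> Printf.sprintf "(result %d)" n

Only once this works would you bother replacing `encode' with something
faster or denser.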
   This is where reflection happens: it is the programming style that
ensures that the low-level protocols that you get to define are indeed
(by construction) an (exact or approximate) implementation
of the high-level semantics. And when there is approximation
(such as int32 for an int, etc), the system becomes aware of it,
and can (at least in safe mode) statically or dynamically detect "overflows",
and take measures, instead of going crazy. If the programmer has an idea
for optimizing the encoding, he can go ahead, try it, and benchmark it,
and possibly batch-benchmark several versions of it, and then choose
knowingly. And of course, every such benchmark is done on real working
optimized code, not on a pseudo "prototype". Several encodings can
co-exist, optimized for various uses (least storage, fastest execution,
or any imaginable measure), and going from one to the other is trivial,
and translators can be inferred automatically by the system
(programmer help welcome for optimization).
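   Here is what such an overflow-aware approximation could look like
(a sketch, with hypothetical names; the point is that the system knows
int32 only approximates the ideal int, and checks the boundary in safe
mode instead of silently wrapping):

    exception Encoding_overflow of int

    (* The encoding approximates an unbounded int by 32 bits; in safe
       mode, the known approximation boundary is checked explicitly. *)
    let encode_int32 (n : int) : int32 =
      if n > Int32.to_int Int32.max_int || n < Int32.to_int Int32.min_int
      then raise (Encoding_overflow n)  (* take measures, don't go crazy *)
      else Int32.of_int n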
   By contrast, non-reflective programmers (most everyone, these days),
even though they may have an abstract model in mind, start *coding*
from the low level up, and have a hard time proving that their code complies
with any high-level semantics. Most likely, they'll never even try.
Modifying and tweaking encodings is hell; any slight modification
can introduce a bug; reorganizations of the low-level encoding are almost
never even considered, and any piece of code becomes a legacy and a burden
for years to come. Multiple encodings are hell to deal with, and translators
have to be written the hard way by reading ambiguous paper standards,
then spending hours debugging to comply with actual implementations,
yet without the slightest assurance of code correctness.
   The pseudo-human-readable internet protocols are just a waste due
to non-reflective software development methodologies.
In a reflective framework, you might very well start with human-readable
protocols to debug the high-level semantics. Then you'd move to better
encodings. Seamlessly. And if you suspect a problem, you can still
debug everything through a parsing/unparsing filter. And you can seamlessly
modify the underlying protocol in a way that conservatively preserves
the high-level semantics; hence, you do not debug in "layers",
but in "functors". There are no more "protocol stacks",
but "protocol combinations".
   And I disagree with Jecel's allegation that binary encodings ought to
be more complex than text encodings. After all, the latter are a particular
case of the former. Anyway, metaprogramming allows automatic generation
of the code that handles (both encodes and decodes, consistently) binary or
text encodings from a high-level specification and an "implementation tactic",
so the programmer is relieved of having to handle the possible underlying
complexity (he still might do so for optimization purposes, if he wants).
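   Concretely (still a sketch; the tactics and their vocabulary are made
up), the same specification can be rendered by a text tactic or a binary
tactic, and the text one is visibly just one particular encoding choice
among others:

    type tactic = Text | Binary

    let encode_result (tac : tactic) (n : int) : string =
      match tac with
      | Text   -> Printf.sprintf "(result %d)" n
      | Binary ->
          (* 4-byte big-endian payload; in safe mode one would plug in
             the checked encode_int32 sketched above instead. *)
          let b = Bytes.create 4 in
          Bytes.set_int32_be b 0 (Int32.of_int n);
          Bytes.to_string b

The decoder for each tactic would be derived symmetrically from the very
same specification, so the two sides cannot drift apart.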
   To conclude, let's reuse existing low-level protocols.
Then, we can bootstrap the system by using SEXP as a default
high-level meta-protocol with which to further negotiate reflective encodings.
And as for using e-mail to exchange Tunes packets, I've already demonstrated
with the TUNESADM script how e-mail+PGP+tar+zsh could be used as a way
to do useful batch RPC, with updating the Tunes server as an example...
Replacing zsh with the Tunes HLL- as the backend would bring what we want,
once we have the HLL- [it would then be time to integrate tar, PGP,
and e-mail into Tunes].
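   For the SEXP bootstrap, the handshake itself can stay human-readable
(a sketch; the negotiation vocabulary below is invented):

    (* Both sides speak S-expressions by default, and use them to
       negotiate an encoding upgrade; on any trouble, they can fall
       back to plain SEXP. *)
    let offer  = "(negotiate (encodings binary-v1 sexp))"
    let accept = "(use binary-v1)"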

   n-HUFF sed.

PS: I'm also bouncing to the list an old message that I think didn't make it
to it. Sorry if it did...

PPS: I will send a patch wrt 0.0.0.34 this week, but it won't be 0.0.0.35 yet.

== Faré -=- (FR) François-René Rideau -=- (VN) Đang-Vũ Bân -=- rideau@ens.fr ==
Join a project for a free reflective computing system! | 6 rue Augustin Thierry
TUNES is a Useful, Not Expedient System.               | 75019 PARIS     FRANCE
http://www.eleves.ens.fr:8080/home/rideau/Tunes/ -=- Reflection&Cybernethics ==