seeking LispM virtual memory, memory management, and garbage collection information
Wed, 19 Aug 1998 14:50 -0400

    Date: Wed, 19 Aug 1998 15:36 EDT


    > I apologize in advance if the mention of the "J-word" (Java) in this
    > posting is inflammatory.  I am seeking the above materials in support of
    > the freeware JavaOS (JOS) project.
    > ...[etc]...
    > Please reply to me, as I am not on the mailing list.
    > Very Truly Yours,

    Sorry if you've gotten no answer, but the LispOS mailing list seems
    quite dead just now. Periodically it is resurrected, but at the moment
    activity is NIL. Personally I'm not against Java; the objections my
    Swedish hacker friends/relatives and I usually raise concern the
    commercial hype, and that Java is by definition forced to execute on a
    VM. OK, I know I'm throwing stones in a glass house (as the Swedish
    saying goes), since Scheme itself usually compiles to VM code, though
    not by definition.

    As for pointers to the techniques behind the LispMs' good performance:
I missed the first part of this discussion, but some of the things that give
LispMs good performance are:

 1) Hardware for parallel tag checking and operation.  The general principle was
    that you assumed the operation was the common case (e.g. fixnum vs fixnum)
    and in the first cycle of computing the answer you also checked the tags
    in a parallel datapath.  If the assumption was wrong, all side-effects of
    the cycle were suppressed and you took a trap.  This shifts the overhead
    of run-time type checking from the time domain to the hardware domain.

 2) Stack architecture with a good-sized stack cache.  Lisp function execution
    blends well with a stack architecture, i.e. you push operands for a function
    call on the stack as they are recursively computed, and pop values off when
    they are returned.  The large stack cache acted like a large, indirectly
    addressed register file, making access to the top 128 or 256 stack locations
    much cheaper than accessing main memory.  (Conventional architectures generally
    don't have that many registers.)

 3) Microcoded support for many operations.  Many operations which couldn't be
    done in parallel hardware were nevertheless done in microcode (e.g. matching
    supplied arguments to callee's pattern, etc.) instead of having to be done
    in macrocode, making the time domain overhead lower than it would be otherwise.

 4) Hardware help for efficient GC. For ephemeral GC, the hardware remembered any
    writes of ephemeral pointers into non-ephemeral pages, so that later the GC
    could consult that memory instead of having to scan the contents of the pages.

There were many others, but those were (in my opinion) the most significant.
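Item 1 above can be sketched in software. The 2-bit tag layout, the names, and the trap path below are illustrative assumptions, not the real LispM encoding; the point is only the shape of the technique: compute the common-case result speculatively while the tag check proceeds "in parallel", and discard it on a mismatch.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 2-bit tag scheme (an assumption for illustration):
   00 = fixnum, other values = boxed types. Fixnums are stored shifted
   left by 2 so the tag occupies the low bits. */
#define TAG_MASK   0x3u
#define TAG_FIXNUM 0x0u

typedef uintptr_t lispval;

static lispval  make_fixnum(uintptr_t n) { return n << 2; }
static intptr_t fixnum_value(lispval v)  { return (intptr_t)(v >> 2); }

/* Software analogue of the parallel datapath: compute the fixnum sum
   speculatively while checking both tags; on a tag mismatch, discard
   the speculative result and "trap" to generic dispatch (elided here). */
static lispval generic_add(lispval a, lispval b) {
    lispval speculative = a + b;            /* tags 00 + 00 stay 00 */
    if (((a | b) & TAG_MASK) == TAG_FIXNUM)
        return speculative;                 /* common case, one "cycle" */
    assert(!"trap: non-fixnum addition not implemented in this sketch");
    return 0;
}
```

On the real hardware the tag check and the add shared the same cycle; in software the check costs time, which is exactly the overhead the LispM moved into hardware.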
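A toy model of item 2's calling convention (the stack depth, helper names, and "callee" functions are assumptions for illustration, not the LispM instruction set): operands are pushed as they are computed, and the callee pops its arguments and pushes its value.

```c
#include <assert.h>

/* The array stands in for the stack cache: the top entries behave like
   a large, indirectly addressed register file. Size is illustrative. */
#define STACK_DEPTH 256
static long stack_cache[STACK_DEPTH];
static int  sp = 0;

static void push(long v) { stack_cache[sp++] = v; }
static long pop(void)    { return stack_cache[--sp]; }

/* "Function calls": each callee pops its operands, pushes its result. */
static void call_multiply(void) { long b = pop(), a = pop(); push(a * b); }
static void call_add(void)      { long b = pop(), a = pop(); push(a + b); }

/* Evaluate (+ (* 2 3) 4) the way a stack architecture would. */
static long eval_example(void) {
    push(2); push(3);
    call_multiply();   /* stack now holds 6 */
    push(4);
    call_add();        /* stack now holds 10 */
    return pop();
}
```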
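The "remembered" writes of item 4 correspond to what is nowadays called a write barrier with card (here, page) marking. This sketch uses an assumed page size, table size, and ephemeral test, not the actual hardware scheme:

```c
#include <assert.h>
#include <stdint.h>

/* Software analogue of the LispM hardware that remembered writes of
   ephemeral pointers into non-ephemeral pages. All constants below are
   assumptions for the example. */
#define PAGE_SHIFT 12                 /* pretend 4 KiB pages */
#define NPAGES     1024

static uint8_t page_dirty[NPAGES];    /* one "remembered" bit per page */

/* Pretend values below this limit point into the young (ephemeral) area. */
#define EPHEMERAL_LIMIT 0x10000u
static int is_ephemeral(uintptr_t v) { return v < EPHEMERAL_LIMIT; }

static void write_slot(uintptr_t *slot, uintptr_t value) {
    *slot = value;
    /* Mark the page holding the slot whenever it receives an ephemeral
       pointer; the GC later consults only the marked pages instead of
       scanning all of old space. */
    if (is_ephemeral(value))
        page_dirty[((uintptr_t)slot >> PAGE_SHIFT) % NPAGES] = 1;
}
```

The hardware did this marking for free on every write; a software barrier pays a few instructions per store for the same effect.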

    I can't tell, because I'm the least competent in this discussion
    group; I'm simply learning about boot loaders and similar matters.
    (Besides, my main interest is rscheme just now.) Maybe some other
    LispOS reader can tell you about this? [intently listening with my
    hand behind my ear]

    greetings (med vänliga hälsningar, i.e. with kind regards)

    Tomas Kindahl
     Today's slogan: 'Vafan, är kaffet SLUUUT!!!' ('Damn, is the coffee all GONE!!!')
     Email:     Telephone:  +46 13 183927
     Room:      112-1313.2, 112 vån2      Dept:        FNS-TK
     Snailmail: T Kindahl/FNS-TK; SAAB Aerospace AB; S-581 88 Linköping; SWEDEN
     Phone home:  +46 13 171107           Shoe number: 43
     Private URL:
     Homemail:    Tomas Kindahl; Skrivaregatan 11; S-586 47 Linköping; SWEDEN