Could we use multiple stacks?

Sean 'Captain Napalm' Conner
Thu, 23 May 1996 03:41:33 -0400 (EDT)

On some network somewhere in cyberspace, Francois-Rene Rideau transmitted:
> First of all, thanks to cooperative multitasking,
> we can require the threads to do all the cleanup job needed,
> so all of our various stack management tricks can be tried out
> on the same system.
  Personally, I still don't see why you love the cooperative multitasking
model so much.  I think it causes more problems than it solves: CPU
starvation, it's harder to program under (and I've done work under several
cooperative multitasking systems and I've hated them all), and you've simply
shifted the burden of saving state from the system code to the user code.

  But I've already stated that before.  Oh well ...

> Remember:
> * interrupt-driven threads have their own stack/heaps anyway,
> * usual code is required to checkpoint regularly to allow real-time
>  cooperative threading

  I'm sorry, but this is an oxymoron.  If you have "Real-Time" you have a
preemptive system.

> * checkpointing is costly only if a "switch thread on next checkpoint"
>  flag is set.
> * so that tight loops can be optimized that don't suffer from checkpointing
>  overhead, either
>  = the loop can be unrolled several time, to minimize job vs overhead ratio,

  But the check is per user process.  Bloat.

>  = the loop can be cut into pieces of given length, with checkpoints
>   in between, so that each piece fits checkpointing requirements, or

  Which makes it harder for the user to write code.  The burden of complexity
is shifted back to the user.

>  = a "recovery" routine other than "let me run until next checkpoint"
>   can be declared to the semi-preemptive scheduler,
  Semi-preemptive?  Trying to have your cake and eat it too?

>   so that the program can reach a safe state from anywhere in the tight loop.
  Say what?  Under what conditions will a program have to reach a safe state
in a tight loop?

> Note: it's still time to add floating point management later.
	And here I thought Tunes was a Useful, not Expedient, System.

  If you don't plan for it now, it'll only be a hack added on later ...

> Now replying to both Eric and Sean...
> >>>:Fare
> >>:Eric
> >:spc
> [Garbage collecting the Dictionary/Heap]
> >   But traditionally, garbage collection has been explicit with FORGET and/or
> > MARKER and not with any automagical system to determine which words are no
> > longer used.  Not all Forth systems allowed nameless words ( :NONAME is a
> > CORE EXTENSION word and it's value is pretty limited (as you have to keep
> > track of the generated xt anyway).
> >
> Garbage collection is just NEEDED in a dynamical system
> (like for instance a distributed multi-user multi-tasking
> interactive development system).
> You can't escape it because there's NO WAY to manually or statically
> determining when an arbitrary word will no longer be used.

  That's a moot point.  Under ANS Forth, you CANNOT delete arbitrary words
without having a greater effect than you think.  For instance, if I do the
following:

		: foo	... ;
		: bar	... ;
		: baz	... ;
		: snafu ... ;

  And I then do:

		forget foo

  I no longer have bar, baz or snafu.  EVERYTHING I've added since foo is
gone when I 'forget foo'.  That's due to the stack-like nature of adding
words.	And when I say everything, I mean everything (including new
vocabularies I may have created).

> >   Correct me if I'm wrong, but all a continuation is is state information
> > such that you can restart the CPU with a saved state.  In a stack based
> > system, the only state you have is the stack pointers, so the fewer stacks
> > you have, the less state you have to save/restore.
> >
> You're wrong **in the general case**, that TUNES must implement:
> you must save not only the stack pointer, but the whole stack.
>    If you have coarse-grained threads, then you can reserve a big-enough
> stack for each of them, and all you need is save the pointer.
>    But if you allow fine-grained threads, and/or continuations that can be
> entered several times, not just once, then saving a stack pointer just
> won't work.
  Uh, Fare ... "coarse-grained threads"?  What the hell are you aiming for?  A
separate thread for each subroutine?  Multiple executions of a continuation?

  Please explain.  And with examples.  I still don't trust theory that much [1].

> >   But stacks do their own GC.  Values are pushed onto it for use, and when
> > no longer needed, popped off.  What's the problem (besides keeping track of
> > dynamically allocated globs of memory)?
> >
> The problem is that the stack may contain roots for GC'ed objects.
> Surely you should read about the implementation of LISP/Scheme/ML systems...
  For some reason Fare, you remind me of one of the professors at school, who
specializes in "Software Engineering".

  He said, basically, that the stack operation of PUSH creates (yes, creates)
an entirely NEW stack, which is a COPY of the old stack, but with the element
added to the top.  The old stack is destroyed.	The operation POP creates a
new stack, and copies the old stack minus the top.  The old stack is then
destroyed.

  Okay, in theory, one could view a stack that way.

  But one would have to be insane to actually implement such a stack.

  Oh, and this professor can't program.  Doesn't even like programming.  Yet
he's an "expert" in software engineering.

  Like I said, for some reason, he seems to remind me of you ...

> >   The problem comes when you try to be all things to all people by allowing
> > no limits whatsoever, so you get code that does:
> >
> >		while(morestuff)
> >		{
> >		  somep = realloc(newsize,somep);
> >		  /* manipulate somep */
> >		}
> >
> >   Where realloc() (or equivilent) will allocate memory, free memory,
> > increase a block, reduce a block and otherwise fragment memory till there's
> > nothing left but to either compact it or restart.
>    I can't see why this would be any problem.
> Surely, higher-level constructs should be used,
> that do it accurately without your bothering.
  Well, fine.  How about Java:

	String s;
	String t = "";
	while((s = inputstream.readline()) != null)
	  t += s;

  Internally (an approximation in C), the code is doing:

	char *t = malloc(1);
	*t = 0;

	for (;;)
	{
	  char *s;
	  char *tmp;

	  s = readline(inputstream);
	  if (s == NULL) break;
	  tmp = malloc(strlen(t) + strlen(s) + 1);
	  memcpy(tmp,t,strlen(t));
	  memcpy(tmp + strlen(t),s,strlen(s) + 1);
	  free(t);
	  t = tmp;
	}

  But that's still beside the point.  The point is, you're fragmenting memory
quite badly.  Yes, I'm not losing memory, but great, the overhead of keeping
the free list may be greater than the amount of free space I have.

  One of the (many) projects I wanted to do was test the memory overhead on
the Amiga memory management routines.  At least with the Amiga I can test it
fairly well, since the structures are laid out and I can follow them in code
to see how much overhead exists.  Something like: allocate a 256K heap, then do
N initial allocations of random size (say, somewhere between 4 bytes and 1K in
a bell curve distribution), then X loops of allocations and deallocations.

  Once I've stopped, I know how much I've allocated.  I can then walk the heap
free list to see how much I have free.  I can guarantee you that

	256k - amount allocated != amount free.

  The difference is how much I'm wasting due to fragmentation.

  And I can assure you that I'll keep very strict bookkeeping, so as not to
lose any memory to a stray pointer.

  GC isn't ONLY about freeing memory.

> >   Imposing limits is somethings a GOOD thing to do.  Some of us don't like
> > GNU because of their not limits to anything anytime philosophy (and their
> > code tends to be bloated anyway ... ).
> I don't see how that no-limit can limit you,

  The longer a Unix process runs (and most of the GNU stuff is Unix based.
Okay, ALL of the GNU stuff starts out Unix based, then is ported to other
systems), the larger its core image gets.  Basically, allocations just extend
the memory space of the process.  When memory is freed, the memory is still
"allocated" to the process.  Sure, when you malloc(), it's reused, but if
there isn't a large enough free space, the memory space is again, increased.

  Over time, the "No Limits" policy slowly exhausts memory.

  I've NEVER seen a Unix up-time of over 100 days (unless something is
corrupted and you see something silly, like 9,398 days and you KNOW the
machine has only been up for a month).

  But that might have more to do with the allocation scheme used in Unix.
Still, even if you could free pages back to the system (and I'm sure there is
a way, only it's system dependent), you still have the fragmentation problem.

  And I don't care how high or low you code.  If you allocate memory, you'll
have fragmentation.

> unless you make code bloat part of it, which is unfair,
> because removing the limit doesn't cause much bloat per se,
> while managing the limits does cause a lot of it.
>    Allowing limits, perhaps giving default limits in some cases,
> is a good thing. Imposing limits is definitely a NO-NO.

  Then under Tunes, I can edit a file with one terabyte lines, each line being
a terabyte in size?  On a 32-bit system?  Cool!  Because anything less than
that is imposing a limit!


  -spc (Call me a pragmatist if you must, but I tend to view myself as a
	realist ... )

[1]	An example of theory and the Real World (TM) colliding.

	The Theory of Gravity (TOG) states that two objects, regardless of
	mass, accelerate at the same rate towards the earth (9.8m/s^2).

	So, in theory, a rock and a piece of tissue, when dropped from the
	same height, will hit the ground at the same time.

	Well, I don't have a rock, so this plastic toy brain I have will have
	to substitute for the rock.  But I have a piece of tissue.  So, if I
	drop them from the same height, they should hit the floor at the same
	time.

	So, I'll hold them above my head an arm's length, and drop them.

	Well, I actually did it three times to make sure.

	The toy brain hit the ground first each time.  Less than a second.
	The tissue took about 2, 2.5 seconds to hit the floor.	TOG sounds
	nice, but the Real World (TM) often intrudes upon a perfectly sound
	theory [2].

[2]	I tried the toy brain and a pencil (the pencil having more mass than
	the tissue, but less than the toy brain) and they hit the ground more
	or less at the same time (allowing for experimental differences).  So,
	there's hope that TOG may be sound, but more work has to be done [3].

[3]	I realize that a vacuum chamber (or some place equally devoid of air
	friction) is required for TOG (as it's air resistance that is mucking
	up the results) but again, that's the Real World (TM) for you.