LLL: definition of terms

Kyle Hayes kyle@putput.cs.wwu.edu
Mon, 31 Oct 1994 17:35:50 +0800


The following is the LaTeX source for the glossary in my final project
report.  Some of it might not be that useful but there is a lot that
could be reused.  Some of it is specific to my project and should 
probably be dropped.

I thought I would send this to the list in general instead of directly
to Mike Prince in order to stimulate a little discussion.

Best,
Kyle


--------------cut here--------------

\chapter{Definition of Terms}

This section contains the definitions of special terms used in this
document.  These definitions should not be regarded as formal.

\newcommand{\gEntry}[2]{\parbox[t]{1.1in}{\bf #1}\ \ \ \parbox[t]{4.5in}{#2}}


\gEntry{Allocation}{The decision of which resource(s) is to be used to perform
	a specific task. Allocation may be done statically (pre run-time)
	or during run-time. In either case, load balancing and other relevant
	trade-offs are made prior to or during run-time, respectively.
	Run-time allocation may also be termed global dynamic scheduling.} \\

\gEntry{Asynchronous\\Execution}{With asynchronous execution the distance between
	related sampling instants cannot be guaranteed (i.e. it may drift)
	and depends on the characteristics of the local clocks.}\\

\gEntry{Atomic\\Operations}{Atomic means indivisible, and refers to an operation
	that is either performed successfully or not performed at all.  One
	example is atomic broadcast, which is thus either successfully
	delivered to all working nodes or not performed at all. This type
	of updating may be required, for instance, when a distributed
	system is to change its mode of operation. }\\
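The all-or-nothing property can be sketched in a few lines of Python: a hypothetical \verb|atomic_update| either applies a new value to every replica or, if any replica fails, rolls every already-updated replica back. The \verb|Replica| class and its \verb|fail| flag are illustrative assumptions, not part of any real broadcast protocol.

```python
class Replica:
    """A node holding one copy of a shared value (illustrative)."""
    def __init__(self, value, fail=False):
        self.value = value
        self.fail = fail  # simulate a node that rejects the update

    def apply(self, value):
        if self.fail:
            raise IOError("node unreachable")
        self.value = value


def atomic_update(replicas, new_value):
    """Apply new_value to all replicas, or to none of them."""
    old_values = [r.value for r in replicas]
    done = []
    try:
        for r in replicas:
            r.apply(new_value)
            done.append(r)
    except IOError:
        # Roll back every replica already updated.
        for r, old in zip(done, old_values):
            r.value = old
        return False
    return True


nodes = [Replica(1), Replica(1), Replica(1, fail=True)]
ok = atomic_update(nodes, 2)
# The failed broadcast leaves every copy unchanged.
assert not ok and [n.value for n in nodes] == [1, 1, 1]
```

A real atomic broadcast must also survive the coordinator failing mid-rollback, which this sketch does not attempt to show.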

\gEntry{Availability}{A measure of the probability that a system is
	operational (functional) over a given interval. For most
	systems, the availability is simply the percentage of time the
	system is operational. For safety-critical real-time systems,
	availability is a relatively irrelevant measure.}\\
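The percentage-of-time reading can be made concrete with the steady-state formula $A = \mathit{MTBF}/(\mathit{MTBF}+\mathit{MTTR})$; this formula is standard reliability engineering, not something stated in the entry above, and the figures below are invented.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system
    is operational (MTBF = mean time between failures,
    MTTR = mean time to repair).  Standard formula, assumed here."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system failing on average every 1000 h and taking 10 h to repair:
a = availability(1000.0, 10.0)
assert abs(a - 1000.0 / 1010.0) < 1e-12   # roughly 99% available
```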


\gEntry{Benign\\Failure}{A malfunction that does not affect the correct functioning
	of the system.  The system will continue to generate correct results
	at the correct times.  If viewed as a black box, a system that
	has undergone a benign failure will appear no different from an
	otherwise identical system that has not suffered such a failure.}\\

\gEntry{Blocking\\Mechanism}{A mechanism for temporarily halting the execution of
	a process or processes that seek to enter code or access data
	whose use must be limited.  When the allowable number of
	concurrent accesses is already in use, a process wishing to access
	such protected code or data must wait until some (or all) of the
	resource is freed.  The blocking mechanism enforces this wait.}\\
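One common blocking mechanism is a counting semaphore, sketched below: at most \verb|limit| threads may be inside the protected section at once, and the rest block in the acquire call until a slot is freed. The semaphore is an example chosen here, not a construct named in the entry.

```python
import threading
import time

limit = 2
slots = threading.Semaphore(limit)   # the blocking mechanism
state_lock = threading.Lock()
current = 0   # threads currently inside the protected section
peak = 0      # highest concurrency observed

def worker():
    global current, peak
    with slots:                      # blocks while `limit` slots are taken
        with state_lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.05)             # hold the protected resource briefly
        with state_lock:
            current -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert peak <= limit   # the semaphore never admitted more than `limit`
```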
	
\gEntry{Consistency}{Consistency can be defined to mean "the same and the correct
	view of the system state". The state of a distributed system can
	be represented by the collection of distributed states. Consistency
	can be subdivided into Event Ordering, External Consistency,
	Internal Consistency and Mutual Consistency.} \\

\gEntry{Control\\Delay}{The control delay refers to the time between related
	sampling and actuation actions. Note that it corresponds to the 
	computational delay for a single rate control system. For a multirate 
	system the control delay also depends on the different rates,
	processing and communication delays, see end-to-end delay. For
	feedback control it is highly desirable that control delays are
	constant. } \\

\gEntry{Data\\Delay}{The data delay is the time between when a data-item is created,
	either by sampling or by a computation, and when it is available to
	control components in other nodes. The data delay thus includes the
	end-to-end delay.} \\

\gEntry{Data\\Age}{The age of a data-item is defined as the time between the
	instant when the data-item is created, either by sampling or by a
	computation, and the sampling instant when another component starts
	processing the data item, or should have started, unless sample
	rejection has occurred. The data-age thus reflects the age of data
	as it is about to be used.} \\

\gEntry{DCO}{[from Christer's thesis] A Data Communication Object (DCO)
	is an object that is used when two or more nodes in the
	system are communicating.  The class that describes the
	DCO is defined as a subclass of PEO, and therefore DCOs
	can also be scheduled.  The DCO performs all
	application-visible services for data transport across nodes.} \\
                        
\gEntry{Dependability}{The trustworthiness of the service delivered by a system such 
	that reliance can justifiably be placed on this service. Thus,
	dependability encompasses the more stringently defined measures 
	reliability, safety and availability (and security, which has to do 
	with protection against intentional attacks on a system).} \\

\gEntry{Dynamic\\Scheduling}{Scheduling that takes place at run-time, as
	opposed to Static Scheduling which takes place before
	run-time.} \\

\gEntry{End-to-end\\Delay}{This delay is the time interval counted from the instant
	when a process delivers a data item to the communication 
	subsystem/operating system, to the instant when the data item is 
	available for use by the receiver/receiving processes. The end-to-end 
	delays in general depend on the following four parameters: Synchronism 
	between processes; Communication delays; Processing time; and Run-time 
	system overhead.} \\

\gEntry{Error}{The manifestation of a fault in a system. Part of a system state
	which is liable to lead to a failure.} \\

\gEntry{Event}{The occurrence of a specific system state. Its validity depends
	on both time and system state.} \\

\gEntry{Event\\Controlled\\Updating}{The communication principle in which data is
	updated from producer to consumer(s) only when the data has changed, 
	here referred to as a data update event.} \\

\gEntry{Event\\Ordering}{Describes consistency in event ordering, as 
			observed by several processes. Events in other nodes
			are only observable at the times when communication 
			takes place. See Consistency.}\\

\gEntry{External\\Consistency}{This concerns data age, i.e. 
			whether or not the data item is consistent with the
			real-world item that it represents. }\\

\gEntry{Fail-safe\\System}{A system whose failures are only, or to an acceptable
	extent, benign failures.} \\

\gEntry{Fail-silent\\System}{A system whose failures are only, or to an acceptable 
	extent, omission failures. Depending on the nature of the system 
	interface considered, "omission" can mean different things.  } \\

\gEntry{Failure}{Deviation of the service delivered by a system from the specified 
	service.} \\

\gEntry{Fault}{The cause of an error; that which is intended to be avoided or tolerated.} \\

\gEntry{Fault\\Tolerance}{Methods and techniques aimed at producing a system
	which delivers a correct service in spite of faults. In other words, 
	faults are prevented from causing failures.} \\

\gEntry{Global}{In a distributed system, all attributes related to the system as a 
	whole (as opposed to the individual modules) are termed "global".} \\

\gEntry{Hardware\\Dependence}{Expresses to what extent the application can be
	ported to other hardware without being redesigned.} \\

\gEntry{Internal\\Consistency}{Describes semantic couplings between
			data objects.  See Consistency.} \\

\gEntry{Jitter}{Jitter refers to time variations in the actual start times of a
	process, as opposed to the stipulated release time. It is very
	important for sensor and actuation components that a maximum allowed 
	jitter is guaranteed.  In the periodic process model the allowed
	jitter can be indirectly specified by using the release time and the 
	deadline. Jitter depends on clock accuracy, scheduling algorithms
	and computer architecture. The terms input and output jitter refer
	to the jitter of sampling and actuation processes,
	respectively.} \\
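As a sketch, jitter can be measured as the spread of actual start times relative to the stipulated release times; the 10-unit period and start times below are invented for illustration.

```python
# Release times of a periodic process (period 10) vs. the actual
# start times observed in one trace (values invented).
release = [0.0, 10.0, 20.0, 30.0, 40.0]
actual  = [0.1, 10.4, 20.0, 30.7, 40.2]

# Offset of each actual start from its release time.
offsets = [a - r for a, r in zip(actual, release)]

# Peak-to-peak jitter: spread between the largest and smallest offset.
jitter = max(offsets) - min(offsets)

assert abs(jitter - 0.7) < 1e-9   # 0.7 time units in this trace
```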

\gEntry{Latency}{The time it takes for data to move over
	the network.} \\

\gEntry{Link}{The communication hardware of a transputer.  Specifically, the
	hardware connecting one transputer to another, allowing
	bidirectional transmission of data.  Each transputer has a
	link that connects to the link of another transputer, forming
	a point-to-point network connection.} \\

\gEntry{Local}{Refers to something to be done on a specific node, in
	general not interfering with the activities on other nodes
	(as opposed to global).} \\

\gEntry{Location\\Transparency}{This feature states that the user is not required to
	know the location of an object it addresses.  It can
	treat remote objects in the same way as local objects.} \\

\gEntry{Membership\\Agreement}{The agreement by each member of a group that all
	other members are members.  Each member of the group recognizes
	the same set of things as belonging to the group.} \\

\gEntry{Mode\\Change}{Change in the overall behavior of the program (or part
	of it), see Schedule Change.} \\

\gEntry{Mutual\\Consistency}{Consistency between copies of data used
			by several processes. Atomic update is related to
			mutual consistency.  See Consistency.} \\

\gEntry{Node}{A cluster of one or more processors that operate with a shared
	memory space.} \\

\gEntry{ORG}{[from Christer's thesis] The Object Relation Graph (ORG) defines
	the associations between the objects that "perform the
	computation" of a use case.} \\

\gEntry{Object\\Mobility}{Expresses which objects can be moved over
	the network.} \\

\gEntry{OOTI}{Ontwerpers Opleiding Technische Informatica.  A two-year course
	followed by the two graduate students from Eindhoven at
	the TUE.} \\

\gEntry{PEO}{[from Christer's thesis] A Parallel Executable Object (PEO)
	is an object that can be scheduled, i.e., an object that
	is visible in the precedence graphs.} \\

\gEntry{Periodic\\Process\\Model}{Parameterized by $\{S, R, C, D, T\}$ where:
\begin{itemize}
	\item S is the starting time for periodic execution. S can be used to
		specify synchronous execution.

	\item R is the release time which specifies the earliest allowed start
		time of the process each period. R can be used with D to
		specify low jitter on actuation.

	\item C is the execution time of the process each period, and may refer
		to the maximum execution time.

	\item D and T are the deadline and period of the process respectively.
\end{itemize}
} \\
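A minimal sketch of the model as a data record, with a derived check that a start offset respects the release time and still leaves room to meet the deadline. The field names mirror the parameters above; the \verb|feasible_start| helper and all concrete values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PeriodicProcess:
    S: float  # starting time of periodic execution
    R: float  # release time: earliest allowed start each period
    C: float  # (maximum) execution time per period
    D: float  # deadline, relative to the period start
    T: float  # period

    def feasible_start(self, t):
        """Can the process start at offset t within its period and
        still meet its deadline?  It must start no earlier than R,
        and no later than D - C so that C time units still fit."""
        return self.R <= t <= self.D - self.C

p = PeriodicProcess(S=0.0, R=2.0, C=1.0, D=5.0, T=10.0)
assert p.feasible_start(2.0)       # exactly at the release time
assert p.feasible_start(4.0)       # latest start: 4 + 1 = 5 = D
assert not p.feasible_start(1.0)   # before the release time
assert not p.feasible_start(4.5)   # would overrun the deadline
```

The window $[R,\, D - C]$ is exactly how the entry's R and D "specify low jitter on actuation": narrowing it bounds the start-time variation.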

\gEntry{Periodic\\Updating}{The communication principle in which data is periodically 
	communicated from producer to receiver(s), regardless of whether data 
	has changed or not.} \\
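The two communication principles, this entry and Event Controlled Updating above, can be contrasted in a small sketch; the sample values are invented for illustration.

```python
# Successive producer readings of a slowly changing value (invented).
samples = [5, 5, 5, 7, 7, 9]

# Periodic updating: one message every period, changed or not.
periodic_msgs = list(samples)

# Event-controlled updating: a message only on a data update event,
# i.e. when the value actually changes.
event_msgs = []
last = object()          # sentinel: nothing has been sent yet
for s in samples:
    if s != last:
        event_msgs.append(s)
        last = s

assert periodic_msgs == [5, 5, 5, 7, 7, 9]
assert event_msgs == [5, 7, 9]   # fewer messages for slow-changing data
```

The trade-off: event-controlled updating saves bandwidth, while periodic updating gives the receiver a bounded data age even when messages are lost.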

\gEntry{Permanent\\Fault}{A fault which, having once occurred, requires manual 
	intervention (such as the replacement of a component) to make it 
	disappear.} \\

\gEntry{Precedence\\Graph}{[from Christer's thesis] A precedence graph is
	a directed acyclic graph which defines the execution order
	between the parallel objects. The precedence graph not
	only defines the causal order, but also the temporal
	characteristics of the computation (i.e., the periodicity,
	release times and deadlines).} \\

\gEntry{Preemption}{Temporarily stopping an action (before it has
	completed) in order to switch to another activity.} \\

\gEntry{Recovery}{The process of resuming normal operation following the
	occurrence of a fault.} \\

\gEntry{Reliability}{A measure of the probability that a system will not fail in
	a time interval of a specified length, given that the system was 
	fault-free at the start of the interval.} \\

\gEntry{Replication}{Technique of copying data to different nodes. This
	technique is often used to achieve lower communication
	intensity, for instance when data is read more frequently
	than it is written.} \\

\gEntry{Sample\\Rejection}{Sample rejection refers to the case where more than one 
	sample is obtained by a control component in between two 
	consecutive executions of the component. The reason is typically
	that the sampling and control components are not synchronized and
	that the end-to-end delays are time-varying. Sample rejection may
	then occur more or less frequently. } \\

\gEntry{Scheduling}{Scheduling is concerned with the determination of when actions
	are to take place according to a specific scheduling policy. When
	one resource is shared by a number of activities the scheduler must 
	determine how the sharing (multiplexing) is to be done. The policy 
	specifies the aim of scheduling (e.g. meeting deadlines or high
	average throughput) and the rules for it. Implementing the
	policy may require a number of low-level mechanisms. Further
	characteristics of scheduling policies/algorithms include when the 
	scheduling is done, during run-time (dynamic) or pre run-time
	(static), where scheduling decisions are taken, and whether only
	local or global actions are considered.
  	Thus, for example, in global static scheduling, the actual
	scheduling takes place pre run-time and all relevant system
	resources are considered. The actual algorithms may be centralized
	or decentralized. A scheduling policy is only valid for one or
	more specific process models.} \\
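As a sketch of one dynamic policy, earliest-deadline-first (EDF) picks, at every decision point, the ready task with the nearest deadline. EDF is a standard policy used here as an illustration, not one named in this entry, and the task set is invented.

```python
import heapq

# Ready tasks as (deadline, name) pairs; a min-heap keyed on the
# deadline implements the earliest-deadline-first dynamic policy.
ready = [(30, "log"), (10, "actuate"), (20, "sample")]
heapq.heapify(ready)

order = []
while ready:
    deadline, name = heapq.heappop(ready)   # nearest deadline first
    order.append(name)

assert order == ["actuate", "sample", "log"]
```

A static policy would instead fix this order in a table before run-time; the dynamic version re-evaluates it whenever the ready set changes.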

\gEntry{Site}{A synonym for Node.  See Node.} \\

\gEntry{Skew}{Skew is used to denote the distance in time between sampling
	instants or period starts belonging to different control loops
	which are part of a multirate control system. In synchronous
	execution the skew is zero or a non-zero constant. It is also
	useful to define a skew in asynchronous execution; the skew is
	then a time-varying function. To describe synchronism in a
	multirate system the comparison can be made each major period.} \\

\gEntry{Static\\Scheduling (local/global)}{Static Scheduling takes place
	before the application is run.  In case of global static
	scheduling, all scheduling is done before run-time.  In
	case of local static scheduling, communication between the
	nodes will be scheduled during run-time.  Remark: static
	scheduling does not mean that there can be no schedule
	change invoked during run-time.} \\

\gEntry{Synchronization}{Synchronous means simultaneous. A synchronization
	mechanism can therefore be interpreted as a mechanism which ensures
	that events occur simultaneously according to a common time base.
	This is contrasted with the use of the word in classical
	(non real-time) distributed systems, where synchronism refers to
	logical event ordering, as exemplified by mutual exclusion,
	logical clocks, rotating privileges, etc. Synchronization is based
	on message exchange and/or a global clock.} \\

\gEntry{Synchronous\\Execution}{A number of periodic processes execute synchronously
	if the distance in time between related sampling instants is always 
	smaller than a known synchronization accuracy constant. The constant 
	distance between related sampling instants is called skew.} \\

\gEntry{Time\\Consistency}{See External Consistency.} \\

\gEntry{Time\\Deterministic}{A system quality which, to a specified extent,
	guarantees that certain timing requirements are always met. In a
	fully time-deterministic system, it is thus known in advance
	precisely what the system will do at any point in time. A non-trivial
	system can never be fully time-deterministic, but time-determinism is
	nevertheless a useful concept.} \\

\gEntry{Transient\\Fault}{A fault which has, or can be made to have, a limited
	duration. Examples are bit flips in memory and in communication 
	media.} \\

\gEntry{Use\\Case}{[from Christer's thesis] A use case is the collection of
	computational steps which take place between a stimulus
	from the environment and a response given to the
	environment.  The use case concept is used both in the
	analysis phase, where only the computation is defined with
	corresponding temporal requirements, and in the design
	phase.  In the design phase the use case is defined by:
\begin{itemize}
\item its period time

\item a precedence graph

\item an object relation graph.
\end{itemize}} \\

\gEntry{Vacant\\Sampling}{Vacant sampling refers to the case where no sample is
	obtained by a control component in between two consecutive
	executions of the component. The reason is typically that the
	sampling and control component are not synchronized and that
	end-to-end delays are time-varying. This means that vacant sampling
	may occur more or less frequently. } \\
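Both vacant sampling and sample rejection (defined above) fall out of a small simulation of an unsynchronized sampler and consumer; the consumer period and the irregular arrival times, which stand in for time-varying end-to-end delays, are invented for illustration.

```python
# The consumer runs every 10 time units; samples arrive at irregular
# times because of time-varying end-to-end delays (times invented).
arrivals = [3, 8, 25, 33, 41, 47, 49]
consume_times = [10, 20, 30, 40, 50]

vacant = rejected = 0
prev = 0
for t in consume_times:
    fresh = [a for a in arrivals if prev < a <= t]
    if not fresh:
        vacant += 1                  # no new sample: vacant sampling
    elif len(fresh) > 1:
        rejected += len(fresh) - 1   # keep only the newest: rejection
    prev = t

assert vacant == 1      # the run ending at t=20 saw no sample
assert rejected == 3    # the runs ending at t=10 and t=50 dropped extras
```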