<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>1 University of Economics and Computer Science Vistula in Warsaw 2 Institute of Informatics, The University of Warsaw</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Informatics, The University of Warsaw</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2012</year>
      </pub-date>
      <volume>119</volume>
      <fpage>3</fpage>
      <lpage>4</lpage>
      <abstract>
        <p>A distributed computer system is a set of autonomous computers connected by a network of transmission channels and equipped with distributed system software. It is a work environment for a class of tasks with a common objective; the network is a communication infrastructure only. The main features of the distributed systems dealt with here are: absence of a clock common to all computers (no global time provided by an external service) and absence of physical memory common to all computers (communication by message exchange over the network). A schematic structure of a distributed system with distributed shared memory (DSM) is illustrated in Fig. 1 ([Cz 2012]).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Mutual exclusion in a distributed system</title>
      <p>Fig. 1. Schematic structure of a distributed system with distributed shared memory.</p>
      <p>Assumptions and denotations:
• the computers work in parallel and are numbered 1, 2, …, n;
• reading and writing is governed by the memory manager of each computer;
• each message is accompanied by a global timestamp - a pair of computer number and compensated local timestamp;
• there is one critical section assuring mutually exclusive usage of a resource by the computers;
• no hardware or software failure happens during performance of the mutual exclusion mechanism;
• computer number i keeps a vector ri = [ri1, ri2, …, rin] of variables rij allocated in its physical memory; it stores a global timestamp in the component rii when requesting the critical section;
• computer number i stores in rij a copy of the global timestamp stored by computer number j in rjj when the latter requested the critical section;
• initially all variables rij contain ∞;
• min(ri) denotes the least value of the components in ri.</p>
      <p>Protocol of the mutually exclusive usage of a resource by computer number i:
set of states of computer number i: Si = {Wi, Bi, Yi, Ri, Pi}
state of computer number i: Qi ∈ Si
transition Qi → Q'i: given by the transition graph of the protocol
set of global states: S ⊆ S1 × S2 × … × Sn satisfying: if [Q1, Q2, …, Qn] ∈ S then ¬∃ i,j: (i ≠ j ∧ Qi = Ri ∧ Qj = Rj)</p>
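      <p>The request-vector discipline above can be sketched in a toy simulation. This is an illustration only, not the protocol's actual transition graph: broadcast of a request is modelled as direct assignment to the other computers' vectors, and the „compensated local timestamp” is supplied by the caller.</p>
      <preformat><![CDATA[
```python
# Toy simulation of the request vectors r_i (an illustrative sketch,
# not the source's protocol; see the assumptions in the text above).
INF = (float("inf"), -1)             # stands for the initial value ∞

class Computer:
    """One computer keeping its vector r_i = [r_i1, ..., r_in]."""
    def __init__(self, i, n):
        self.i = i
        self.r = [INF] * n

def request_cs(computers, i, clock):
    """Computer i stores the global timestamp (clock, i) in r_ii and
    sends it to every other computer j, which stores it in r_ji."""
    stamp = (clock, i)
    for c in computers:
        c.r[i] = stamp               # message delivery modelled as assignment

def may_enter(computers, i):
    """Computer i enters the critical section when its own request is
    the least component of its vector: r_ii = min(r_i)."""
    me = computers[i]
    return me.r[i] != INF and me.r[i] == min(me.r)

def release_cs(computers, i):
    for c in computers:
        c.r[i] = INF                 # withdraw the request everywhere

computers = [Computer(i, 3) for i in range(3)]
request_cs(computers, 0, clock=5)
request_cs(computers, 1, clock=3)    # earlier (smaller) timestamp wins
print(may_enter(computers, 0))       # False
print(may_enter(computers, 1))       # True
release_cs(computers, 1)
print(may_enter(computers, 0))       # True
```
]]></preformat>
      <p>Ties are impossible here because a global timestamp is the pair (time, computer number), so min(ri) is always attained by a unique component.</p>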
    </sec>
    <sec id="sec-2">
      <title>Global states of the system</title>
      <p>state of the whole system: Q = [Q1, Q2, …, Qn]
initial state: [W1, W2, …, Wn] with rij = ∞ in each local state Wi
transition Q → Q': there exist Qi ∈ Si and Q'i ∈ Si with Qi → Q'i, and if ¬(Qj → Q'j) then Qj = Q'j</p>
      <p>Consequences:
• a message directed to a group of receivers is delivered at different moments; hence memory locations allotted to this message may hold different content at a time;
• reading a memory location may fetch content different from that assigned to this location by its latest (by global time) actualization (writing);
• suspension of the system activity for the period of any memory access would solve the above problems, thus achieving data consistency, but is unacceptable by reason of a dramatic decrease of system performance; locking of access operations is too restrictive;
• hence the need of a compromise between data consistency and effectiveness;
• weakening of data consistency is admissible when it does not violate requirements of the system's main objectives; example: frequent actualization (writing) of stocked goods of a chain-store network may be seen (read) differently in different countries; data consistency maintenance would be unnecessary effort;
• degrees of resignation from consistency to enhance effectiveness bring on the so-called models of consistency;
• consistency models are usually explained informally, not always univocally; their precise understanding by users of DSM is one of the disadvantages of DSM; hence the need of their formal description;
• but a formal description of consistency models must be based on a formal model of computing, thus: (1) on actions and states as primary units of computing, (2) on an interleaving or non-interleaving („true concurrency”) model of computing;
• common descriptions of consistency models in DSM admit actions read and write from/to memory (fetch and update) as primary units and, informally, a non-interleaving computing model;
• but read/write actions are too coarse to formally express memory consistency in the interleaving model along with retaining the efficiency provided by the true concurrency model;
• a partial order of read/write actions is not always linearizable, i.e. their arrangement in a sequence retaining the result of the concurrent computation is not always possible.</p>
      <sec id="sec-2-1">
        <title>Example of non-linearizability</title>
        <p>Notation: w(z,α)j, r(z,α)j - operations of writing and reading value α to/from variable (memory cell) z by computer j. Initially x = 0, y = 0. Scenario of a not linearizable computing: computer 1 performs w(x,3)1 and then r(y,0)1; computer 2 performs w(y,2)2 and then r(x,0)2.
[Diagram: positions of the r/w operations relative to global time versus the positions ensuing from the values read by the processes - the two orderings contradict each other.]</p>
        <p>Actions w(x,3)1, r(y,0)1, w(y,2)2, r(x,0)2 cannot be interleaved in one sequence so as to retain the result of the concurrent computation. A scheduling protocol for linearization of accesses to DSM is thus not always possible: serial equivalence to each computing scenario is impossible. This is the difficulty with a formal description of consistency issues within the „true concurrency” model; expressions like „seen by processes”, etc., are intuitive only. Instead of the time-extensive actions read/write, let us take the atomic (timeless) events of their beginnings and ends: w̄(z,α)j, w(z,α)j, r̄(z,α)j, r(z,α)j (the overbar marking the beginning); this finer granularity allows for applying the interleaving model. Assumption: a process completes reading and writing a cell with the same value as commenced; another process may change this value between these events.</p>
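        <p>The impossibility claimed above can be checked mechanically. The sketch below (an illustration with an assumed event encoding, not part of the source) enumerates all interleavings of the two per-computer sequences and verifies that none lets every read return the most recent write:</p>
        <preformat><![CDATA[
```python
from itertools import permutations

# One operation per tuple: (kind, variable, value, computer).
P1 = [("w", "x", 3, 1), ("r", "y", 0, 1)]   # computer 1: w(x,3)1 then r(y,0)1
P2 = [("w", "y", 2, 2), ("r", "x", 0, 2)]   # computer 2: w(y,2)2 then r(x,0)2

def respects_program_order(seq, program):
    """The events of the program's computer must appear in seq
    exactly in program order."""
    j = program[0][3]
    return [op for op in seq if op[3] == j] == program

def legal(seq):
    """Replay seq against one shared memory; every read must return
    the most recent write (x = y = 0 initially)."""
    mem = {"x": 0, "y": 0}
    for kind, var, val, _ in seq:
        if kind == "w":
            mem[var] = val
        elif mem[var] != val:
            return False
    return True

witnesses = [s for s in permutations(P1 + P2)
             if respects_program_order(s, P1)
             and respects_program_order(s, P2)
             and legal(s)]
print(witnesses)                            # [] - no linearization exists
```
]]></preformat>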
      </sec>
      <sec id="sec-2-2">
        <title>Strict consistency</title>
        <p>Informally: a value read from any cell (variable) is either the initial value or the one written to it prior to the reading, and between these two operations no other writing to this cell occurred. Note: „prior” is undefined, since there is no global system clock. An unrealistic model!</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Denotations:</title>
      <p>V - set of variables used in programs with DSM
D - set of values the variables may assume</p>
    </sec>
    <sec id="sec-4">
      <title>Formal definition of strict consistency</title>
      <p>R - set of events of the form r̄(z,a)j and r(z,a)j
W - set of events of the form w̄(z,a)j and w(z,a)j
R* - set of all finite sequences of elements from R
W* - set of all finite sequences of elements from W
IL = R* # W* - union of all interleavings of sequences from R* and W*
Let Q = q1q2…qn ∈ IL and qi = r(x,a)f ∈ R (i = 1, 2, …, n). Q is strictly consistent iff either qi is the first such event in the interleaving Q, or there exists qk = w(x,a)g ∈ W where k &lt; i and there is no ql = w(x,b)h ∈ W such that k &lt; l &lt; i with b ≠ a.</p>
      <p>DSM is strictly consistent iff every interleaving generated by the DSM protocols is strictly consistent.
Example of a not strictly consistent interleaving: w(x,0)2 w(x,1)3 r(x,0)1 r(x,1)2</p>
    </sec>
    <sec id="sec-6">
      <title>Consistency models</title>
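      <p>The definition, restricted for brevity to the end events r(z,a)j and w(z,a)j, can be rendered as a small checker (a sketch; the event encoding is an assumption of this illustration, not the source's):</p>
      <preformat><![CDATA[
```python
# An event is encoded as (kind, variable, value, computer).
def strictly_consistent(interleaving):
    last = {}                        # latest value written to each variable
    for kind, var, val, _ in interleaving:
        if kind == "w":
            last[var] = val
        elif var in last and last[var] != val:
            return False             # the read missed the most recent write
    return True                      # a read before any write returns the initial value

# the interleaving from the example: w(x,0)2 w(x,1)3 r(x,0)1 r(x,1)2
print(strictly_consistent(
    [("w", "x", 0, 2), ("w", "x", 1, 3),
     ("r", "x", 0, 1), ("r", "x", 1, 2)]))      # False
```
]]></preformat>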
      <sec id="sec-6-1">
        <title>Memory coherence [D-S-B 1988]</title>
        <p>S = {P1, P2, …, PN} - system of sequential programs running in computers numbered 1, 2, …, N with DSM;
Vj - set of variables of program Pj allocated in the local memory of computer number j;
Dj - set of values the variables may assume;
σj: Vj → Dj - function being the state of the local memory of computer number j;
V = V1 ∪ V2 ∪ … ∪ VN
D = D1 ∪ D2 ∪ … ∪ DN</p>
      </sec>
      <sec id="sec-6-2">
        <title>Sequential consistency [La 1979]</title>
        <p>Informally: (1) partial order of read/write operations always linearizable; (2) order of events
in each interleaving is identical with order of their occurrences during execution in every
individual program; (3) all locations of the same variable in DSM contain identical value
„seen” by every computer after its actualization (memory coherence).</p>
        <p>Let Qj be the subsequence of an interleaving Q = q1q2…qn ∈ IL resulting from removal of the events not issued by program Pj.</p>
        <p>Q is sequentially consistent iff for every j = 1, 2, …, N the events in Qj occur in the order determined by the programmer of Pj.</p>
        <p>DSM is sequentially consistent iff every interleaving generated by the DSM protocols is
sequentially consistent and DSM is coherent.</p>
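        <p>The two conditions can be sketched as a checker: the events of every Pj must occur in the interleaving in program order, and the reads must be legal against a single coherent memory (the event encoding is an assumption of this sketch, not the source's):</p>
        <preformat><![CDATA[
```python
# An event is encoded as (kind, variable, value, computer).
def in_program_order(interleaving, program, j):
    """Program order of P_j preserved in the interleaving."""
    return [ev for ev in interleaving if ev[3] == j] == program

def reads_legal(interleaving):
    mem = {}                                  # one coherent value per variable
    for kind, var, val, _ in interleaving:
        if kind == "w":
            mem[var] = val
        elif var in mem and mem[var] != val:
            return False
    return True

def sequentially_consistent(interleaving, programs):
    return all(in_program_order(interleaving, p, j)
               for j, p in programs.items()) and reads_legal(interleaving)

programs = {1: [("w", "x", 1, 1), ("r", "y", 2, 1)],
            2: [("w", "y", 2, 2), ("r", "x", 1, 2)]}
run = [("w", "x", 1, 1), ("w", "y", 2, 2),
       ("r", "y", 2, 1), ("r", "x", 1, 2)]
print(sequentially_consistent(run, programs))     # True
```
]]></preformat>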
      </sec>
      <sec id="sec-6-3">
        <title>Causal consistency [H-A 1990]</title>
        <p>Informally: causal consistency requires that every two causally dependent write actions be „seen” (read out) by every process in the same order. That is, if events qi, qj ∈ W, where qi is a cause of the effect qj, are „seen” by events qk, ql ∈ R in the same process, then qk must precede ql.</p>
        <p>Let RW = R ∪ W and let ⇝ ⊆ RW × RW denote a causality relation defined by means of the following primary relations:
(1) p →process q iff p = q or p occurs before q in the same process;
(2) p →reading q iff event q ∈ R terminates reading a certain value α of a variable x assigned to x by a writing operation terminating with event p ∈ W;
(3) p →writing q iff event p ∈ R terminates reading a certain value α of a variable x needed for computing a value β = f(α) assigned afterwards to a variable y by a writing operation terminating with event q ∈ W;
(4) events p and q in (2) and (3) may occur in the same or in different processes; they are called cause and effect respectively.</p>
        <p>The causality relation ⇝ is the least relation in the set RW satisfying:
(i) if p →process q or p →reading q or p →writing q then p ⇝ q;
(ii) if p ⇝ q and q ⇝ r then p ⇝ r.
Events p, q are in the relation of cause and effect if p ⇝ q.</p>
        <p>Events p, q are independent (concurrent), written q || p, if neither p ⇝ q nor q ⇝ p.
An interleaving Q = q1q2…qn ∈ IL is causally consistent wrt. the relation ⇝ iff for any two elements qi, qj ∈ W with qi ⇝ qj the following holds: if there are elements qk, ql ∈ R, both belonging to the same process, such that qi →reading qk and qj →reading ql, then qk →process ql. If for some qi, qj ∈ W with qi ⇝ qj the reverse succession ql →process qk holds, then Q is causally inconsistent.
DSM is causally consistent iff every interleaving generated by the DSM protocols is causally consistent.
Causal consistency and inconsistency may be represented by the diagrams in Fig. 3.</p>
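        <p>A hypothetical illustration of the reader-side condition: when the write of x = 1 is a cause of the write of y = 2, a process reading both written values must read the cause's value first. The sketch below assumes each value is written only once, so a (variable, value) pair identifies its write; the encoding is mine, not the source's.</p>
        <preformat><![CDATA[
```python
def sees_in_causal_order(reads, cause, effect):
    """reads: ordered (variable, value) pairs read by one process;
    cause, effect: the values of two causally dependent writes."""
    if cause in reads and effect in reads:
        return reads.index(cause) < reads.index(effect)
    return True                      # a process need not read both values

cause, effect = ("x", 1), ("y", 2)   # w(x,1) is a cause of w(y,2)
print(sees_in_causal_order([("x", 1), ("y", 2)], cause, effect))  # True
print(sees_in_causal_order([("y", 2), ("x", 1)], cause, effect))  # False
```
]]></preformat>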
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Fig. 3. Causal consistency and causal inconsistency</title>
    </sec>
    <sec id="sec-8">
      <title>PRAM consistency</title>
      <sec id="sec-8-1">
        <title>PRAM (Pipelined Random Access Memory) consistency [Lip-Sa 1988]</title>
        <p>Informally: this model is weaker than causal consistency: if some values are written by the same process in a certain order and read („seen”) by another process, then the reading must take place in the same order as the writing.
The formal definition is similar to that of causal consistency: the causal dependence qi ⇝ qj should be replaced with qi →process qj only.</p>
        <p>Thus PRAM consistency and inconsistency may be represented by diagrams in Fig. 4.</p>
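        <p>The weakened condition can be sketched directly: only the writes of one and the same process must be read by every other process in that writer's program order (encoding as before; each value assumed written once - both assumptions of this sketch):</p>
        <preformat><![CDATA[
```python
def pram_consistent(writes_by_proc, reads_by_proc):
    """writes_by_proc: {j: ordered (var, value) writes of process j};
    reads_by_proc: {k: ordered (var, value) reads of process k}."""
    for writes in writes_by_proc.values():    # program order of one writer
        for reads in reads_by_proc.values():  # view of one reader
            positions = [reads.index(w) for w in writes if w in reads]
            if positions != sorted(positions):
                return False                  # writer's order not preserved
    return True

writes = {1: [("x", 1), ("x", 2)]}            # process 1 writes x:=1 then x:=2
print(pram_consistent(writes, {2: [("x", 1), ("x", 2)]}))   # True
print(pram_consistent(writes, {2: [("x", 2), ("x", 1)]}))   # False
```
]]></preformat>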
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Fig. 4. PRAM consistency and PRAM inconsistency</title>
    </sec>
    <sec id="sec-10">
      <title>Weak consistency and other models</title>
      <sec id="sec-10-1">
        <title>Weak consistency [D-S-B 1986]</title>
        <p>While the aforesaid models of data consistency ensure access to DSM by respective protocols supporting a chosen model without the user's intervention, efficiency of the system may sometimes justify such intervention – at the price of transparency, one of the desirable features of distributed systems. The weak consistency model provides users with so-called synchronization variables, counterparts of semaphores in centralized systems. Access to them takes place as in the sequential consistency model – in any process they occur in the order specified by the program evoking this process. The users are provided with means to create critical sections applying e.g. the protocol illustrated in Fig. 2. During execution of such a critical section no other process may access the data protected by it until their actualization is finished. No data access is permitted until all previous accesses to synchronization variables are completed.</p>
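        <p>The rule that no data access proceeds until all previous writes are completed at a synchronization access can be rendered as a toy replicated memory. The class name, the replica model and the propagate-on-sync mechanics are assumptions of this sketch, not the source's construction:</p>
        <preformat><![CDATA[
```python
class WeakDSM:
    """Toy weakly consistent DSM: ordinary writes stay in the writer's
    replica until the process accesses a synchronization variable,
    which completes (propagates) all of its previous writes."""
    def __init__(self, nprocs):
        self.replicas = [dict() for _ in range(nprocs)]  # one copy per process
        self.pending = [list() for _ in range(nprocs)]   # not yet propagated

    def write(self, p, var, val):
        self.replicas[p][var] = val
        self.pending[p].append((var, val))

    def read(self, p, var):
        return self.replicas[p].get(var, 0)              # 0 as the initial value

    def sync(self, p):
        """Synchronization access by process p: all of p's previous
        writes are completed before anything else proceeds."""
        for var, val in self.pending[p]:
            for replica in self.replicas:
                replica[var] = val
        self.pending[p] = []

dsm = WeakDSM(2)
dsm.write(0, "x", 7)
print(dsm.read(1, "x"))    # 0 - not yet synchronized
dsm.sync(0)
print(dsm.read(1, "x"))    # 7 - visible after the synchronization access
```
]]></preformat>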
        <p>The sample of five models of data (or memory) consistency mentioned here is probably the most common in applications. But a formalization and analysis of the quite long list of these and other models might make a challenging research topic. Apart from those discussed above, a (not exhaustive) list of consistency models contains the following: entry, release, scope, processor, cache, fork, eventual, session, read-your-writes, monotonic-read, monotonic-write. Every one is a sort of „contract” between the memory management (sub)system and the user of DSM, stating that if the contracted rules are observed then the memory will behave in accordance with the rules accepted by the user.</p>
        <p>[La 1979] Lamport L., How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs, IEEE Trans. on Computers, C-28 (1979), pp. 690-691.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>