=Paper= {{Paper |id=Vol-2907/paper12 |storemode=property |title=Distributed Population Protocols: Naturally! |pdfUrl=https://ceur-ws.org/Vol-2907/paper12.pdf |volume=Vol-2907 |authors=David de Frutos Escrig |dblpUrl=https://dblp.org/rec/conf/apn/Frutos-Escrig21 }} ==Distributed Population Protocols: Naturally!== https://ceur-ws.org/Vol-2907/paper12.pdf
    Distributed Population Protocols: Naturally!

                                 David de Frutos Escrig

    Dpto. Sistemas Informáticos y Computación, Facultad de Ciencias Matemáticas
                 Universidad Complutense de Madrid, Madrid, Spain
                                 defrutos@sip.ucm.es




        Abstract. Classical population protocols manage a fixed-size population
        of agents created from the input population: exactly one agent per unit
        of the input. As a consequence, complex protocols that have to perform
        several independent tasks find here a clear bottleneck that drastically
        reduces the parallelism in the execution of those tasks. To solve this
        problem, I propose to work with distributed population protocols, which
        simply generalize the classical ones by generating a (fixed, finite) set of
        agents per unit of the input. A surprising fact is that these protocols
        are not really new if, instead of considering only the classical protocols
        with an input alphabet, we consider the alternative, simpler formalization
        that takes as input mechanism a given subset of the set of states. Distributed
        population protocols are not only interesting because they allow more
        parallel and faster executions, but especially because the distribution of
        both code and data allows much simpler protocols, inspired by the
        distribution of both places and transitions in Petri nets.


1     Introduction

Population protocols were introduced by Angluin et al. [4, 5] as a new proposal
for distributed computation by very limited agents, whose computational power
is based on the interactions between them. These agents are supposed to be
moving in an uncontrolled way in a common arena, so that from time to time they
meet one another. In their meetings they communicate their states to each other,
after which they can change them according to the received information. The initial
formalization is extremely simple, reflecting all the ingredients in the description
above and producing a formalism that is very easy to use. But at the same time, their
creators foresaw that the main idea could be used in more liberal ways, and so
they also introduced a general framework allowing many generalizations. Indeed,
several of these have been considered interesting by several groups of researchers
in recent years, many definitions of new classes of population protocols
have appeared, and their computational power and characteristics have been
studied in detail. A nice, very compact informative survey can be found in [18].

    Copyright © 2021 for this paper by its authors. Use permitted under Creative
    Commons License Attribution 4.0 International (CC BY 4.0).

    Probably, the first (intended) limitation of the original model was the
conservation of the number of agents working in the protocol. This is justified by a
possible hardware implementation of those agents as nanocomputers. Definitely,
this restriction, together with the fixed memory limitation of each agent, is the
main reason for the quite limited computational power of classical population
protocols. Their creators characterized it as that of the semilinear predicates,
or equivalently those definable in Presburger arithmetic [6, 8]. The input component
of the definition establishes a (finite) input alphabet, and the multisets over
it are the concrete inputs. Besides, each unit of this input creates one agent (in
a certain initial state), and thus the number of agents all along the execution of
the protocol coincides with the size of the input.
    Another main characteristic of classical population protocols is that the
protocol itself (its code) is uniform and does not know in advance the set of
agents that is working on it. This code is just a list of transitions in which one
agent can participate when meeting another; the same list for all the agents.
Finally, the agents are assumed to be (and they are indeed, by definition)
distinguishable only by their state, so that there are no unique identifiers for them
from scratch. But fairness of the executions is the crucial assumption needed
to get their computational power. Fairness selects the executions that are
considered to be feasible, and then the protocols can be designed in such a way that
all the fair executions from the same input lead all the agents working on it to
the same output, which is the output computed for that input.
    Another justification of fairness arises when we consider the uniform
probabilistic selection of the pair of agents that communicate at each step of an
execution. It is well known that in bounded non-deterministic models, where
the probabilities of the next step to be executed are fixed, fairness is equivalent
to probability-1 computation, so that we can neglect the unfair executions,
since their full set has probability 0. The introduction of probabilities
also allows the computation of the expected stabilization time of a protocol,
which provides a natural measure of the efficiency of population protocols.
    By ruling out the imposed restrictions, much more powerful classes of
protocols have been introduced. Doty et al. have developed a large collection of
papers where the dynamic creation of agents makes it possible to delay certain
transitions, simply because only a few pairs of agents can participate in them.
Under these assumptions fairness and probability 1 are no longer equivalent,
and this makes the probabilistic model much more powerful, even exceeding the
power of Turing machines (of course, in a non-implementable way) [13]. And by
extending the memory of agents, which is then related to the size of the input,
Alistarh et al. [1] have provided protocols that compute more predicates, studying
a trade-off between the memory used and the time needed for stabilization, similar to
the one existing in classical computation. Finally, by removing the (total)
symmetry, we can get either faster protocols or, again, greater computational power.
Protocols with leaders [7, 15] and agents with unique identifiers [17] work in
those directions. More about recent extensions of the basic model can be found
in [2].

    In this paper I present a new methodology for the design of protocols in the
basic class, which advocates for (even) more distributed protocols. It is obtained
by removing the one-to-one correspondence between the units of the input and
the agents participating in the computation of the protocol. Instead, the input
function introduces at the initial configuration a certain collection of agents
in some preset states. After that, they evolve following the rules of the classical
definition.
    I want to stress the fact that I am not introducing another extension of the
basic model, but just a new methodology to define protocols that remain within
that basic model. What I am claiming is that the use of (mainly only) classical
protocols is constraining the designs of people developing basic protocols, so that
they are forced to use quite complicated techniques to construct their protocols
(see for instance [9]). In particular, I will show that the use of my distributed
population protocols has led me to obtain protocols where reverse transitions,
which reverse the effect of others, are totally removed, or their role is quite
limited. These reverse transitions seemed unavoidable to keep the state space
of the protocols small, but we will see that the distribution of tasks facilitates
alternative (and very natural) solutions that do not need them (or greatly reduce
their use). Reverse transitions always delay the stabilization of the protocols, and in
most cases they do so in an absolutely unacceptable (overexponential)
way. Thus their removal is a clear improvement.
    The rest of the paper is organized as follows. In the next section the basic
notation is presented and the definitions of (classical) population protocols
are recalled, explaining how I ‘derived’ from them the one for distributed protocols.
In Section 3, I give the formal definition of distributed population
protocols, and the protocol P∧ for the computation of φ = φ1 ∧ φ2 is introduced
as a representative example. Next, in Section 4, I present a different application
of distributed protocols, based on the base-2 representation of numbers, instead
of the unary representation usually employed when developing population
protocols. This representation is also used in Section 5, where I present a succinct
protocol for the computation of the threshold predicates that appear in the
‘basis’ generating the Presburger predicates. I conclude by collecting the main goals
reached in the paper and announcing several continuations.
    Some simple proofs have been omitted, and others are only sketched.


2    Preliminaries

Throughout the paper I use the standard mathematical notation. In particular,
Z denotes the set of integers, and N the set of natural numbers (i.e., non-negative
integers). I use a..b with a ≤ b to denote an interval in any of those sets, and
A^B for the set of (total) functions from B to A, so that A^{1..k} are the vectors
with k elements in A. Also, (finite) multisets over a finite set X are the elements
of N^X. The size of α ∈ N^X is defined as |α| = Σ_{x∈X} α(x). When the elements
of X are summable, we define Σ_{q∈α} q = Σ_{x∈X} α(x) × x. We say that x ∈ X
is in a multiset α, and write x ∈ α by abuse of notation, if α(x) > 0. Multisets

will be represented either by extension, in the form {{x1, . . . , xk}}, or in a linear
way as k1 · x1, . . . , kl · xl, where the elements xi are pairwise different and
ki > 0. α + β denotes multiset sum, which means (α + β)(x) = α(x) + β(x), for
x ∈ X. Besides, α ≤ β means α(x) ≤ β(x) for all x ∈ X, and when β ≤ α, α − β
denotes the multiset difference, (α − β)(x) = α(x) − β(x). Finally, for k ∈ N and
α ∈ N^X, we define the product k×α as the multiset given by (k×α)(x) = k α(x),
for each x ∈ X. The empty multiset is denoted by 0.
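These multiset operations are elementary; as an illustration, they can be sketched in Python (helper names are mine; `collections.Counter` plays the role of N^X):

```python
from collections import Counter

def size(a):                # |a| = sum of multiplicities
    return sum(a.values())

def leq(a, b):              # a <= b, pointwise comparison
    return all(b[x] >= k for x, k in a.items())

def diff(a, b):             # a - b, defined only when b <= a
    assert leq(b, a)
    return Counter({x: a[x] - b[x] for x in a if a[x] - b[x] > 0})

def scale(k, a):            # k x a, pointwise product by a scalar
    return Counter({x: k * m for x, m in a.items()})

a = Counter({'x1': 2, 'x2': 1})     # the multiset 2 · x1, 1 · x2
b = Counter({'x1': 1})
assert size(a) == 3
assert leq(b, a) and diff(a, b) == Counter({'x1': 1, 'x2': 1})
assert scale(2, b) == Counter({'x1': 2})
```

`Counter` already provides the multiset sum α + β via `+`, so only the remaining operations are spelled out.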

From classical population protocols to distributed protocols. A population
protocol (as in [5]) is a tuple P = (Q, I, T, O), where I : X → Q, for
some input alphabet X, is the input initialization function, T ⊆ Q² × Q², and
O : Q → Y is the output function, for some output alphabet Y. Transitions will
usually be presented in the form q1, q2 → q1′, q2′. We assume that T is total: for
all (q1, q2) ∈ Q² there exists (q1′, q2′) with q1, q2 → q1′, q2′. Whenever this is not
originally the case, the set of transitions is completed with the necessary identity
transitions q1, q2 → q1, q2, which are said to be silent. In this paper we only
consider protocols that compute a predicate, which corresponds to Y = {0, 1}.
    An alternative, slightly different formalization used in many papers, even by
the original creators (e.g. in [6]), removes the input alphabet X, taking directly
I ⊆ Q. This means that there is a subset of initial states, so that concrete inputs
are just the multisets in N^I. In order to distinguish the two definitions, I will call
pure population protocols those defined using this alternative formalization.
    The semantics of population protocols is the same in both cases. A configuration
is any multiset in N^Q. We say that a transition q1, q2 → q1′, q2′ is enabled
in a configuration C when {{q1, q2}} ≤ C. Then it can be fired, leading to
C′ = C − {{q1, q2}} + {{q1′, q2′}}, and we write C → C′. An execution of P is an
infinite sequence σ = C0 → C1 → C2 · · ·. The output of a non-empty configuration
C is defined by O(C) = b if and only if for all q ∈ C we have O(q) = b.
Then, we say that an execution σ has output O(σ) = b if there exists i ∈ N
such that for all c ∈ N we have O(Ci+c) = b. However, not all executions count;
we only consider fair executions: σ is fair if

∀D ∈ N^Q ( |{i ∈ N : Ci →* D}| = ∞ =⇒ |{j ∈ N : Cj = D}| = ∞ ) .
    Now we say that a classical population protocol P is well-specified and
computes the predicate φP if, on any input population α ∈ N^X, for any fair execution
σ starting at the initial configuration C0 = I(α), we have φP(α) = O(σ). Instead,
a pure population protocol P may compute a predicate on the set of input
configurations N^I: P indeed computes φP if, for any input configuration C0 and
any fair execution σ starting at C0, we have φP(C0) = O(σ).
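To make the semantics concrete, here is a minimal simulation sketch (all names are mine, not from the paper) of a pure protocol with Q = I = {0, 1}, O the identity, and the single non-silent transition 1, 0 → 1, 1, which computes the predicate "at least one agent starts in state 1". Under uniform random scheduling the unfair executions have probability 0, so the simulation stabilizes to the expected consensus:

```python
import random
from collections import Counter

# Pure protocol sketch: Q = I = {0, 1}, O = identity.
# Single non-silent transition: 1, 0 -> 1, 1  (computes "at least one 1").
def step(config):
    """Fire one interaction between two distinct agents chosen at random."""
    agents = [q for q, n in config.items() for _ in range(n)]
    i, j = random.sample(range(len(agents)), 2)
    if agents[i] != agents[j]:          # a 1-agent meets a 0-agent
        config[0] -= 1
        config[1] += 1

def run(config, steps=10_000):
    config = Counter(config)
    for _ in range(steps):
        step(config)
    return config

random.seed(1)
c = run({1: 1, 0: 5})            # one agent in state 1, five in state 0
assert c[1] == 6 and c[0] == 0   # fair executions converge to the 1-consensus
```

The silent transitions are implicit: when the two sampled agents are in the same state, nothing changes.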
    So, the only difference between these two formalisms is the way the input
is presented: ordinary protocols take multisets α ∈ N^X that are converted into
I(α) ∈ N^Q by I(α)(q) = Σ_{I(x)=q} α(x); while in the second case we take directly
any multiset C0 ∈ N^I as input. It is true that, taking X = I ⊆ Q and the
identity function as input function, we can see any pure protocol as a classical
protocol, but this requires input alphabets more elaborate than those
usually considered in classical protocols. However, whenever we are defining
the expressive power of a formalism we cannot assume/impose any ‘natural’

presentation of the input: we cannot state that some states cannot be taken as
initial. Following this idea, when the expressive power of population protocols
was established, their creators considered in [6] pure and not classical ones,
although the ‘emulation’ result above proves that classical protocols have the same
expressive power. Even so, it is true that the use of the external input alphabet
looks very suggestive, since it allows the separation of the description of the
input of the solved problem from the details (the set of states) of the protocol
that will try to solve it. Nevertheless, if we insist on the use of this input alphabet,
it may sometimes turn into a too restrictive constraint, instead of being
a guide for the definition of the desired protocols.
   Next I will try to motivate my distributed protocols by showing that the use of
the input alphabet by itself is not the root of the problem; rather, we need to loosen
the one-to-one relation between the units of the input and the agents of the
protocol if we want to design simple and powerful protocols in an easy way.
    Let us recall how it is proved [5, 12, 9] that the class of predicates definable by
some class of protocols is algebraically closed with respect to Boolean operations.
This is done considering classical, and not pure, protocols. In particular, given a
fixed input alphabet X = {x1, ..., xk} and a pair of protocols P1, P2 computing
φ1(x1, ..., xk) and φ2(x1, ..., xk), a protocol P∧ computing φ1 ∧ φ2 is constructed.
The first problem with this approach is that whenever we talk about a predicate
we typically assume a certain ‘universe’ (the set of variables on which ‘it
depends’) where it is defined. We need to prove that the conjunction of two
(arbitrary) definable predicates is also definable. But then they could ‘depend’
on two different sets of variables ... Yes, there is no (formal) problem: we can always
(although artificially) enlarge the set of variables on which a predicate (formally)
depends, but if we do this in this framework, it will not be for free! The natural
implementation of each predicate will only receive as inputs those corresponding
to the values of the variables on which it depends, but if we now enlarge the input
alphabet there will be input units for all the variables on which either φ1 or φ2 depends.
Then, both protocols will receive the agents generated by all the input units,
even if those corresponding to the variables that are not needed in one of them
should play no role in its computation ... But now they must collaborate
in it, and in particular they must receive the final output when it is computed
... We are obliged to proceed this way by the constraint introduced by the use
of classical protocols. It seems really strange to assume these costs, at least
without any discussion. At least it is easy to incorporate the additional input
agents into the two protocols as dummy agents, which are just there, waiting
for the reception of the computed output value, in order to obtain the required
total stable consensus. Since in none of the papers cited above is there a single
word about this annoying technical problem, I cannot speculate about whether
their authors did not notice it or considered the costs perfectly acceptable. But for
me this was a strong motivation for looking for another, more natural solution
where we need not invest in totally useless agents.
   In particular, if φ1 and φ2 depend on two disjoint sets of variables, e.g. we
have φ1 (x1 , ..., xk ), φ2 (y1 , ..., yl ), then we can avoid the introduction of all the

dummy agents by simply considering P1 working on X and P2 working on Y. In
this way we obtain P∧ based on the aggregation of these two (sub)protocols: we
only need a final phase for the calculation and dissemination of the conjunction
of the values computed by them. Quite natural, quite easy. And definitely
exploiting the naturally distributed character of population protocols: there were
two independent tasks to solve, and we have distributed the needed inputs so as
to perform them in a totally independent way.
    However, this solution seems impossible whenever the two predicates ‘really
depend’ on the same set of variables X. Let us assume that we now have
φ1(x1, ..., xk) and φ2(x1, ..., xk). Even if the states in P1 and P2 remain disjoint,
we cannot define I : X → Q1 ∪ Q2 in a satisfactory way as above, since by
definition we must direct all the agents corresponding to the same variable to
the same state, and this can only belong to either Q1 or Q2. But the problem
disappears if we consider pure population protocols instead of classical ones.
If we need to combine P1 = (Q1, I1, T1, O1) and P2 = (Q2, I2, T2, O2), we can
assume that Q1 and Q2 are disjoint, and then . . . I1 and I2 are too! This means
that we can define P∧ as above, with Q based on the union of the states of
Q1 and Q2, taking I = I1 ∪ I2, and everything works fine! Definitely, in the
pure framework there is no reason for setting aside that simple definition of P∧.
Free of the constraint of the input alphabet, we have seen that pure protocols
are more flexible (although formally they remain equivalent) than classical protocols.
Why should we insist on working ‘mainly’ with the latter?
    We can interpret the solution above in two different ways: the ‘algebraic
interpretation’ renames the variables in φ2, so that the variables in the two
components become disjoint, and then we can proceed as in our introductory
example. The second interpretation recovers the role of the (common) input alphabet
X, which has been hidden by our pure protocols. What we have done can be
understood as a replication of the (natural) input, so that both (sub)protocols
receive a separate copy of it. We have obtained two independent collections
of agents that will compute φ1(x1, ..., xk) and φ2(x1, ..., xk) in a distributed way.
    Let me now reply to a possible criticism at this point: my protocol P∧,
when considered in the pure-protocol framework, is not exactly computing
φ(x1, ..., xk) = φ1(x1, ..., xk) ∧ φ2(x1, ..., xk), but . . . more than this! This is
certainly true, but who cares? Indeed, when taking I = I1 ∪ I2 we are stating
that we can use the states in both subsets as (independent) initial states for
the agents participating in P∧. As a consequence, what we have exactly constructed
is a protocol for the (general) computation of φ(x1, ..., xk, y1, ..., yk) =
φ1(x1, ..., xk) ∧ φ2(y1, ..., yk), since we can inject a different number of agents
at the two states ‘representing’ each variable in X, in the two sets I1 and I2.
But we can perfectly claim that this more general protocol is also an implementation
of our desired predicate φ(x1, . . . , xk) = φ1(x1, . . . , xk) ∧ φ2(x1, ..., xk),
simply by injecting the same number of agents for each xi at the two states
representing that argument in I1 and I2; that’s all.
    Instead, when in [5, 12, 9] the corresponding protocol P∧ is defined, they need
to use Q = Q1 × Q2, so that the pure version of the protocol would also take

I = I1 × I2, since in order to preserve the one-to-one relation between the units
defining the values of the variables in X and the agents of P∧ they need to pair
the ‘subagents’ working in P1 and P2 using the notion of parallel composition.
This produces a less distributed protocol, and besides, the pairing between the
subagents does not reflect any logical connection between the two components,
which could be paired in a totally arbitrary way.
    Let us conclude by observing that the proposal above also exploits the idea
of population splitting, as in [2], although I arrived at it following a different path.
Although in an implicit way, I am using a typing mechanism that separates the
agents working in a protocol into several disjoint classes, in such a way that the
code of the protocol preserves the type of the agents participating in each
transition. In this way we get a safe modular methodology (for free!), in the
sense provided by typed programming languages.


3     Distributed population protocols

Definition 1. A distributed population protocol is a tuple P = (Q, I, T, O),
where I : X → N^Q, for some input alphabet X = {x1, . . . , xm}, is the input
initialization function, T ⊆ Q² × Q², and O : Q → {0, 1} is the output function.

    Transitions are presented as for ordinary protocols. Since we only consider
protocols that (possibly) compute a predicate, we have 0 and 1 as output values.
Distributed population protocols compute as the ordinary ones do, but considering
the executions that start at the initial configurations I(α), where α ∈ N^X is the
input population: I(α) = Σ_{i=1}^{m} α(xi) × I(xi).
    Next, I give the definition of a simple distributed protocol computing the
conjunction of φ1(x1, ..., xk) and φ2(x1, ..., xk), computed by two distributed
population protocols P1 and P2 that work on the same input alphabet X.
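The initialization I(α) = Σ α(xi) × I(xi) can be sketched directly (the input function and state names below are illustrative choices of mine):

```python
from collections import Counter

def initial_configuration(alpha, I):
    """I maps each input symbol to a multiset of states (a Counter);
    the initial configuration is I(alpha) = sum_i alpha(x_i) x I(x_i)."""
    config = Counter()
    for x, n in alpha.items():
        for q, m in I[x].items():
            config[q] += n * m
    return config

# illustrative distributed input function: each unit of x spawns two agents
I = {'x': Counter({'p': 1, 'q': 1}), 'y': Counter({'r': 1})}
alpha = Counter({'x': 3, 'y': 2})
assert initial_configuration(alpha, I) == Counter({'p': 3, 'q': 3, 'r': 2})
```

Note how the classical case is recovered when every I(x) is a singleton multiset.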

Definition 2. Given two distributed population protocols P1 = (Q1, I1, T1, O1)
and P2 = (Q2, I2, T2, O2) that work on the input alphabet X = {x1, . . . , xk} and
compute φ1(x1, ..., xk) and φ2(x1, ..., xk), we define P∧ = (Q, I, T, O), taking
Q = (Q1 ∪ Q2) × {0, 1}, I(x) = I1(x)×{0} + I2(x)×{0}, O((qi, b)) = b, and T
containing the extended transitions

    (qi,1, qi,2 → q′i,1, q′i,2) ∈ Ti =⇒ ((qi,1, b1), (qi,2, b2) → (q′i,1, b1), (q′i,2, b2)) ∈ T

and the communication transitions

    (q1,1, b1), (q2,1, b2) → (q1,1, O1(q1,1) ∧ O2(q2,1)), (q2,1, O1(q1,1) ∧ O2(q2,1)),

where we denote by qi,j the states in Qi.
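Definition 2 is purely mechanical, so its construction can be sketched in Python (protocols encoded as tuples of my own devising; this is an illustration, not the paper's formal object):

```python
from itertools import product

def conjunction(P1, P2):
    """Build P_and following Definition 2. Each Pi = (Qi, Ii, Ti, Oi), with
    Ii mapping a symbol to a dict (multiset) of states, Ti a set of 4-tuples
    (q1, q2, q1', q2'), and Oi mapping a state to 0/1."""
    (Q1, I1, T1, O1), (Q2, I2, T2, O2) = P1, P2
    Q = {(q, b) for q in Q1 | Q2 for b in (0, 1)}
    def I(x):                       # every agent starts with output bit 0
        m = {}
        for q, n in list(I1(x).items()) + list(I2(x).items()):
            m[(q, 0)] = m.get((q, 0), 0) + n
        return m
    T = set()
    for (a, b, a2, b2) in T1 | T2:  # extended transitions: run Pi unchanged
        for c1, c2 in product((0, 1), repeat=2):
            T.add(((a, c1), (b, c2), (a2, c1), (b2, c2)))
    for q1 in Q1:                   # communication transitions: agree on O1 AND O2
        for q2 in Q2:
            v = O1(q1) & O2(q2)
            for c1, c2 in product((0, 1), repeat=2):
                T.add(((q1, c1), (q2, c2), (q1, v), (q2, v)))
    return Q, I, T, (lambda qb: qb[1])

# tiny illustration: one-state protocols with constant outputs 1 and 0
P1 = ({'a'}, lambda x: {'a': 1}, set(), lambda q: 1)
P2 = ({'b'}, lambda x: {'b': 1}, set(), lambda q: 0)
Q, I, T, O = conjunction(P1, P2)
assert (('a', 1), ('b', 1), ('a', 0), ('b', 0)) in T   # consensus on 1 AND 0 = 0
assert I('x') == {('a', 0): 1, ('b', 0): 1}
```

The silent completion of T is left implicit, as in the paper.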

Theorem 1. Given two distributed population protocols P1 = (Q1 , I1 , T1 , O1 )
and P2 = (Q2 , I2 , T2 , O2 ) , that work on the input alphabet X = {x1 , . . . , xk },
and compute the predicates φ1 (x1 , ..., xk ) and φ2 (x1 , ..., xk ), the protocol P∧ =
(Q, I, T, O) defined in Definition 2 computes the predicate φ = φ1 ∧ φ2 .

Proof. We decompose the protocol into two layers, as defined in [12], separating
the extended and the communication transitions. The former work on the first
component of the states in Q exactly as the transitions of P1 or P2, so that
any fair execution of the extended transitions will lead to a configuration C =
C1 + C2, where each Ci corresponds to a stable configuration of Pi, with
O(Ci) = φi(x). Next, the communication transitions will produce the value φ(x)
at all the agents, getting a stable configuration preserving this output. ⊓⊔
    Once we have defined the conjunction and disjunction of two well-specified
distributed protocols, an immediate induction allows us to compute any Boolean
combination of a collection of protocols P1, . . . , Pk. I only consider expressions
e with conjunction and disjunction as operators, since negation can be pushed
to the leaves of the expression by applying De Morgan’s laws, and the negation
of a protocol is obtained simply by interchanging 0 and 1 at its output function.
Note that the list of protocols above corresponds to all the occurrences of
protocols at the k leaves of the expression, even if this may mean repeated
occurrences of some protocols. Therefore, the expression contains exactly k − 1
binary operators. We need to proceed this way since the repeated application of
Definition 2 cannot take any advantage of repeated occurrences of the protocols.
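The preprocessing step of pushing negation to the leaves can be sketched as follows (expression trees in a nested-tuple encoding of my choosing; a negated leaf marks a protocol whose output function must be flipped):

```python
def to_nnf(e, neg=False):
    """e is ('and', l, r), ('or', l, r), ('not', s), or a leaf protocol name.
    Returns an equivalent expression with negation only at the leaves,
    where a negated leaf p is written ('neg', p): flip the O of that protocol."""
    if isinstance(e, str):
        return ('neg', e) if neg else e
    if e[0] == 'not':
        return to_nnf(e[1], not neg)
    op = e[0]
    if neg:                      # De Morgan: swap the operator under negation
        op = 'or' if op == 'and' else 'and'
    return (op, to_nnf(e[1], neg), to_nnf(e[2], neg))

assert to_nnf(('not', ('and', 'P1', ('not', 'P2')))) == ('or', ('neg', 'P1'), 'P2')
```

After this step the expression contains only ∧ and ∨, as assumed in the text.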
    Let us see which are the states of the protocol P^e that computes the value
of e, having P1, . . . , Pk as leaves. As usual, we can label each protocol argument
with the binary path pt_i from the root of the expression to it. We denote by d_i the
length of pt_i. We also denote by ⊙_pt the operator at the internal node reached
by pt. Then the protocol P^e obtained by iterated application of Definition 2 to
the subexpressions of e can be explicitly described as follows:
Definition 3. Given a tree Boolean expression e having the well-specified
distributed protocols P1, . . . , Pk at its k leaves, we define P^e = (Q^e, I^e, T^e, O^e)
taking Q^e = ∪_{i=1}^{k} (Q_i^{pt_i} × {0,1}^{d_i}); I^e(x) = Σ_{i=1}^{k} I_i(x)^0, where the
superscript 0 adds d_i 0’s to obtain states in Q^e; O^e((q_i, s)) = s[1]; and T^e contains
the extended transitions

    (q^i_{j1}, q^i_{j2} → q′^i_{j1}, q′^i_{j2}) ∈ T_i =⇒ ((q^i_{j1}, s1), (q^i_{j2}, s2) → (q′^i_{j1}, s1), (q′^i_{j2}, s2)) ∈ T^e

and the communication transitions (q^i_{j1}, s1), (q^{i′}_{j2}, s2) → (q^i_{j1}, s′1), (q^{i′}_{j2}, s′2), where
i ≠ i′, we denote by q^i_j the states in Q_i, and s′1 and s′2 are obtained from
s1 and s2 by considering the longest common prefix pt_{i,i′} of pt_i and pt_{i′} and
its length lcp_{i,i′}, simply turning the (lcp_{i,i′}+1)-th bit of them into v1 ⊙_{pt_{i,i′}} v2,
where v1 = s1[lcp_{i,i′}+2] when lcp_{i,i′}+1 ≠ d_i, and v1 = O_i(q^i_{j1}), otherwise; and
analogously v2 = s2[lcp_{i,i′}+2] when lcp_{i,i′}+1 ≠ d_{i′}, and v2 = O_{i′}(q^{i′}_{j2}), otherwise.
    A couple of examples may clarify this notation. First, we consider i with
pt_i = (1, 1, 0, 1) and i′ with pt_{i′} = (1, 1, 1, 0, 0), so that pt_{i,i′} = (1, 1) and lcp_{i,i′} = 2.
Then, if s1 = (0, 1, 0, 0), s2 = (1, 0, 1, 1, 0) and ⊙_{pt_{i,i′}} = ∧, we obtain s′1 = (0, 1, 0, 0)
and s′2 = (1, 0, 0, 1, 0), since s1[4] ∧ s2[4] = 0 ∧ 1 = 0. While if pt_{i′} = (1, 1, 1) and
s2 = (1, 0, 1), we need to consider O_{i′}(q^{i′}_{j2}), obtaining s1[4] ∧ O_{i′}(q^{i′}_{j2}) = 0 ∧ O_{i′}(q^{i′}_{j2}) =
0, which produces s′1 = (0, 1, 0, 0) and s′2 = (1, 0, 0).
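The bit update of the communication transitions can be checked mechanically; the following sketch (helper names are mine) reproduces both examples above:

```python
def lcp_len(p, q):
    """Length of the longest common prefix of two root-to-leaf paths."""
    n = 0
    while n < len(p) and n < len(q) and p[n] == q[n]:
        n += 1
    return n

def communicate(pt1, s1, out1, pt2, s2, out2, op):
    """One communication transition of Definition 3 (1-based bits in the text,
    0-based indexing here). outi is O_i(q) of the meeting agent, used when the
    path bottoms out at a leaf."""
    l = lcp_len(pt1, pt2)
    # the (l+2)-th bit in 1-based notation is index l+1; at a leaf use O_i(q)
    v1 = s1[l + 1] if l + 1 != len(pt1) else out1
    v2 = s2[l + 1] if l + 1 != len(pt2) else out2
    v = op(v1, v2)
    s1p, s2p = list(s1), list(s2)
    s1p[l] = v          # turn the (l+1)-th bit of each vector into v1 op v2
    s2p[l] = v
    return tuple(s1p), tuple(s2p)

AND = lambda a, b: a & b
# first example above
assert communicate((1, 1, 0, 1), (0, 1, 0, 0), 0,
                   (1, 1, 1, 0, 0), (1, 0, 1, 1, 0), 0, AND) \
       == ((0, 1, 0, 0), (1, 0, 0, 1, 0))
# second example: pt' = (1, 1, 1), so v2 = O_{i'}(q) (here taken to be 1)
assert communicate((1, 1, 0, 1), (0, 1, 0, 0), 0,
                   (1, 1, 1), (1, 0, 1), 1, AND) \
       == ((0, 1, 0, 0), (1, 0, 0))
```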
    Although an immediate inductive reasoning provides the correctness proof of
this protocol, if we analyze it in a global way we can observe a layered structure,

with as many layers as the height of e. At the first layer the computation of all
the argument protocols is done (possibly decomposed into sublayers, depending
on the characteristics of each of them), and then the following layers contain
the computations of the subexpressions rooted at each internal node, ‘climbing’
the tree up to its root. This observation will play an important role in the definition
of a version of the protocol P^e whose state-space size remains moderate in all
cases. First, I present the general properties of P^e, as it has been defined.

Proposition 1. Given a tree Boolean expression e, the protocol P^e in Definition 3:
a) is correct, computing the value of e applied to the values computed
by the protocols P_j; b) for S = Σ_{j=1}^{k} |Q_j| we have |Q^e| ≤ S 2^h, where h =
height(e); c) for all c > 1, whenever e is c-balanced (which means height(e) ≤
c log k) we have |Q^e| ≤ S k^c; d) in the (very) worst case we have |Q^e| ≤ S 2^k.

Proof. a) It is an immediate consequence of Theorem 1, since P^e is obtained by
iterative application of Definition 2. b) Obvious, since |Q^e| = Σ_{j=1}^{k} |Q_j| × 2^{d_j}.
c) and d) are immediate corollaries of b). ⊓⊔
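The exact count |Q^e| = Σ_j |Q_j| × 2^{d_j} used in b) is easy to check numerically (protocol sizes below are illustrative choices of mine):

```python
def exact_size(Q_sizes, depths):
    """|Q^e| = sum_j |Q_j| * 2**d_j, from the proof of Proposition 1 b)."""
    return sum(q * 2 ** d for q, d in zip(Q_sizes, depths))

# four 3-state protocols at the leaves of a balanced tree (all d_j = 2)
Q_sizes, depths = [3, 3, 3, 3], [2, 2, 2, 2]
S, h = sum(Q_sizes), max(depths)
assert exact_size(Q_sizes, depths) == 48
assert exact_size(Q_sizes, depths) <= S * 2 ** h    # the bound S * 2^h of b)
```

For a perfectly balanced tree the bound of b) is attained with equality, as in this example.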

    In this way, |Q^e| has a moderate size and preserves the succinctness of the
argument protocols in the average case, since it is well known that balanced trees
are highly probable. However, we are interested in a universal succinctness result,
which will be obtained by reducing the number of bits s added in the definition of
the states in Q^e. This reduction will be proved correct by applying the following

Fact 1 Any tree of size S (where size here means its number of leaves) contains
some leaf at depth at most log S. This result is immediate, by bipartition: at each
node, the smaller subtree has at most half of the remaining leaves.
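The bipartition argument can be executed literally: always descend into the subtree with fewer leaves, which at least halves the leaf count per step (trees in a nested-tuple encoding of mine):

```python
import math

def leaves(t):
    """Number of leaves of a nested-tuple binary tree."""
    return 1 if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def shallow_leaf_depth(t):
    """Depth of the leaf reached by always entering the smaller subtree;
    the leaf count at least halves per step, so the depth is <= log2(S)."""
    d = 0
    while isinstance(t, tuple):
        left, right = t
        t = left if leaves(left) <= leaves(right) else right
        d += 1
    return d

t = ((('a', 'b'), ('c', ('d', 'e'))), 'f')   # a tree with S = 6 leaves
assert shallow_leaf_depth(t) <= math.log2(leaves(t))
```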
    Then we define the reduced protocol P^e_red as follows:

Definition 4. Given a tree Boolean expression e, we define the protocol P^e_red =
(Q^e_red, I^e_red, T^e_red, O^e_red) taking Q^e_red = ∪_{i=1}^{k} (Q_i^{pt_i} × {0,1}^{0..bd_i}), where bd_i = min({d_i,
⌈log k⌉}); I^e_red(x) = Σ_{i=1}^{k} I_i(x)^0, where we add bd_i + 1 0’s to obtain states in
Q^e_red; O^e_red((q_i, s)) = s[0]; and T^e_red contains the extended transitions

    (q^i_{j1}, q^i_{j2} → q′^i_{j1}, q′^i_{j2}) ∈ T_i =⇒ ((q^i_{j1}, s1), (q^i_{j2}, s2) → (q′^i_{j1}, s1), (q′^i_{j2}, s2)) ∈ T^e_red,

the communication transitions (q^i_{j1}, s1), (q^{i′}_{j2}, s2) → (q^i_{j1}, s′1), (q^{i′}_{j2}, s′2) when d_i ≤
⌈log k⌉, with s′1[0] = s′1[1], s′2[0] = s′1[1] and s′1[j] = s1[j], s′2[j] = s2[j] for j >
0; and (q^i_{j1}, s1), (q^{i′}_{j2}, s2) → (q^i_{j1}, s′1), (q^{i′}_{j2}, s′2), now only for i ≠ i′ such that
(d_i − lcp_{i,i′} < ⌈log k⌉) ∧ (d_{i′} − lcp_{i,i′} < ⌈log k⌉) ∧ ((d_i − lcp_{i,i′} < ⌈log k⌉ − 1) ∨ (d_{i′} − lcp_{i,i′} <
⌈log k⌉ − 1)), and in such a case s′1 and s′2 are obtained from s1 and s2 by turning
the (lcp_{i,i′} − d_i + ⌈log k⌉ + 1)-th bit of s1 and the (lcp_{i,i′} − d_{i′} + ⌈log k⌉ + 1)-th bit of
s2 into v1 ⊙_{pt_{i,i′}} v2, where v1 = s1[lcp_{i,i′} − d_i + ⌈log k⌉ + 2] when lcp_{i,i′} + 1 ≠ d_i,
and v1 = O_i(q^i_{j1}), otherwise; and analogously v2 = s2[lcp_{i,i′} − d_{i′} + ⌈log k⌉ + 2],
when lcp_{i,i′} + 1 ≠ d_{i′}, and v2 = O_{i′}(q^{i′}_{j2}), otherwise. In order to simplify
the reading I have assumed bd_i < d_i and bd_{i′} < d_{i′}, since in the opposite case the
full sequence of bits added in Definition 3 is preserved, and then we work directly
with the (lcp_{i,i′} + 1)-th bit, as done there.
222     D. de Frutos Escrig

    I hope that the cumbersome notation above does not hide the ‘simple’ pruning mechanism of the added sequences: we preserve at most the last ⌈log k⌉ bits attached to the agents of each protocol. These bits keep the values of the subexpressions that correspond to their closest ancestors. Now each agent will only be involved in those computations, leaving the ones for its older ancestors to the agents of the protocols that are in any of their first ⌈log k⌉ generations of successors. Besides, all the agents (q, s) contain one bit (s[0]) for the value of the full expression e. Their values are obtained by firing the first communication transitions in T^e_red. This is enough, since Fact 1 guarantees that the correct computation of any subexpression of e is still possible. This can be proved by induction, proceeding bottom-up on the tree, because at both sides of each internal node there will be at least one agent (in fact, all those computing any of its closest protocols) that has correctly computed the value of the corresponding descendant. These two agents can communicate, since at least one of them still has an additional bit available to save the computed value, thus accomplishing the correct computation of the value of the corresponding subexpression.
Proposition 2. Given a tree Boolean expression e, the protocol P^e_red defined in Definition 4: a) Is correct, computing the value of e applied to the values computed by the protocols P_j; b) For S = Σ_{j=1}^k |Q_j| we have |Q^e_red| ≤ k·S; c) Whenever the subprotocols P_j are silent, P^e_red is too.
Proof. a) (sketch) We can prove, by induction on the height of each subexpression e_s of e, that all the agents in the set P_s of arguments of e_s will reach a (partial) consensus that includes the value of each of them at the supplied input, and those of each of the subexpressions e_{s'} of e_s, whose values will remain stable at any of the bits added to keep those values. Moreover, for k_s = |P_s|, by applying Fact 1 some of the protocols P_{i_s} ∈ P_s is at most ⌈log k_s⌉ levels below the root of e_s, and since k_s < k all the agents working on that protocol contain a bit for the computation of the value of e_s. Then they will meet some of the agents including a bit for the computation of the value of the other child of the root of e_s, and by applying the communication transitions the correct value of e_s will be transferred to the bit added for its computation at the agents in Q_{i_s}. In particular, the agents with d_i ≤ ⌈log k⌉ will compute the value of e, and they communicate it to all the agents working on P^e_red.
    b) Obvious, since we added at most ⌈log k⌉ bits to each state of the protocols P_i. c) We can extend the inductive proof above to include the silence requirement. Once the agents working at each protocol argument stabilize, the communication transitions will compute the final values of all the added bits, so that no later transition will change them anymore. ⊓⊔
    One of the facts proved in [9] to obtain their main theorem, stating that it is possible to obtain succinct protocols for the computation of Presburger predicates (this means that they have a polynomial number of states in the size of the predicate, when it is presented by means of a Boolean combination e of threshold and modulo predicates), is that the class of predicates that are definable by succinct protocols is closed under Boolean operations. Their quite involved proof needs to combine a great number of complex techniques, and those needed to prove the latter result are particularly involved and introduce reverse transitions that make the obtained protocol not silent. Instead, using distributed protocols I have proved Proposition 2 above, which in particular will give us succinct protocols for all the Presburger predicates as soon as we have succinct protocols to compute threshold and modulo predicates.
    In the following sections we will see that the distributed approach also provides simple ways to reduce the number of states of the protocol that implements a certain predicate. These reductions are obtained by taking advantage of the binary representation of natural numbers, as has been done in [10, 9].


4    Counting with small agents

In [10] a succinct protocol for counting populations, based on the use of states accumulating 2^k units, is presented. It transfers to the framework of population protocols the binary representation of integer numbers, although only internally to the protocol, while the input is still presented in unary form. To be precise, the considered problem was to decide if the size of the input, which means the number of agents in it, is greater than or equal to a certain natural number n.
    Their protocol has Q = {2^k | k ∈ N, k ≤ log n} ∪ {0, n} and uses the fact that 2^k + 2^k = 2^{k+1} to promote the agents to higher powers. In order to detect the bound value n they introduce a multiway transition that converts the collection of agents representing the bits of n into one agent n (e.g. for n = 45, it turns 32+8+4+1 into 45+0+0+0). State n is the only one with output 1, and once it is reached it turns all the other agents into n by means of communication transitions. Besides, in order to avoid that along an execution the value n is exceeded (e.g. turning 32+32 into 64+0) without reaching it exactly, they also need reverse transitions that undo the promotion to higher powers. And in order to get a classical protocol, they have to encode the multiway transition generating the state n into a collection of 2-way transitions, something that again requires the introduction of reverse transitions.
    Next, I present a pure protocol for the counting task, which later I will present as a distributed one, and which will be easily modified to solve other more general counting problems. I will show that the introduction of a few more states (at most doubling their number) makes possible a very efficient protocol that does not include any reverse transition.

Definition 5. For each bound n > 0 we define the pure protocol P^count_n = (Q_n, I_n, T_n, O_n) with Q_n = Q^pow_n ∪ {0} ∪ Q^apr_n, where

    Q^pow_n = { 2^k | k ∈ 0..(l+1) }     Q^apr_n = { q^k_n | 2 ≤ k ≤ p }  with  q^k_n = 2^l + Σ_{j=2}^k 2^{i_j}

where l = ⌊log n⌋, the 2-base digits of n are d_l . . . d_0 (n = Σ_j d_j 2^j) and 1_n = {i | 0 ≤ i ≤ l ∧ d_i = 1}, taking 1_n = {i_1, . . . , i_p} with i_1 = l and i_j > i_{j+1} for j ∈ 1..(p−1), the decreasing enumeration of the bits of n.
I_n = {2^k | k ≤ l}. T_n = T^up_n ∪ T^apx_n ∪ T^goal_n ∪ T^comm_n, where (taking q^1_n = 2^l, so that q^p_n = n):
    T^up_n   : 2^k, 2^k → 2^{k+1}, 0             k ∈ 0..l
    T^apx_n  : q^k_n, 2^{i_{k+1}} → q^{k+1}_n, 0      k ∈ 1..(p−1)
    T^goal_n : q^k_n, 2^j → n, n                 k ∈ 1..(p−1), j > i_{k+1}
    T^comm_n : f, q → f, f                       f ∈ {2^{l+1}, n}, q ∉ {2^{l+1}, n}

    The example n = 45 illustrates this definition. We have Q_45 = {1, 2, 4, 8, 16, 32, 64, 0, 40, 44, 45} and some of the transitions in T_45, one for each of its subsets, are: 16, 16 → 32, 0 ; 40, 4 → 44, 0 ; 40, 8 → 45, 45 ; 64, 1 → 64, 64.
    Note that, even if we are including in Q_n the states in Q^apr_n, we are maintaining its logarithmic size, since l ∈ O(log n), so that between 2^l and n we introduce at most l interpolations (in the example, only two: 40 and 44), instead of all the values in the interval (33..45 in the example).
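To make Definition 5 concrete, here is a small exhaustive-search sketch (my own, not from the paper) that encodes agents as plain integers, implements the four transition families, and checks the two possible outcomes for n = 5 by exploring all reachable configurations. When several rules could apply to a pair, the sketch returns only the first one, which is enough for these checks:

```python
from itertools import combinations

def make_step(n):
    """One-step relation of the pure counting protocol P_n^count (Definition 5).
    Agents are integers: powers of two, approximations q^k, 0, and n itself."""
    l = n.bit_length() - 1                    # l = floor(log2 n)
    ones = [i for i in range(l, -1, -1) if (n >> i) & 1]
    p = len(ones)
    q = [1 << l]                              # q^1 = 2^l, ..., q^p = n
    for i in ones[1:]:
        q.append(q[-1] + (1 << i))
    powers = {1 << k for k in range(l + 2)}
    final = {1 << (l + 1), n}

    def step(a, b):
        if a in final and b not in final:
            return (a, a)                     # T_comm: spread the 1-output
        if a == b and a in powers and a <= 1 << l:
            return (a << 1, 0)                # T_up: 2^k, 2^k -> 2^{k+1}, 0
        for k in range(1, p):                 # a = q^k with k in 1..p-1
            if a == q[k - 1]:
                if b == 1 << ones[k]:
                    return (q[k], 0)          # T_apx: exact refinement
                if b in powers and b > 1 << ones[k]:
                    return (n, n)             # T_goal: the bound n is exceeded
        return None
    return step, final

def reachable(step, init):
    """All configurations (as sorted tuples) reachable from init."""
    seen, todo = {tuple(sorted(init))}, [tuple(sorted(init))]
    while todo:
        c = todo.pop()
        for i, j in combinations(range(len(c)), 2):
            for a, b in ((c[i], c[j]), (c[j], c[i])):
                r = step(a, b)
                if r is not None:
                    nxt = list(c)
                    nxt[i], nxt[j] = r
                    nxt = tuple(sorted(nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
    return seen

step, final = make_step(5)
# 6 unary units >= 5: the all-final 1-consensus is reachable...
assert any(all(a in final for a in c) for c in reachable(step, [1] * 6))
# ...while 4 units < 5 never produce a final state at all.
assert all(a not in final for c in reachable(step, [1] * 4) for a in c)
```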

Proposition 3. For each initial configuration C_0 ∈ N^{I_n}, C_0 = k_0 · 2^0 + · · · + k_l · 2^l, P^count_n decides whether Σ_{j=0}^l k_j 2^j ≥ n. Besides, |Q_n| ≤ 2l ≤ 2 log n.

Proof. We decompose the set of transitions into T^small_n = T^up_n ∪ T^apx_n, and T^large_n = T^goal_n ∪ T^comm_n. T^small_n contains the transitions that preserve the sum of the values of the evolving agents, while T^large_n includes those ‘adding’ two agents whose sum is greater than or equal to n, and producing two such values. When Σ_{j=0}^l k_j 2^j < n, it is obvious that we can never apply a transition in T^large_n, so that for any reachable configuration C we have O(C) = 0, since 2^{l+1}, n ∉ C. Instead, when Σ_{j=0}^l k_j 2^j ≥ n, the application of each transition in T_n will produce two agents with a total value greater than or equal to the sum of the two evolving ones, until we arrive at a stable configuration C. Whenever it contains any final state (2^{l+1} or n), by applying the communication transitions we reach a consensus configuration C', with O(C') = 1. An inductive argument proves that eventually we reach such a configuration, based on the fact that 2^g > Σ_{k=0}^{g−1} 2^k, which combined with Σ_{j=0}^l k_j 2^j ≥ n implies that either {{2^k, 2^k}} ≤ C, for some k ∈ 0..l, or {{q^k_n, 2^j}} ≤ C, with j ∈ 0..l, so that C would not be stable. ⊓⊔

Discussion 1 As explained in the introduction of this section, the presented protocol is an adaptation of the succinct protocol for counting populations in [10]. A fast comparison could conclude that its (correct) behaviour is based on the same ‘essential’ principle: simply the binary representation of integer numbers. However, looking into the details, several important differences arise, all of them (but a single one) representing important advantages of my solution here. I will start with the only (minor) disadvantage: here I have |Q_n| ≤ 2 log n states, while the protocol in [10] had at most log n + 2. Anyway, I maintain |Q_n| ∈ O(log n).
    Next comes the main advantage, from which several interesting properties emanate. P^count_n satisfies total termination: indeed, revising the correctness proof above it is easy to quantify the duration of its two phases, seeing that there are at most n transitions in T^small_n, and at most n in T^large_n. After executing them we reach a silent configuration. So we have an extremely fast silent protocol that besides always terminates (once silent transitions are not considered).
    This is a consequence of the fact that I have ‘totally’ avoided both multiway transitions and also reverse transitions themselves, such as 32, 0 → 16, 16. Obviously, any introduced reverse transition rules out total termination, and if not ‘explicitly’ somehow avoided, also silence once a consensus is reached. This usually causes a very poor efficiency, as is the case in [10]. For instance, in order to reach any (pre)successful configuration (those including one n agent) they need to reach just before the configuration 2 · 2^{l−1} + (2^l − 2) · 0. Then the probability to execute the transition 2^{l−1}, 2^{l−1} → 2^l, 0 is 2/(n(n−1)).
    But these are far from being the worst news. The real problem is that, as a consequence, it is very probable that after executing the following two transitions the reached configuration will be 4 · 2^{l−2} + (2^l − 4) · 0, and then to recover our initial transition above we need two hit events whose probability is fairly smaller (for the same reason!) than 2/((n/2)(n/2 − 1)). In fact, together with another author, the authors of [10] have presented in [11] a very nice executable paper where the reader can find a painful simulation of the protocol (confirming my predictions above: one must be very, very lucky to see its termination, even for very small values of n).
    In fact, these ‘nested’ reverse mechanisms are used by the developers of sequential protocols when they want to ensure that the second phase of a protocol can only start after obtaining (with a certain ‘high’ probability) the result of the first phase (see e.g. [3]). For that they need to measure the passage of an (expected) exponential period of time, and this can be done by checking the termination of a counting protocol somehow ‘similar’ (but simpler) to the one in [10].
    In Section 2 we have seen how we should introduce the agents corresponding to the input variables that do not appear in a certain predicate when we want to put together two protocols, for instance for the computation of the conjunction φ(x_1, ..., x_k, y_1, ..., y_k) = φ_1(x_1, ..., x_k) ∧ φ_2(y_1, ..., y_k). A quite natural way of introducing the required dummy agents in a protocol that is making some arithmetic calculation (in the end every protocol is doing that, though at the definition of the protocol this can be ‘hidden’), as is the case of the succinct counting protocols, is to introduce them as 0 agents. Obviously, these protocols will (qualitatively) remain correct after such an addition. However, any superfluous 0 agent will dramatically delay (even more!) the stabilization of the protocol, since they trigger the execution of the reverse transitions undoing the work of the transitions that produce the progress toward the bound n. Therefore, we should avoid this undesired effect by introducing the dummy agents as totally ‘external’ to the working of the protocol, only participating in the dissemination of its output.
    Finally, protocols including any reverse transition cannot be WS³ protocols, which are the protocols that the authors of [10], together with a fourth author, have shown to be efficiently verifiable in [12]. Instead, with a simple modification in order to satisfy the second condition in the definition of WS³ protocols, my protocol P^count_n becomes WS³, and thus efficiently verifiable.

    From the pure protocols P^count_n we can derive a distributed protocol for the addition of (l + 1)-bit numbers in the set B_l = 0..(2^{l+1} − 1). For each v ∈ B_l we define the set 1_v as 1_n in Definition 5, and we take D(v) = {{ 2^k | k ∈ 1_v }}.
Definition 6. For each bound n ∈ N the distributed protocol P^bad_n for the addition of (l + 1)-bit numbers is defined as P^bad_n = (Q_n, I^bad, T_n, O_n), with Q_n, T_n and O_n as in Definition 5, X = B_l, and I^bad : B_l → N^{Q_n}, with I^bad(b) = D(b).

Corollary 1. For each bound n ∈ N the distributed protocol P^bad_n decides if the given initial population α ∈ N^{B_l} satisfies Σ_{b∈α} b ≥ n.
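The input mapping D above is just the binary decomposition of v. A tiny sketch (my own helper, mirroring the definition of 1_v) makes this explicit:

```python
def D(v):
    """{{ 2^k | k in 1_v }}: the powers of two whose bit is set in v."""
    return [1 << k for k in range(v.bit_length()) if (v >> k) & 1]

assert D(45) == [1, 4, 8, 32]              # 45 = 101101 in binary
assert all(sum(D(v)) == v for v in range(1, 1024))
```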

    Finally, I present a distributed protocol for the computation of linear sums
a1 x1 + · · · + am xm , with ai , xi ∈ N , for unary representation of the input x . In
a similar way we can obtain the distributed protocol for binary inputs.
Definition 7. For each vector a ∈ N^m and any bound n ∈ N, the distributed protocol P^a_n for the computation of linear sums Σ_{i=1}^m a_i x_i is defined as P^a_n = (Q_n, I^a_n, T_n, O_n), with Q_n, T_n and O_n as in Definition 5, and I^a_n : X → N^{Q_n}, with X = {x_1, . . . , x_m}, and I^a_n(x_i) = D'(a_i), where D'(a) is defined as D(a) for a ∈ B_l, taking D'(a) = {{2^{l+1}}} for a ≥ 2^{l+1}.
                                     m
Corollary 2. For each vector a ∈ N^m, and any bound n ∈ N, the distributed protocol P^a_n decides whether Σ_{i=1}^m a_i x_i ≥ n.
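The clipped decomposition D' of Definition 7 can be sketched in the same style as D; coefficients that do not fit in B_l collapse to the single saturating agent 2^{l+1} (again my own illustration, with names chosen here):

```python
def D(v):
    """{{ 2^k | k in 1_v }}: the binary decomposition of v."""
    return [1 << k for k in range(v.bit_length()) if (v >> k) & 1]

def D_prime(a, l):
    """D'(a): D(a) when a fits in B_l = 0..2^{l+1}-1, else the agent 2^{l+1}."""
    return D(a) if a < 1 << (l + 1) else [1 << (l + 1)]

# with l = 5 (so B_5 = 0..63), 45 decomposes exactly, while 100 saturates to 64
assert D_prime(45, 5) == [1, 4, 8, 32]
assert D_prime(100, 5) == [64]
```

Saturation is sound here because any coefficient of at least 2^{l+1} already exceeds every bound n expressible with l = ⌊log n⌋.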



5    Distributed protocols for Presburger predicates

It is well known that classical and pure population protocols compute exactly the family of Presburger predicates. Here we recall its modular characterization as the Boolean closure of the threshold and remainder predicates. Both are defined based on linear expressions Σ_{i=1}^s a_i x_i − Σ_{j=1}^g b_j y_j, with a_i, b_j > 0. In the first case we check if the sum is greater than or equal to some n ∈ Z, while in the second we check if it is equal to some value n ∈ 0..(m−1), modulo some given m > 0.
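Stated directly, the two families of base predicates are simple arithmetic tests; this sketch (mine, for illustration only) evaluates them on explicit inputs:

```python
def linear(a, b, x, y):
    """The linear expression sum_i a_i x_i - sum_j b_j y_j."""
    return sum(ai * xi for ai, xi in zip(a, x)) - sum(bj * yj for bj, yj in zip(b, y))

def threshold(a, b, n, x, y):
    """Threshold predicate: is the linear expression >= n (n may be negative)?"""
    return linear(a, b, x, y) >= n

def remainder(a, b, m, n, x, y):
    """Remainder predicate: is the linear expression congruent to n modulo m?"""
    return linear(a, b, x, y) % m == n

assert threshold([3, 1], [2], 4, x=[2, 1], y=[1]) is True       # 3*2 + 1 - 2 = 5 >= 4
assert remainder([3, 1], [2], m=3, n=2, x=[2, 1], y=[1]) is True  # 5 mod 3 == 2
```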
    It is not difficult to see that instead of the full family of threshold and remainder predicates, we can take as basis for the generation of Presburger predicates the family of positive thresholds, which correspond to positive bounds n > 0, and the bounded positive remainder predicates, which only include positive coefficients a_i, and check if the considered sum is greater than or equal to n, instead of equality.
    Next, I present a succinct distributed protocol for the computation of positive threshold predicates, which reduces the number of needed states, as in Section 4. Bounded positive remainder predicates can be implemented following the same ideas, although in that case the role of the transitions 2^k − 2^k = 0, simplifying the configurations, must be played by the transitions km + x ≡_m x.

 Succinct distributed protocols for positive threshold predicates
    I tried to develop these protocols following the same ideas as for P^a_n in Definition 7, but at first this seemed not possible. Why? Because the (simpler) predicate computed there was true-monotonic: whenever an input population produces an affirmative output, any bigger population also leads to 1. This is exploited in the design and in the correctness proof of that protocol. Instead, the negative part of the linear expression now is clearly ‘anti-monotonic’.
We need the balancing transitions that use 2^k + (−2^k) = 0 to take into account the contribution of negative agents. If we try to manage approximation agents as in Definition 7, sometimes we will need to turn back these approximations in order to recover the required ‘small’ agents that will make possible some balancing transition. But in order to undo these transitions we need 0-agent partners. It could be the case that we have run out of them, and then we would be stuck.
    But going back to a protocol that only uses numerical agents with values 2^k, as developed in [9], it is possible to circumvent these difficulties. However, now to check our goal we will need to detect the simultaneous presence of the agents in D(n). This is done in [9] by means of a multiway transition. Instead, we find here another application of our distributed protocols, by choosing an input function that generates a sufficient, but not too large, quantity of agents to play all the roles that are needed to get the desired protocol.

Definition 8. For each pair of vectors a ∈ N^s, b ∈ N^g, and bound n > 0, the distributed protocol P^thr_{a,b,n} for deciding the positive threshold predicate φ^thr_{a,b,n} ≡ (Σ_{i=1}^s a_i x_i − Σ_{j=1}^g b_j y_j ≥ n) is defined as P^thr_{a,b,n} = (Q^thr, I^thr, T^thr, O^thr), where taking z = max({a_i | i ∈ 1..s} ∪ {b_j | j ∈ 1..g} ∪ {n}) and l = ⌊log z⌋ + 1, and being d_h . . . d_0 the 2-base digits of n, and 1_n as in Definition 5, we have:

    Q^thr = {(2^k, b) | k ≤ h, b ∈ {0,1}} ∪ {(2^k, 1) | h < k ≤ l} ∪ {−2^k | k ≤ l} ∪ {(0, 0), (0, 1)} ∪ Q^thr_{lead,n}, where Q^thr_{lead,n} = Qb_n ∪ Qe_n,
with Qb_n = {(i, ch, o) | i ∈ 1_n, ch, o ∈ {0, 1}} and Qe_n = {el_i | i ∈ 1_n}.
    In this case, the set X ∪ Y will be our input alphabet, with X = {x_1, . . . , x_s} and Y = {y_1, . . . , y_g}, and we take

    I^thr(x_i) = {{(2^k, 0) | k ∈ 1_{a_i}}} + (⌈log a_i⌉ − |1_{a_i}|) · (0, 0) + Qe_n
    I^thr(y_j) = {{−2^k | k ∈ 1_{b_j}}} + (⌈log b_j⌉ − |1_{b_j}|) · (0, 0)

where we define the sets 1_{a_i} and 1_{b_j} in an analogous way as we defined 1_n.
    O((q, b)) = b, for q ≥ 0 ; O(q) = 0, for q < 0 ; O((i, ch, o)) = o ; O(el_i) = 0.
    T^thr = T^thr_up ∪ T^thr_down ∪ T^thr_cancel ∪ T^thr_move ∪ T^thr_lead ∪ T^thr_comm ∪ T^thr_test.

    T^thr_up     : (2^k, b_1), (2^k, b_2) → (2^{k+1}, b_1), (0, b_2)          k ∈ 0..(l−1)
                 : (−2^k, b_1), (−2^k, b_2) → (−2^{k+1}, b_1), (0, b_2)       k ∈ 0..(l−1)
    T^thr_down   : reverse(T^thr_up)
    T^thr_cancel : (2^k, b), −2^k → (0, b), (0, b)                            k ∈ 0..l
                 : (k, ch, b), −2^k → (0, b), (0, b)                          k ∈ 0..l
    T^thr_move   : (2^k, b), el_k → (k, 0, b), (0, b)                         k ∈ 1_n
    T^thr_lead   : l_1, (k, ch, b) → l_1, (2^k, b)    l_1 ∈ {(k, ch, b), el_k | ch, b ∈ {0, 1}}   k ∈ 0..l
                 : l_1, el_k → l_1, (0, 0)            l_1 ∈ {(k, ch, b), el_k | ch, b ∈ {0, 1}}   k ∈ 0..l

T^thr_comm propagates any output, either positive or negative, wherever it is possible.
    Finally, to check that all the leaders are busy (detecting a set of positive agents whose sum is n), we have the transitions in T^thr_test. We consider the enumeration in decreasing order of 1_n = {i_1, . . . , i_p} (e.g. 1_45 = {5, 3, 2, 0}), and we have:
    (i_1, v, b_1), (i_2, 0, b_2) → (i_1, v, b_1), (i_2, 1, b_2)
    (i_k, 1, b_1), (i_{k+1}, 0, b_2) → (i_k, 0, b_1), (i_{k+1}, 1, b_2)     1 < k < (p−1)
    (i_{p−1}, 1, b_1), (i_p, 0, b_2) → (i_{p−1}, 0, b_1), (i_p, 1, 1)
This corresponds to the general case of the definition: p > 2. When p = 1, which corresponds to n = 2^h, we can simply put the value 2^h in the set of states with fixed output 1, and then we can avoid the leader mechanism and the test transitions. While when p = 2, we simply take (i_1, v, b_1), (i_2, 0, b_2) → (i_1, v, b_1), (i_2, 1, 1).
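The test chain can be simulated at the rule level. The sketch below follows my reading of the T^thr_test rules above (the names and the random scheduler are mine): the ‘ch’ token enters at the successor of i_1 and travels down the chain of busy leaders, only the last rule may raise the 1-output, and an empty leader at any position blocks the token:

```python
import random

def run_test(busy, steps=2000, seed=0):
    """busy[k] tells whether the leader for i_{k+1} currently holds its units.
    Returns True iff the final rule (i_{p-1},1),(i_p,0) -> ..., (i_p,1,1) fires."""
    p = len(busy)
    ch = [False] * p              # the 'checked' bits of the leader agents
    out = False
    rng = random.Random(seed)
    for _ in range(steps):
        k = rng.randrange(p - 1)            # schedule the leader pair (k, k+1)
        if not (busy[k] and busy[k + 1]):
            continue                        # test rules only involve busy leaders
        if k == 0:
            ch[1] = True                    # (i1,v), (i2,0) -> (i1,v), (i2,1)
        elif ch[k] and not ch[k + 1]:
            ch[k], ch[k + 1] = False, True  # pass the token one leader down
            if k == p - 2:
                out = True                  # last rule: mark the 1-output at i_p
    return out

assert run_test([True, True, True, True]) is True      # all leaders busy: success
assert run_test([True, True, False, True]) is False    # empty leader blocks the chain
```

This mirrors the argument in the text: a full chain of busy leaders is both necessary and (given enough interactions) sufficient for the 1-output to be produced.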

    Let me start by clarifying that even if I talk about leaders, they are not leaders in the usual sense: they are ordinary agents created by each unit of some of the elements in the input alphabet. Next we will see that the protocol includes a mechanism for the selection of unique leaders among the initial population of leader agents, as in [15, 3]. This is why I keep this name in the definition.
    The first component of the protocol is the balancing procedure that will compensate equal agents of different sign, by means of the transitions in T^thr_cancel. In order to get the matching pairs we have the transitions in T^thr_down, which could require the presence of 0-agents. This is why we introduce those agents in the definition of I^thr. They will be (more than) enough to do their task.
    Fairness will guarantee that eventually we will have either no negative or no positive agents. In the latter case, a consensus on the 0 output will be reached, by applying the transitions in T^thr_comm. In the former, the protocol must check whether the remaining positive units are n or more. The idea (already used in [9]) is that once only non-negative agents remain, by means of the transitions in T^thr_down and T^thr_up we can distribute the remaining units in such a way that we can reach a configuration C', including a set of agents corresponding to the elements of 1_n, if and only if φ^thr_{a,b,n}(x, y) = 1.
    We need a procedure to detect this situation. We do it by means of the transitions in T^thr_test, assuming that previously the transitions in T^thr_lead have selected a family of unique leaders, one for each value in 1_n. It will be enough to check that all these leaders are busy to know that the current configuration contains positive agents whose total value is greater than or equal to n. This works thanks to the transitions in T^thr_move, which, after collecting 2^k positive units of the configuration in one 2^k-agent, transfer them to the corresponding leader agent when it is empty, turning it into busy. Any successful test will ‘mark’ the output field of the i_p-leader agent, turning it into (i_p, 1, 1). Next, this 1-output will be broadcast using the transitions in T^thr_comm.
    Certainly, when we check the leaders ‘too early’ we could get a hurried positive answer, which however will not cause any problem, since once the balancing procedure discussed above terminates, either we have no positive agent, and then we reach a stable 0 consensus by executing the transitions in T^thr_comm; or no negative agent remains in the configuration. When this is the case we can (and we will) fill all the leader agents at the same time if and only if the value of the configuration is greater than or equal to n. In the positive case, any test after all the leader agents are busy will succeed, and then the produced 1-output will reach everywhere and the obtained consensus will remain stable. While in the negative case, no test can succeed, since the busy leader agents ‘crossed’ by the test will remain busy forever, because no negative agent remains to empty them. As a consequence, any successful test would imply that all the leader agents are busy at the same time, and therefore that the value of the configuration is greater than or equal to n, against our current hypothesis.
    Note that in the last reasoning above it is crucial that there is no negative agent around, since as long as they are present they could ‘inadvertently’ cause a test to succeed, even when the total value of the positive agents in the configuration is less than n. This malfunction could occur because some positive units already checked by a test could be ‘liberated’ by a negative agent that expects to remove them by executing a later cancellation transition; instead, those liberated positive agents could fill any of the leader agents still to be checked, thus cheating the testing procedure, since the same units would be used twice. But fortunately, this causes no problem in our protocol, as explained above.
    If we compare the solution with that in [9], we can see that I do not need any lock for checking that n positive units have been collected. In [9] success is claimed by the firing of the multiway transition, again and again, but this was only possible because they use a special kind of (standard!) leaders, which they call voters. Once more, my proposal here only uses regular agents, thanks to the flexibility of distributed protocols, also avoiding the centralization imposed by those voters.
    Perhaps you could think that using multiway transitions (which later could be turned into regular ones) I could avoid the introduction of my leader agents. This is not possible: if we looked for the agents corresponding to 1_n anywhere in the configuration, we could succeed in several directions. But it would be impossible to avoid that an agent that has been affirmatively tested, but then is immediately converted into one (0, 0) agent by a cancellation transition, could next be turned into (0, 1) by a communication with some 1-output agent. If this happens just when the last negative agent has been cancelled, turning the value of the configuration below n, the broadcast 1 could not be corrected anymore, thus producing a wrong output. This cannot happen when the removed positive agent is a busy leader, since empty leaders always claim 0 as output.
    Certainly, as brilliantly discussed in [18], leaders are always required to generate any ‘asymmetric’ behaviour of the agents in a protocol, and they always introduce a certain coordination, which requires some centralization; but that implied by the leaders in this protocol seems small, since they are only involved in a small part of the transitions of the protocol.
    Let me now present the theorem that states the correctness of P^thr_{a,b,n}. Its formal proof would just present in a more formal way the informal reasoning above. However, I will include below some technical results that are needed in that formal proof.
Theorem 2. For each pair of vectors a ∈ N^s, b ∈ N^g, and any bound n > 0, the distributed protocol P^thr_{a,b,n} decides the positive threshold predicate φ^thr_{a,b,n} ≡ (Σ_{i=1}^s a_i x_i − Σ_{j=1}^g b_j y_j ≥ n).
    First, we define the value of a configuration C of P^thr_{a,b,n} as the sum of all its numerical agents, adding 2^i for each busy leader (i, v, b) ∈ C. All the transitions of the protocol preserve the value of the configuration, so that for any reachable configuration C from any initial configuration I(x, y), we have value(C) = Σ_{i=1}^s a_i x_i − Σ_{j=1}^g b_j y_j.
                                                thr
Definition 9. We say that a configuration C of Pa,b,n is simple positive (resp.
negative), corresponding to 0 ≤ v < 2l+1 (resp. −(2l+1 ) < v < 0) when for each
k ∈ 1v (resp. 1−v ) we have a single agent (2k , b) ∈ C and C((0, t)) + C((0, f )) =
|{k : k < l ∧ k 6∈ 1v }| (resp. ∈ 1−v ) and C(q) = 0, for any other q ∈ Qthr .
    This definition captures the fact that along the computations of P^thr_{a,b,n} we
have enough 0-agents around, since the initial configurations can be decomposed
into a disjoint union of simple configurations, by applying the definition of I^thr.
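As an illustration of how a value decomposes into a simple configuration, the following Python sketch builds the agents (2^k, b) for the 1-bits of v, padded with 0-agents for the unset positions below l. The function names and the padding convention are my reading of Definition 9, not the paper's literal definition.

```python
def one_bits(v):
    """1_v: the set of bit positions k with bit k of v set (v >= 0)."""
    return {k for k in range(v.bit_length()) if (v >> k) & 1}

def simple_positive(v, l, out=True):
    """Agents of a simple positive configuration with 0 <= v < 2**(l+1):
    one agent (2**k, out) per k in 1_v, plus one 0-agent per unset
    position below l (hypothetical reading of Definition 9)."""
    assert 0 <= v < 2 ** (l + 1)
    agents = [(2 ** k, out) for k in sorted(one_bits(v))]
    zeros = [k for k in range(l) if k not in one_bits(v)]
    return agents + [(0, out)] * len(zeros)
```

For example, v = 5 = 2^0 + 2^2 with l = 3 yields the agents (1, b), (4, b) and a single 0-agent for the unset position 1.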
                                                                  thr
Lemma 1. Any simple positive (resp. negative) configuration C of Pa,b,n can be
transformed, by applying transitions in Trad thr
                                                 , into another configuration C 0 =
C + {{(1, b)}}, (resp. (−1, b) ) where C either only contains agents (0, b0 ) , or
  00                                      00

is also a simple positive (resp. negative) configuration C of the protocol.
Corollary 3. From any initial configuration C_0 of P^thr_{a,b,n} we can reach a
configuration C′ that contains no (q, b) with q < 0, or no (q, b) with q > 0,
and such that value(C′) = value(C_0).
                                                    thr
Proposition 4. For any initial configuration C0 of Pa,b,n with value(C0 ) ≥ n ,
                ∗                          ∗
if we have C0 → C 0 , we also have C 0 → C 00 , where C 00 contains a full set of
busy leaders, i.e. a busy leader agent for each k with dk = 1 .
Proof. (Of Theorem 2) Any fair execution will reach a configuration which con-
tains one single k-leader, for each k ∈ 1_n. Then, fairness also guarantees, as
stated by Corollary 3, that we will reach a configuration which contains either
no negative or no positive agent. Next, Proposition 4 guarantees that we will
reach a configuration with single leaders, all of them busy, if and only if
Σ_{i=1}^{s} a_i x_i − Σ_{j=1}^{g} b_j y_j ≥ n. These situations will be stable. In particular, in
the affirmative case the execution of the transitions in T^thr_{test} will fix 1 as the output
of the leaders, and then the transitions in T^thr_{comm} will communicate this value to
all the other agents. In the opposite case the set of busy leaders will also stabilize,
and some leader agent will remain empty forever, so that from then on any
test will fail when reaching it. Then, the empty leaders will communicate their
0-output using the transitions in T^thr_{comm}, and the consensus on the output 0 will
stabilize.                                                                          ⊓⊔
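The stabilization pattern used in this proof, where first a unique (set of) leader(s) emerges, then the leaders fix their output, and finally that output is disseminated, can be watched in a toy simulation. The sketch below is not the distributed protocol P^thr_{a,b,n}: it simulates the classical remainder-mod-2 protocol under a uniformly random (hence almost-surely fair) scheduler, and all names in it are mine.

```python
import random

# Each agent is ('L', v): a leader holding a partial sum v (mod 2),
# or ('F', out): a follower that just copies some leader's output.
def step(a, b):
    if a[0] == 'L' and b[0] == 'L':      # merge leaders, adding values mod 2
        s = (a[1] + b[1]) % 2
        return ('L', s), ('F', s)
    if a[0] == 'L':                      # a leader corrects a follower
        return a, ('F', a[1])
    if b[0] == 'L':
        return ('F', b[1]), b
    return a, b                          # two followers: no-op

def run(inputs, steps=20000, seed=7):
    """Simulate under a uniformly random (almost-surely fair) scheduler."""
    random.seed(seed)
    pop = [('L', v % 2) for v in inputs]
    for _ in range(steps):
        i, j = random.sample(range(len(pop)), 2)
        pop[i], pop[j] = step(pop[i], pop[j])
    return {out for _, out in pop}       # set of opinions: singleton = consensus
```

Leaders merge until a single one holds the parity of the input, and that surviving leader then corrects every stale follower, so the returned set of opinions stabilizes to a singleton.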


6     Concluding Remarks and Some Future Work
Only after studying in detail most of the great papers in the References below did
I get the inspiration to discover the narrow bottleneck that the one-to-one relationship
between the units of the input and those of the input configuration imposes
on classical population protocols. My distributed protocols, which in fact are only a
particular case of pure protocols, are in my opinion a simple and elegant proposal
that makes the development of simple and more efficient protocols much easier.
                               Distributed Population Protocols: Naturally!    231

    So, the introduction of my distributed protocols does not really mean a gen-
eralization of the definition of population protocol, but just the confirmation of
the fact that, using this suggestive class of pure protocols, we can directly man-
age specialized agents without (formally) contradicting the uniformity of the
(global) code that governs the behaviour of all the agents of the protocol. The
agents can now be designed in such a way that, depending on its initial state, each
one can only reach a limited set of states (its type!). Then we can (in practice)
specialize the code for each type, simply including in it the transitions that have
a member of that type as first agent, although of course the agent it meets could
be of any type.
    Since I did not want to generalize the notion of population protocol, I have
maintained the uniform participation of all the agents in the dissemination of
the global (final) output. However, you can find in Definitions 3 and 4 two clear
examples of structured protocols, based on a family of subprotocols (those cor-
responding to all the internal nodes of the expression e) ‘locally’ conceived as
ordinary protocols, but whose (local) output must be internalized. This is done
in such a way that only the agents participating in each subprotocol compute the
corresponding local output, which is transmitted to other agents only as far as
they need it to accomplish their tasks. This idea could be transferred to
the global output, so that only the ‘principal’ agents would compute and transmit
the final output, while the ‘auxiliary’ agents would only help by doing their part, and
could even disappear after accomplishing it, following the old notion of coroutine
from parallel programming. I plan to continue the exploration of these structuring
ideas, looking in particular for a higher-level programming language for popula-
tion protocols. In this case [16, 2] are interesting starting points.
    As a continuation of this work, based on [12], I have already shown that dis-
tributed protocols are also easy to prove correct, and once more, even easier
than classical protocols, especially when the corresponding proofs are done
by hand, but probably also when they are done mechanically, if the tools can
be adapted to take advantage of the modularity of the protocols. Besides, I have
just submitted [14], where I complete the development of succinct distributed
protocols for Presburger predicates, as in [9], providing an alternative succinct
threshold protocol, since I finally found the way to obtain it following the ideas
in Definition 5, thus producing a silent protocol totally free of reverse transi-
tions. A more involved application of these ideas also produced the required
succinct remainder protocol.



Acknowledgement. I sincerely thank Javier Esparza, who introduced me to Pop-
ulation Protocols during my stay at TU Munich in April 2017. I also appreciate
the indications of the referees of a previous version of this paper; following them,
the presentation has been improved, making the paper more readable. This re-
search was partially supported by project PID2019-108528RB-C22 and by Co-
munidad de Madrid program S2018/TCS-4339 (BLOQUES-CM), co-funded by
EIE Funds of the European Union.

References
 1. Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, and Ronald L.
    Rivest. Time-space trade-offs in population protocols. In 28th ACM-SIAM SODA,
    pages 2560–2579. SIAM, 2017.
 2. Dan Alistarh and Rati Gelashvili. Recent algorithmic advances in population
    protocols. SIGACT News, 49(3):63–73, 2018.
 3. Talley Amir, James Aspnes, David Doty, Mahsa Eftekhari, and Eric E. Severson.
    Message complexity of population protocols. In 34th DISC, volume 179 of LIPIcs,
    pages 6:1–6:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
 4. Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta.
    Computation in networks of passively mobile finite-state sensors. In 23rd ACM
    PODC, pages 290–299. ACM, 2004.
 5. Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Per-
    alta. Computation in networks of passively mobile finite-state sensors. Distributed
    Comput., 18(4):235–253, 2006.
 6. Dana Angluin, James Aspnes, and David Eisenstat. Stably computable predicates
    are semilinear. In 25th ACM PODC, pages 292–299. ACM, 2006.
 7. Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by popula-
    tion protocols with a leader. Distributed Comput., 21(3):183–199, 2008.
 8. Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computa-
    tional power of population protocols. Distributed Comput., 20(4):279–304, 2007.
 9. Michael Blondin, Javier Esparza, Blaise Genest, Martin Helfrich, and Stefan Jaax.
    Succinct population protocols for Presburger arithmetic. In 37th STACS, volume
    154 of LIPIcs, pages 40:1–40:15. Schloss Dagstuhl - Leibniz-Zentrum für Infor-
    matik, 2020.
10. Michael Blondin, Javier Esparza, and Stefan Jaax. Large flocks of small birds: on
    the minimal size of population protocols. In 35th STACS, volume 96 of LIPIcs,
    pages 16:1–16:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.
11. Michael Blondin, Javier Esparza, Stefan Jaax, and Antonín Kučera. Black ninjas
    in the dark: Formal analysis of population protocols. In 33rd ACM/IEEE LICS,
    pages 1–10. ACM, 2018.
12. Michael Blondin, Javier Esparza, Stefan Jaax, and Philipp J. Meyer. Towards
    efficient verification of population protocols. In 36th ACM PODC, pages 423–430.
    ACM, 2017.
13. Rachel Cummings, David Doty, and David Soloveichik. Probability 1 computation
    with chemical reaction networks. Nat. Comput., 15(2):245–261, 2016.
14. David de Frutos Escrig. Succinct and fast distributed population protocols. Sub-
    mitted, 2021.
15. David Doty and David Soloveichik. Stable leader election in population protocols
    requires linear time. In 29th DISC, volume 9363 of LNCS, pages 602–616. Springer,
    2015.
16. Bartłomiej Dudek and Adrian Kosowski. Universal protocols for information dis-
    semination using emergent signals. In 50th ACM SIGACT STOC, pages 87–99.
    ACM, 2018.
17. Rachid Guerraoui and Eric Ruppert. Names trump malice: Tiny mobile agents can
    tolerate byzantine failures. In 36th ICALP Proc. Part II, volume 5556 of LNCS,
    pages 484–495. Springer, 2009.
18. Othon Michail and Paul G. Spirakis. Elements of the theory of dynamic networks.
    Commun. ACM, 61(2):72, 2018.