<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Autocorrelation Function Characterization of Continuous Time Markov Chains</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>G. Rama Murthy</string-name>
          <email>rama.murthy@mechyd.ac.in</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D.G. Down</string-name>
          <email>downd@mcmaster.ca</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>A. Rumyantsev</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Engineering, Ecole Centrale School of Engineering, Mahindra University</institution>
          ,
          <addr-line>Bahadurpally, Hyderabad</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Department of Computing and Software, McMaster University</institution>
          ,
          <addr-line>Hamilton</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Institute of Applied Mathematical Research, Karelian Research Centre of the Russian Academy of Sciences</institution>
          ,
          <addr-line>Petrozavodsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Petrozavodsk State University</institution>
          ,
          <addr-line>Petrozavodsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We study certain properties of the function space of autocorrelation functions of unit, as well as finite state space, Continuous Time Markov Chains (CTMCs). It is shown that under particular conditions, the Lp norm of the autocorrelation function of an arbitrary finite state space CTMC is infinite. Several interesting inferences are made for point processes associated with CTMCs.</p>
      </abstract>
      <kwd-group>
        <kwd>Unit Continuous Time Markov Chains</kwd>
        <kwd>Autocorrelation Function</kwd>
        <kwd>Integrability Conditions</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Many natural and artificial phenomena are endowed with non-deterministic dynamic behavior, and stochastic processes are utilized to model such dynamic phenomena. There are a number of applications, e.g. in physics [1,2], where the stochastic model can be in only one of two states, modeled by the so-called dichotomous stochastic process (e.g., dichotomous Markov noise). In particular, the notion of a unit random process {X(t), t &gt; 0}, i.e., a random process whose state space consists of the two values {−1, 1}, arises naturally in many applications, e.g. bit transmission in communication systems, or detection theory. In the latter case, some random process {Y(t), t &gt; 0} is provided as input to a threshold detector, whose output X(t) is the sign of Y(t), i.e. X(t) = sign(Y(t)). Thus {X(t), t &gt; 0} is a unit process and can be shown to be Markovian under some conditions on Y(t). More generally, quantization of a general random process leads to a finite-state random process, which is often Markovian, so the study of more general finite-state processes is also of interest.</p>
      <p>The first author is supported by internal funding for research and development from Mahindra Ecole Centrale School of Engineering, Mahindra University, Hyderabad (for the year 2018-19). The second author is supported by the Natural Sciences and Engineering Research Council of Canada under the Discovery Grant program. The third author is partially supported by RFBR, projects 19-57-45022, 19-07-00303.</p>
      <p>The characterization of the autocorrelation function,</p>
      <p>R(t, t + τ) = E[X(t)X(t + τ)], τ, t &gt; 0,</p>
      <p>of a unit process {X(t), t &gt; 0} is considered an important problem [3]. Several interesting properties of such autocorrelation functions are studied in [4,5].</p>
      <p>Wide sense stationary (or even strictly stationary) random processes naturally arise as stochastic models in a variety of applications. They also arise in time series models (AR, ARMA processes) of natural and artificial phenomena. The autocorrelation function of a wide sense stationary process does not depend on t, that is,</p>
      <p>R(τ) := R(t, t + τ) = E[X(0)X(τ)].</p>
      <p>In many interesting models the autocorrelation function R(τ) is integrable, and hence the power spectral density (the Fourier transform of R(τ)) exists.</p>
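      <p>This Fourier-transform relationship can be checked numerically in the integrable case. The following is a minimal sketch (not from the paper): it assumes an exponential autocorrelation R(τ) = e^{−λ|τ|}, whose power spectral density is known to be the Lorentzian S(ω) = 2λ/(λ² + ω²), and compares a numerical Fourier integral against that closed form.</p>

```python
import numpy as np

# Exponential autocorrelation R(tau) = exp(-lam*|tau|); its power spectral
# density (Fourier transform) is the Lorentzian S(omega) = 2*lam/(lam^2 + omega^2).
lam = 1.5
tau = np.linspace(-60.0, 60.0, 400001)   # wide, fine grid; tails are ~ e^{-90}
R = np.exp(-lam * np.abs(tau))

omega = 0.7
integrand = R * np.cos(omega * tau)      # imaginary part vanishes by symmetry
# Trapezoidal rule for the Fourier integral at frequency omega.
S_num = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(tau)) / 2.0)
S_exact = 2.0 * lam / (lam**2 + omega**2)
err = abs(S_num - S_exact)
```

The rates λ and ω here are arbitrary illustrative values; any λ &gt; 0 gives the same agreement.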
      <p>With this in mind, we are motivated to study the function space of finite state Markov processes. Masry [5] has studied the functional space properties of stationary unit random processes. However, the study of the integrability of R(τ) was not undertaken. To the best of our knowledge, the Lp-norm of R(τ) for finite state Continuous Time Markov Chains (CTMCs) has not been investigated. In this paper, we determine conditions under which the autocorrelation function is not integrable, and by extension conditions under which the Lp-norm of R(τ) approaches zero as p → ∞.</p>
      <p>To put our work into context, there is related work in three directions: the characterization of autocorrelation functions of random processes, the characterization of point processes, and the use of autocorrelation properties in the analysis of stochastic models, in particular the analysis of queues. For the characterization of autocorrelation functions, we point the reader to work in time series analysis [6,7,8] and in telecommunications [9]. Point process characterization has been studied in [10,11,12]. Properties of autocorrelation functions have been employed to determine appropriate simulation strategies for queues [13] and are a feature of modelling arrival traffic to queues using Markovian Arrival Processes, see [14], for example.</p>
      <p>This paper is organized as follows. In Section 2, the autocorrelation function of a unit CTMC is computed and the structure of the function space is studied. In Section 3, the autocorrelation function of a finite state space CTMC is computed and the finiteness of its Lp norm is discussed. It is shown that under some conditions, the autocorrelation function is not integrable. In Section 4, various interesting inferences are made for point processes. Finally, the paper concludes in Section 5.</p>
      <p>Auto-Correlation Function of Homogeneous Unit CTMC: Integrability</p>
      <p>In this section we consider a homogeneous Continuous Time Markov Chain (CTMC) {X(t), t &gt; 0} with the state space J = {−1, 1} and generator matrix</p>
      <p>Q = [ −α   α
            β  −β ],   α, β &gt; 0.</p>
      <p>We assume that the resulting stochastic process is wide sense stationary (and not necessarily strictly stationary). Although results on R(τ) can be found in the literature [1], it is instructive to show the evaluation of R(τ) with the help of spectral representation.</p>
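      <p>As a quick numerical sanity check of this setup (a sketch with illustrative rates α = 2, β = 3, which are our assumption and not values from the paper), one can verify that Q is a valid generator and has eigenvalues 0 and −(α + β):</p>

```python
import numpy as np

alpha, beta = 2.0, 3.0
Q = np.array([[-alpha, alpha],
              [beta, -beta]])          # generator of the unit CTMC

# The spectrum of this rank-one generator is {-(alpha+beta), 0},
# and every row of a generator matrix sums to zero.
eigvals = np.sort(np.linalg.eigvals(Q).real)
gap_ok = np.allclose(eigvals, [-(alpha + beta), 0.0])
rows_ok = np.allclose(Q.sum(axis=1), 0.0)
```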
      <p>Since {X(t), t &gt; 0} is a unit random process,</p>
      <p>R(τ) = P{X(0) = X(τ)} − P{X(0) ≠ X(τ)} = 2P{X(0) = X(τ)} − 1. (1)</p>
      <p>It remains to compute P{X(0) = X(τ)}. Note that</p>
      <p>P{X(τ) = X(0)} = Σ_{j ∈ J} P(X(τ) = j | X(0) = j) P(X(0) = j). (2)</p>
      <p>The conditional probabilities in (2) are computed using the transient probability distribution π(τ) of X(τ) at time τ &gt; 0, whereas the values P(X(0) = j), j ∈ J, are the components of the initial probability distribution π(0). For a homogeneous CTMC with finite state space,</p>
      <p>π(τ) = π(0) e^{Qτ}, (3)</p>
      <p>where e^{Qτ} = Σ_{i=0}^{∞} Q^i τ^i / i! is the matrix exponential (see e.g. [15]). Using the Jordan canonical form [16], e^{Qτ} is computed below.</p>
      <p>Since Q is a rank one matrix, the eigenvalues are λ₁ = −(α + β), λ₂ = 0. Denote the corresponding right eigenvectors by {g₁, g₂} (column vectors), the solutions of</p>
      <p>Q gᵢ = λᵢ gᵢ, i = 1, 2. (4)</p>
      <p>Let the left eigenvectors (row vectors) {f₁, f₂} be the solutions of</p>
      <p>fᵢ Q = λᵢ fᵢ, i = 1, 2. (5)</p>
      <p>Compose columnwise the matrix G of right eigenvectors, and let F contain the left eigenvectors as rows. Then, since Q is diagonalizable, Q = G diag(−(α + β), 0) F, where also we have GF = I. Hence it follows that</p>
      <p>e^{Qτ} = e^{−(α+β)τ} g₁f₁ + g₂f₂. (6)</p>
      <p>Since g₁ is unique up to a multiplicative constant, it follows from (4) that g₁ = [α  −β]ᵀ/(α + β). At the same time, since λ₂ = 0, (4) is the condition for Q to be a generator matrix, that is, Q1 = 0 and hence g₂ = 1, where 1 is the (column) vector of ones. The left eigenvectors are obtained after some algebra from FG = I as follows: f₁ = [1  −1]. It is interesting to note that since λ₂ = 0, it follows from (5) that the second left eigenvector, f₂, is indeed the steady-state probability vector π = (π₋₁, π₁) of the process {X(t), t &gt; 0}, that is, the stochastic vector solving πQ = 0:</p>
      <p>π = f₂ = [β/(α + β)   α/(α + β)]. (7)</p>
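      <p>The spectral representation can be checked numerically. The following sketch (illustrative rates α = 2, β = 3 assumed; the matrix exponential is evaluated by a truncated power series, which is adequate for small Qτ) verifies (6) in the equivalent form e^{Qτ} = Π + e^{−(α+β)τ}(I − Π), where Π = 1π and I − Π = g₁f₁:</p>

```python
import numpy as np

alpha, beta = 2.0, 3.0
Q = np.array([[-alpha, alpha], [beta, -beta]])
tau = 0.8

def expm_series(A, n=60):
    # Truncated power series for the matrix exponential e^A.
    out, term = np.eye(2), np.eye(2)
    for i in range(1, n):
        term = term @ A / i
        out = out + term
    return out

pi = np.array([beta, alpha]) / (alpha + beta)   # stationary vector f2
Pi = np.outer(np.ones(2), pi)                   # g2 f2 = 1 * pi
E1 = np.eye(2) - Pi                             # g1 f1 = I - Pi
lhs = expm_series(Q * tau)
rhs = np.exp(-(alpha + beta) * tau) * E1 + Pi
match = np.allclose(lhs, rhs)
```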
    </sec>
    <sec id="sec-2">
      <title>Explicit Expression for the Autocorrelation Function</title>
      <p>Thus, it follows from (6) that</p>
      <p>e^{Qτ} = Π − (e^{−(α+β)τ}/(α + β)) Q, (8)</p>
      <p>where Π = g₂f₂ = 1π is the matrix that contains the steady-state vector π in its rows. Interestingly, ergodicity is observed from (8) in the limit,</p>
      <p>Π = lim_{τ→∞} e^{Qτ}. (9)</p>
      <p>Noting from (9) that π(0)Π = π, from (3) and (8) we have</p>
      <p>π(τ) = π − π(0)Q e^{−(α+β)τ}/(α + β). (10)</p>
      <p>Equation (10) demonstrates the exponential speed of convergence of π(τ) to the equilibrium π, given in (7), as τ → ∞. Finally, using (10) in (2), we obtain</p>
      <p>P{X(τ) = X(0)} = π(0)πᵀ + (e^{−(α+β)τ}/(α + β)) π(0)q,</p>
      <p>where q is the negated (column) vector of diagonal elements of Q. Recalling (1), and denoting c = 2π(0)πᵀ − 1 and d = 2π(0)q/(α + β), an explicit expression for R(τ) is in turn</p>
      <p>R(τ) = c + d e^{−(α+β)τ}. (11)</p>
      <p>Note that the constant part can be rewritten as</p>
      <p>c = 2π(0)πᵀ − 1 = EX(0)·EX, (12)</p>
      <p>where EX(0) = π(0)[−1  1]ᵀ and EX = π[−1  1]ᵀ.</p>
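      <p>The closed form (11) can be confirmed numerically against a direct evaluation of 2P{X(0) = X(τ)} − 1. A sketch with illustrative rates and an arbitrary initial distribution (all values assumed):</p>

```python
import numpy as np

alpha, beta = 1.0, 2.5
Q = np.array([[-alpha, alpha], [beta, -beta]])
pi0 = np.array([0.3, 0.7])                      # initial distribution pi(0)
pi = np.array([beta, alpha]) / (alpha + beta)   # equilibrium vector

def expm_series(A, n=80):
    # Truncated power series for the matrix exponential e^A.
    out, term = np.eye(2), np.eye(2)
    for i in range(1, n):
        term = term @ A / i
        out = out + term
    return out

tau = 0.6
P = expm_series(Q * tau)
# Direct evaluation via (1)-(2): R(tau) = 2 * sum_j pi0_j P_jj(tau) - 1.
R_direct = 2.0 * sum(pi0[j] * P[j, j] for j in range(2)) - 1.0

# Closed form (11): R(tau) = c + d * exp(-(alpha+beta)*tau).
q = np.array([alpha, beta])                     # negated diagonal of Q
c = 2.0 * pi0 @ pi - 1.0
d = 2.0 * pi0 @ q / (alpha + beta)
R_closed = c + d * np.exp(-(alpha + beta) * tau)
match = bool(np.isclose(R_direct, R_closed))
```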
    </sec>
    <sec id="sec-3">
      <title>Special Cases</title>
      <p>We now turn to some interesting special cases.</p>
      <p>Equilibrium case: assume π(0) = π. In this case π(τ) = π for all τ, and the coefficients in (11) are c = 2ππᵀ − 1 = (EX)² and d = 1 − c, where EX = π[−1  1]ᵀ is the steady-state mean value of the process. Thus, the ergodicity result follows from (11):</p>
      <p>lim_{τ→∞} R(τ) = EX(0)·EX. (13)</p>
      <p>In the equilibrium case, (11) becomes</p>
      <p>R(τ) = (EX)² + (1 − (EX)²) e^{−(α+β)τ}. (14)</p>
      <p>It can be seen from (14) that R(0) = 1 and then the autocorrelation is monotonically non-increasing until finally R(∞) = c. As expected from (12), in this case c = (EX(0))².</p>
      <p>Further, in the symmetric case α = β, from (14) we have</p>
      <p>R(τ) = e^{−2ατ}. (15)</p>
      <p>Uniform initial probability: let now π(0) = [1/2  1/2]. In such a case, in (11), the constant c = 0 and d = 1, thus</p>
      <p>R(τ) = e^{−(α+β)τ},
which again gives (15) if α = β.</p>
      <p>We conclude the section with a lemma that presents one possible characterization of the function space of autocorrelation functions of a unit CTMC.</p>
      <p>Lemma 1. Consider a unit CTMC {X(t), t &gt; 0} with generator matrix Q and initial probability vector π(0) ≠ [1/2  1/2]. If α ≠ β, then the autocorrelation function R(τ) is not in Lp(R) for any p ≥ 1 (the Lp norm of the autocorrelation function is infinite). However, as p tends to ∞, the Lp-norm of the autocorrelation function R(τ) approaches a finite constant. Further, the L∞-norm is equal to one.</p>
      <p>Proof. If α ≠ β and π(0) ≠ [1/2  1/2], then it follows from (12) that |c| ∈ (0, 1). Since R(τ) &gt; c and R(τ) → c as τ → ∞, the integral ∫₀^∞ |R(τ)|^p dτ is infinite, that is, R(τ) is not in Lp(R) for any p ≥ 1. However, since |c| &lt; 1, it follows that |c|^p → 0 as p → ∞. Hence the Lp-norm of the autocorrelation function R(τ) approaches a finite constant. □</p>
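      <p>The two halves of the proof can be illustrated numerically. A sketch with assumed coefficients c, d and decay rate (not from the paper): the truncated Lp mass of R(τ) = c + d e^{−rate·τ} grows without bound in the integration horizon T, while the constant part |c|^p vanishes as p grows.</p>

```python
import numpy as np

c, d, rate = 0.4, 0.6, 3.5    # illustrative coefficients with |c| in (0, 1)
p = 2.0

def lp_mass(T, n=200000):
    # Trapezoidal approximation of the integral of |R(tau)|^p over [0, T]
    # for R(tau) = c + d*exp(-rate*tau).
    tau = np.linspace(0.0, T, n)
    f = np.abs(c + d * np.exp(-rate * tau)) ** p
    return float(np.sum((f[1:] + f[:-1]) * np.diff(tau)) / 2.0)

# The integral keeps growing roughly like |c|^p * T (no finite Lp norm)...
growing = lp_mass(200.0) > lp_mass(100.0) + 0.9 * 100.0 * abs(c) ** p
# ...while the constant part |c|^p vanishes as p grows.
vanishing = abs(c) ** 100 < 1e-6
```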
      <p>In the following discussion, we generalize the above results to CTMCs with an arbitrary finite state space. It is shown that the existence of an equilibrium probability distribution ensures that the expression for the autocorrelation function has a constant part that, under suitable conditions, is not zero.</p>
      <p>Auto-Correlation Function of Homogeneous Finite State Space CTMC</p>
      <p>We now prove that for any finite state space CTMC, the autocorrelation function is not integrable and in fact the Lp-norm of the autocorrelation function, R(τ), is infinite for any p ≥ 1. Let the state space of the CTMC {X(t), t &gt; 0} be J = {C₁, …, C_N}. Keeping the notation from Section 2, denote by C the diagonal matrix with the vector [C₁, …, C_N] as the main diagonal. Then, similarly to (2),</p>
      <p>R(τ) = Σ_{i=1}^{N} Σ_{j=1}^{N} Cᵢ Cⱼ P{X(0) = i, X(τ) = j} = π(0) C e^{Qτ} C 1. (16)</p>
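      <p>The matrix form of (16) can be checked against the double sum over states. A sketch with a hypothetical 3-state CTMC (generator, state values and initial distribution are all assumptions chosen for illustration):</p>

```python
import numpy as np

# Illustrative 3-state CTMC with state values C = (-1, 0, 2).
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -2.0, 1.0],
              [2.0, 2.0, -4.0]])
vals = np.array([-1.0, 0.0, 2.0])
C = np.diag(vals)
pi0 = np.array([0.2, 0.5, 0.3])
one = np.ones(3)

def expm_series(A, n=80):
    # Truncated power series for the matrix exponential e^A.
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for i in range(1, n):
        term = term @ A / i
        out = out + term
    return out

tau = 0.5
P = expm_series(Q * tau)
# Matrix form of (16): R(tau) = pi(0) C e^{Q tau} C 1.
R_tau = float(pi0 @ C @ P @ C @ one)
# Cross-check: the explicit double sum over states.
R_sum = float(sum(vals[i] * vals[j] * pi0[i] * P[i, j]
                  for i in range(3) for j in range(3)))
match = bool(np.isclose(R_tau, R_sum))
```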
    </sec>
    <sec id="sec-4">
      <title>Spectral Decomposition and Non-Integrability</title>
      <p>But we have that</p>
      <p>e^{Qτ} = Σ_{k=1}^{N} e^{λ_k τ} E_k,</p>
      <p>where E_k = f_k g_k is the residue matrix, with f_k being the right eigenvector of Q and g_k being the left eigenvector of Q corresponding to the eigenvalue λ_k, defined similarly to (4) and (5), respectively. Let |λ₁| ≥ |λ₂| ≥ … ≥ |λ_N| = 0 (the latter equality holds since Q is a generator matrix). Then, by definition of the eigenvectors, it follows that g_N = π while f_N = 1, where π is the steady-state probability vector corresponding to Q. Thus, it follows from (16) that</p>
      <p>R(τ) = π(0)C [Σ_{k=1}^{N−1} e^{λ_k τ} E_k] C1 + π(0)C1 · πC1, (17)</p>
      <p>where, recall, E_N = f_N g_N = 1π. Finally, noting that π(0)C1 = EX(0) and πC1 = EX, where X is the steady-state random variable distributed as π, we rewrite (17) as</p>
      <p>R(τ) = f(τ) + EX(0)·EX, (18)</p>
      <p>which is consistent with (12). Note that c = EX(0)·EX is zero if and only if either EX(0) or EX is zero (or both). Note also that if π(0) = π, then c = (EX)². Finally, we see from (18) that</p>
      <p>lim_{τ→∞} R(τ) = EX(0)·EX,</p>
      <p>which corresponds to (13). This result agrees with the fact that asymptotically the initial random variable X(0) and the equilibrium random variable X(∞) are independent. In fact, one may note that f(τ) is indeed the autocovariance function of {X(t), t &gt; 0}.</p>
      <p>Now, f(τ) is a sum of decaying exponentials. It can be easily verified that f(τ) is integrable. More generally, f(τ) corresponds to a function which is in Lp(R) for p ≥ 1. But when c is non-zero, R(τ) is not in Lp(R) for any p ≥ 1: if c ≠ 0, then ∫ |R(τ)|^p dτ is infinite for every p ≥ 1. Further, if |c| &lt; 1, then ∫ (R(τ))^p dτ approaches zero as p → ∞.</p>
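      <p>The limit in (18) can be illustrated numerically. A sketch with a hypothetical 3-state generator (all values assumed): the matrix exponential is computed by eigendecomposition, which remains stable for large τ, and R(τ) at a large time is compared to EX(0)·EX.</p>

```python
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -2.0, 1.0],
              [2.0, 2.0, -4.0]])
vals = np.array([-1.0, 0.0, 2.0])   # state values C_i
pi0 = np.array([0.2, 0.5, 0.3])

def expm_eig(A, t):
    # Matrix exponential e^{A t} via eigendecomposition (A assumed diagonalizable).
    w, V = np.linalg.eig(A)
    return np.real(V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V))

# Stationary vector: left null vector of Q, normalized to a distribution.
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

tau_big = 5.0
P = expm_eig(Q, tau_big)
R_big = float(sum(vals[i] * vals[j] * pi0[i] * P[i, j]
                  for i in range(3) for j in range(3)))
limit = float(pi0 @ vals) * float(pi @ vals)   # EX(0)*EX, cf. (18)
match = bool(np.isclose(R_big, limit, atol=1e-6))
```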
      <sec id="sec-4-1">
        <title>Discussion</title>
        <p>From (15), we see that R(τ) can be normalized to correspond with a Laplace density. In turn, the fact that this is a Laplace density has the interpretation that it corresponds to the density of the difference between two independent random variables with identical exponential distributions. Thus, from the point of view of the autocorrelation function, a unit CTMC corresponds to a Laplace density.</p>
        <p>Now we discuss the speed of convergence in (18). It follows from (18) that the speed is governed by the second smallest eigenvalue (in modulus), λ_{N−1}, which is consistent with more general results on Markov chains, e.g. [17,18].</p>
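        <p>For instance, the decay rate |λ_{N−1}| can be read off directly from the spectrum of the generator. A sketch with an illustrative 3-state generator (our assumption, not from the paper):</p>

```python
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -2.0, 1.0],
              [2.0, 2.0, -4.0]])   # illustrative generator

# Sort eigenvalue moduli: the smallest is 0 (stationarity); the next
# one, |lambda_{N-1}|, governs the exponential decay of the autocovariance.
w = np.sort(np.abs(np.linalg.eigvals(Q).real))
decay_rate = float(w[1])
```

For this generator the spectrum is {0, −4, −5}, so the autocovariance decays like e^{−4τ}.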
        <p>It is well known that the interarrival times of a Poisson process are exponentially distributed random variables. Also, the sojourn times in every state of a finite state CTMC are exponentially distributed random variables. This observation has been explored in [19], for example, to establish that when successive visits to a state of a CTMC are stitched together, a Poisson process naturally results. Hence, an arbitrary finite state CTMC can be viewed as a superposition of point processes. From a practical viewpoint, the superposition of point processes naturally arises in applications, such as packet streams in packet multiplexers. Such packet streams have been modelled in [20], for example. Several versatile point processes have also been studied in [21,22], amongst others. Such Markovian point processes are actively utilized in queueing theoretic applications.</p>
        <p>One potential application of these results is characterizing how phase transitions at high levels of a Quasi-Birth-Death (QBD) process are correlated, in particular how the autocorrelation of these phase transitions decays. Consider a QBD process {[Z(t), X(t)], t &gt; 0}, which is a two-dimensional Markov process, skip-free (ladder type) in the first component, governed by a block-tridiagonal generator matrix with repeating blocks A(0) (level up), A(1) (within level) and A(2) (level down) along the diagonal.</p>
        <p>Let the state space of the second component X(t) be the finite set J. Then, at the high levels, the (projected) transitions of the component X(t) are governed by the matrix A = A(0) + A(1) + A(2), which is itself a generator matrix. Hence, considering a Markov process {X(t), t &gt; 0} governed by the matrix A, we may observe the exponential speed of decay of the autocorrelation R(τ), which is defined by the second smallest eigenvalue of A.</p>
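        <p>To make this concrete, the following sketch assembles A = A(0) + A(1) + A(2) from hypothetical two-phase blocks (the rates are our assumptions, chosen only for illustration) and checks that A is a generator whose second smallest eigenvalue in modulus gives the decay rate:</p>

```python
import numpy as np

# Hypothetical QBD phase blocks: A0 = level-up rates, A2 = level-down rates,
# A1 = within-level rates with the compensating diagonal.
A0 = np.array([[0.5, 0.0],
               [0.0, 1.0]])
A2 = np.array([[1.0, 0.0],
               [0.0, 0.5]])
local = np.array([[0.0, 2.0],
                  [3.0, 0.0]])
A1 = local - np.diag((A0 + A2 + local).sum(axis=1))

A = A0 + A1 + A2
# A is a generator: rows sum to zero and off-diagonal entries are nonnegative.
rows_ok = np.allclose(A.sum(axis=1), 0.0)
offdiag_ok = bool((A - np.diag(np.diag(A)) >= -1e-12).all())
# The second smallest eigenvalue modulus governs the correlation decay.
w = np.sort(np.abs(np.linalg.eigvals(A).real))
decay_rate = float(w[1])
```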
      </sec>
      <sec id="sec-4-2">
        <title>Conclusion</title>
        <p>We have computed the autocorrelation function of a unit CTMC, and conditions for integrability (more generally, finiteness of the Lp-norm) were established. More generally, the function space structure of an arbitrary finite state space CTMC was explored. Interesting inferences related to point processes (in a superposition point process) were made based on their relationship to finite state space Markov chains.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. V. Balakrishnan, Stochastic Processes, in: V. Balakrishnan (Ed.), Mathematical Physics: Applications and Problems, Springer International Publishing, Cham, 2020, pp. 461-493. doi:10.1007/978-3-030-39680-0_21.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. I. Bena, Dichotomous Markov noise: Exact results for out-of-equilibrium systems, International Journal of Modern Physics B 20 (20) (2006) 2825-2888. doi:10.1142/S0217979206034881.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. B. McMillan, History of a problem, SIAM Journal of Applied Mathematics 3 (1955) 114-128.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. L. Shepp, Covariance of unit processes, in: Working Conference on Stochastic Processes, Santa Barbara, CA, 1967, pp. 205-218.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. E. Masry, On covariance functions of unit processes, SIAM Journal of Applied Mathematics 23 (1972) 28-33.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. L. Debowski, On processes with hyperbolically decaying autocorrelations, Journal of Time Series Analysis 32 (2011) 580-584.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. S. Degerine, S. Lambert-Lacroix, Characterization of the partial autocorrelation function of nonstationary time series, Journal of Multivariate Analysis 87 (2003) 46-59.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. A. Inoue, AR and MA representation of partial autocorrelation functions, with applications, Probability Theory and Related Fields 140 (2008) 523-551.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Y. Eun, S. Jin, Y. Hong, H. Song, Frequency hopping sequences with optimal partial autocorrelation properties, IEEE Transactions on Information Theory 50 (2004) 2438-2442.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. O. Haggstrom, M. van Lieshout, J. Møller, Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes, Bernoulli 5 (1999) 641-658.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. W. Huang, On the characterization of point processes with the exchangeable and Markov properties, Sankhya: The Indian Journal of Statistics, Series A (1990) 16-27.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. B. Ivanoff, E. Merzbach, M. Plante, A compensator characterization of point processes on topological lattices, Electronic Journal of Probability 12 (2007) 47-74.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. W. Whitt, The efficiency of one long run versus independent replications in steady-state simulation, Management Science 37 (1991) 645-666.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. A. Klemm, C. Lindemann, M. Lohmann, Modeling IP traffic using the batch Markovian arrival process, Performance Evaluation 54 (2003) 149-173.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. M. Bladt, B. F. Nielsen, Matrix-Exponential Distributions in Applied Probability, Vol. 81 of Probability Theory and Stochastic Modelling, Springer US, Boston, MA, 2017. doi:10.1007/978-1-4939-7049-0.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. F. Gantmacher, The Theory of Matrices, AMS Chelsea Publishing: Reprinted by American Mathematical Society, 2000.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. N. Liu, W. J. Stewart, Markov Chains and Spectral Clustering, in: K. A. Hummel, H. Hlavacs, W. Gansterer (Eds.), Performance Evaluation of Computer and Communication Systems: Milestones and Future Challenges, Vol. 6821 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2011, pp. 87-98. doi:10.1007/978-3-642-25575-5_8.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>18. L. Cerda-Alabern, Closed Form Transient Solution of Continuous Time Markov Chains Through Uniformization, in: Proceedings of the 7th International Conference on Performance Evaluation Methodologies and Tools, ICST, Torino, Italy, 2014. doi:10.4108/icst.valuetools.2013.254376.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1975.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. B. Sriram, W. Whitt, Characterizing superposition arrival processes in packet multiplexers for voice and data, IEEE Journal on Selected Areas in Communications 4 (1986) 833-846.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. H. Heffes, D. Lucantoni, A Markov modulated characterization of packetized voice and data traffic and related statistical multiplexer performance, IEEE Journal on Selected Areas in Communications 4 (1986) 856-868.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. M. Neuts, A versatile Markovian point process, Journal of Applied Probability 16 (1979) 764-779.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>