=Paper=
{{Paper
|id=Vol-2792/short2
|storemode=property
|title=Autocorrelation Function Characterization of Continuous Time Markov Chains (short paper)
|pdfUrl=https://ceur-ws.org/Vol-2792/short2.pdf
|volume=Vol-2792
|authors=Garimella Rama Murthy,Douglas G. Down,Alexander Rumyantsev
}}
==Autocorrelation Function Characterization of Continuous Time Markov Chains (short paper)==
Autocorrelation Function Characterization of Continuous Time Markov Chains
G. Rama Murthy1, D.G. Down2, and A. Rumyantsev3,4
1 Department of Computer Science and Engineering, Ecole Centrale School of Engineering, Mahindra University, Bahadurpally, Hyderabad, India rama.murthy@mechyd.ac.in
2 Department of Computing and Software, McMaster University, Hamilton, Canada downd@mcmaster.ca
3 Institute of Applied Mathematical Research, Karelian Research Centre of the Russian Academy of Sciences, Petrozavodsk, Russia
4 Petrozavodsk State University, Petrozavodsk, Russia ar0@krc.karelia.ru
Abstract. We study certain properties of the function space of autocorrelation functions of unit, as well as finite state space, Continuous Time Markov Chains (CTMCs). It is shown that under particular conditions, the Lp norm of the autocorrelation function of arbitrary finite state space CTMCs is infinite. Several interesting inferences are made for point processes associated with CTMCs.

Keywords: Unit Continuous Time Markov Chains · Autocorrelation Function · Integrability Conditions
1 Introduction

Many natural and artificial phenomena are endowed with non-deterministic dynamic behavior. Stochastic processes are utilized to model such dynamic phenomena. There are a number of applications, e.g. in physics [1,2], where the stochastic model can be in only one of two states, modeled by the so-called dichotomous stochastic process (e.g., dichotomous Markov noise). In particular, the notion of a unit random process {X(t), t > 0}, i.e., a random process whose state space consists of two values, {−1, 1}, arises naturally in many applications, e.g. bit transmission in communication systems, or detection theory. In the latter case, some random process {Y(t), t > 0} is provided as input to a threshold detector, where the output X(t) is the sign of Y(t), i.e. X(t) = sign(Y(t)). Thus {X(t), t > 0} is a unit process and can be shown to be Markovian under some conditions on Y(t). More generally, quantization of a general random process
The first author is supported by internal funding for research and development from Mahindra École Centrale School of Engineering, Mahindra University, Hyderabad (for the year 2018-19). The second author is supported by the Natural Sciences and Engineering Research Council of Canada under the Discovery Grant program. The third author is partially supported by RFBR, projects 19-57-45022, 19-07-00303.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
leads to a finite-state random process, which is often Markovian, so the study of
more general finite-state processes is also of interest.
The characterization of the autocorrelation function,

R(t, t + τ) = E[X(t)X(t + τ)], τ, t > 0,

of a unit process {X(t), t > 0}, is considered an important problem [3].
interesting properties of such autocorrelation functions are studied in [4,5].
Wide sense stationary (or even strictly stationary) random processes naturally arise as stochastic models in a variety of applications. They also arise in
time series models (AR, ARMA processes) of natural and artificial phenomena.
The autocorrelation function of a wide sense stationary process does not depend
on t, that is,
R(τ ) := R(t, t + τ ) = E[X(0)X(τ )].
In many interesting models, the autocorrelation function, R(τ ) is integrable and
hence the power spectral density (the Fourier transform of R(τ )) exists.
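As a quick numerical illustration of this point (a sketch, not part of the paper's derivation): take a hypothetical exponentially decaying autocorrelation R(τ) = e^{−2aτ}, which is integrable; its power spectral density is the Lorentzian 4a/(4a² + ω²), and a direct numerical Fourier transform recovers it.

```python
import numpy as np

# hypothetical integrable autocorrelation R(tau) = exp(-2*a*|tau|);
# its power spectral density is S(w) = 4a / (4a^2 + w^2), a Lorentzian
a = 1.0
tau = np.linspace(0.0, 40.0, 400001)   # dense grid; the tail beyond 40 is negligible
R = np.exp(-2.0 * a * tau)

def psd(w):
    # S(w) = 2 * int_0^inf R(tau) cos(w*tau) dtau, by symmetry of R
    integrand = R * np.cos(w * tau)
    dt = tau[1] - tau[0]
    # trapezoidal rule on the uniform grid
    return 2.0 * dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

w = 1.5
numeric = psd(w)
analytic = 4.0 * a / (4.0 * a ** 2 + w ** 2)
```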
With this in mind, we are motivated to study the function space of finite
state Markov processes. Masry [5] has studied the functional space properties of
stationary unit random processes. However, the study of the integrability of R(τ )
was not undertaken. To the best of our knowledge, the Lp -norm of R(τ ) for finite
state Continuous Time Markov Chains (CTMCs) has not been investigated. In
this paper, we determine conditions under which the autocorrelation function is
not integrable, and by extension conditions under which the Lp -norm of R(τ )
approaches zero as p → ∞.
To put our work into context, there is related work in three directions: the characterization of autocorrelation functions of random processes, the characterization of point processes, and the use of autocorrelation properties in the analysis of stochastic models, in particular the analysis of queues. For the characterization of autocorrelation functions, we point the reader to work in time series analysis [6,7,8] and in telecommunications [9]. Point process characterization has been studied in [10,11,12]. Properties of autocorrelation functions have been employed to determine appropriate simulation strategies for queues [13] and are a feature of modelling arrival traffic to queues using Markovian Arrival Processes, see [14], for example.
This paper is organized as follows. In Section 2, the autocorrelation function
of a unit CTMC is computed and the structure of the function space is studied. In
Section 3, the autocorrelation function of a finite state space CTMC is computed
and the finiteness of its Lp norm is discussed. It is shown that under some
conditions, the autocorrelation function is not integrable. In Section 4, various
interesting inferences are made for point processes. Finally, the paper concludes
in Section 5.
2 Auto-Correlation Function of Homogeneous Unit CTMC: Integrability

In this section we consider a homogeneous Continuous Time Markov Chain (CTMC) {X(t), t > 0} with the state space J = {−1, 1} and generator matrix

Q = [ −α   α ]
    [  β  −β ],   α, β > 0.
We assume that the resulting stochastic process is wide sense stationary (and
not necessarily strictly stationary). Although results on R(τ ) can be found in
the literature [1], it is instructive to show the evaluation of R(τ ) with the help
of spectral representation.
Since {X(t), t > 0} is a unit random process,

R(τ) = P{X(0) = X(τ)} − P{X(0) ≠ X(τ)} = 2P{X(0) = X(τ)} − 1.   (1)
It remains to compute P{X(0) = X(τ)}. Note that

P{X(τ) = X(0)} = Σ_{j∈J} P(X(τ) = j | X(0) = j) P(X(0) = j).   (2)

The conditional probabilities in (2) are computed using the transient probability distribution π̄(τ) of X(τ) at time τ > 0, whereas the values P(X(0) = j), j ∈ J, are the components of the initial probability distribution, π̄(0). For a homogeneous CTMC with finite state space,

π̄(τ) = π̄(0) e^{Qτ},   (3)

where e^{Qτ} = Σ_{i=0}^{∞} Q^i τ^i / i! is the matrix exponential (see e.g. [15]). Using the Jordan canonical form [16], e^{Qτ} is computed below.
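Before specializing to the two-state case, the series definition of the matrix exponential and the transient formula (3) can be checked numerically. The following sketch is illustrative only (the rates and initial distribution are arbitrary choices, not from the paper):

```python
import numpy as np

def expm_series(Q, t, terms=60):
    # e^{Qt} = sum_{i>=0} (Qt)^i / i!, truncated after `terms` terms
    A = Q * t
    term = np.eye(Q.shape[0])
    total = np.eye(Q.shape[0])
    for i in range(1, terms):
        term = term @ A / i   # maintains term = (Qt)^i / i!
        total = total + term
    return total

alpha, beta = 2.0, 3.0
Q = np.array([[-alpha, alpha], [beta, -beta]])   # generator of the unit CTMC
pi0 = np.array([0.3, 0.7])                       # arbitrary initial distribution

tau = 0.5
pi_tau = pi0 @ expm_series(Q, tau)               # transient distribution, eq. (3)
```

Each row of `expm_series(Q, t)` is a probability distribution, so `pi_tau` again sums to one.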
Since Q is a rank one matrix, the eigenvalues are λ1 = −(α + β), λ2 = 0.
Denote the corresponding right eigenvectors by {ḡ1 , ḡ2 } (column vectors) as the
solutions of
Qḡi = λi ḡi , i = 1, 2. (4)
Let the left eigenvectors (row vectors) {f¯1 , f¯2 } be the solutions of
f¯i Q = λi f¯i , i = 1, 2. (5)
Compose columnwise the matrix G of right eigenvectors, and let F contain the left eigenvectors as rows. Then, since Q is diagonalizable,

Q = G [ −(α + β)   0 ]
      [     0      0 ] F,

where also we have GF = I. Hence it follows that

e^{Qτ} = e^{−(α+β)τ} ḡ1 f̄1 + ḡ2 f̄2.   (6)
Since ḡ1 is unique up to a multiplicative constant, it follows from (4) that

ḡ1 = [1, −β/α]^T.

At the same time, since λ2 = 0, (4) is the condition for Q to be a generator matrix, that is,

ḡ2 = 1,

where 1 is the (column) vector of ones. The left eigenvectors are obtained after some algebra from FG = I as follows:

f̄1 = [α/(α+β), −α/(α+β)],   f̄2 = [β/(α+β), α/(α+β)].
It is interesting to note that since λ2 = 0, it follows from (5) that the second left eigenvector, f̄2, is indeed the steady-state probability vector π̄ = π̄(∞) of the process {X(t), t > 0}, that is, the stochastic vector solving π̄Q = 0, i.e.

π̄ = f̄2 = [β/(α+β), α/(α+β)].   (7)
Thus, it follows from (6) that

e^{Qτ} = Π − Q e^{−(α+β)τ}/(α + β),   (8)

where

Π = 1π̄   (9)
is the matrix that contains the steady-state vector π̄ in its rows. Interestingly, ergodicity is observed from (8) in the limit,

Π = lim_{τ→∞} e^{Qτ}.
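The closed form (8) and the ergodic limit above can be verified against a generic eigendecomposition. This is a numerical sketch with arbitrary rates, assuming numpy is available:

```python
import numpy as np

alpha, beta = 1.0, 4.0
Q = np.array([[-alpha, alpha], [beta, -beta]])

lam, G = np.linalg.eig(Q)   # columns of G are right eigenvectors
F = np.linalg.inv(G)        # rows of F are the matching left eigenvectors (GF = I)

def expm_Q(t):
    # spectral form behind eq. (6): e^{Qt} = G diag(e^{lam t}) F
    return (G * np.exp(lam * t)) @ F

pi = np.array([beta, alpha]) / (alpha + beta)   # steady-state vector, eq. (7)
Pi = np.outer(np.ones(2), pi)                   # Pi = 1 pi, eq. (9)

t = 0.7
closed_form = Pi - Q * np.exp(-(alpha + beta) * t) / (alpha + beta)   # eq. (8)
```

For large t the exponential term vanishes and `expm_Q(t)` approaches `Pi`, which is the ergodicity statement above.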
Noting from (9) that π̄(0)Π = π̄, from (3) and (8) we have

π̄(τ) = π̄ − π̄(0)Q e^{−(α+β)τ}/(α + β).   (10)
Equation (10) demonstrates the exponential speed of convergence of π̄(τ) to equilibrium π̄, given in (7), as τ → ∞. Finally, using (8) in (2), we obtain

P{X(τ) = X(0)} = π̄(0)π̄^T − π̄(0)q̄ e^{−(α+β)τ}/(α + β),

where q̄ is the (column) vector of diagonal elements of Q (which are negative). Recalling (1), and denoting c = 2π̄(0)π̄^T − 1 and d = 2π̄(0)q̄/(α + β), an explicit expression for R(τ) is in turn

R(τ) = c − d e^{−(α+β)τ}.   (11)

It remains to note that since q̄ is negative componentwise, d is also negative, and thus R(τ) > c, while R(0) = 1. It is rather straightforward to check that in fact
c = 2π̄(0)π̄^T − 1 = EX(0)·EX,   (12)

where EX = π̄[−1, 1]^T is the steady-state mean value of the process. Thus, the ergodicity result follows from (11):

lim_{τ→∞} R(τ) = EX(0)·EX.   (13)
We now turn to some interesting special cases.
Equilibrium case: assume π̄(0) = π̄. In this case π̄(τ) = π̄, and the coefficients in (11) are

c = 2π̄π̄^T − 1 = ((α − β)/(α + β))²,   d = −4αβ/(α + β)²,

which allows us to write

R(τ) = ((α − β)/(α + β))² + (4αβ/(α + β)²) e^{−(α+β)τ}.   (14)

It can be seen from (14) that R(0) = 1 and then the autocorrelation is monotonically non-increasing until finally R(∞) = c. As expected from (12), in this case c = (EX(0))².
Further, in the symmetric case α = β, from (14) we have

R(τ) = e^{−2ατ}.   (15)
Uniform initial probability: Let now π̄(0) = [1/2, 1/2]. In such a case, in (11), the constants are c = 0 and d = −1, thus

R(τ) = e^{−(α+β)τ},

which again gives (15) if α = β.
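These special cases are easy to sanity-check numerically. The sketch below (arbitrary rates, not from the paper) evaluates R(τ) directly from the joint probabilities, using the closed form (8) for the transition probabilities, and compares it against (14) and the uniform-initial-distribution formula:

```python
import numpy as np

alpha, beta = 2.0, 5.0
Q = np.array([[-alpha, alpha], [beta, -beta]])
x = np.array([-1.0, 1.0])                        # state values of the unit chain
pi = np.array([beta, alpha]) / (alpha + beta)    # equilibrium distribution, eq. (7)

def R(t, pi0):
    # R(t) = sum_{i,j} x_i x_j P(X(0)=x_i) P(X(t)=x_j | X(0)=x_i)
    eQt = np.outer(np.ones(2), pi) - Q * np.exp(-(alpha + beta) * t) / (alpha + beta)
    return float(pi0 @ (x * (eQt @ x)))

t = 0.4
r_equilibrium = R(t, pi)                         # should match eq. (14)
r_uniform = R(t, np.array([0.5, 0.5]))           # should equal exp(-(alpha+beta)*t)
```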
We conclude the section with a lemma that presents one possible characterization of the function space of autocorrelation functions of a unit CTMC.

Lemma 1. Consider a unit CTMC {X(t), t > 0} with generator matrix Q and initial probability vector π̄(0) ≠ [1/2, 1/2]. If α ≠ β, then the autocorrelation function, R(τ), is not in Lp(R) for any p > 1 (the Lp norm of the autocorrelation function is infinite). However, as p tends to ∞, the Lp-norm of the autocorrelation function, R(τ), approaches a finite constant. Further, the L∞-norm is equal to one.
Proof. If α ≠ β and π̄(0) ≠ [1/2, 1/2], then it follows from (12) that |c| ∈ (0, 1). Since R(τ) > c and R(τ) → c as τ → ∞, the integral ∫_0^∞ |R(τ)|^p dτ is infinite, that is, R(τ) is not in Lp(R) for any p > 1. However, since |c| < 1, it follows that |c|^p → 0 as p → ∞. Hence the Lp-norm of the autocorrelation function, R(τ), approaches a finite constant. ♦
In the following discussion, we generalize the above results to CTMCs with
arbitrary state space. It is shown that the existence of an equilibrium probability
distribution ensures that the expression for the autocorrelation function has a
constant part that, under suitable conditions, is not zero.
3 Auto-Correlation Function of Homogeneous Finite State Space CTMC

We now prove that for any finite state space CTMC, the autocorrelation function is not integrable and in fact the Lp-norm of the autocorrelation function, R(τ), is infinite for any p > 1. Let the state space of the CTMC {X(t), t > 0} be J = {C1, ..., CN}. Keeping the notation from Section 2, denote by C the diagonal matrix with the vector [C1, ..., CN] as the main diagonal. Then, similarly to (2),

R(τ) = Σ_{i=1}^{N} Σ_{j=1}^{N} C_i C_j P{X(0) = C_i, X(τ) = C_j} = π̄(0) C e^{Qτ} C 1.   (16)
But we have that

e^{Qτ} = Σ_{k=1}^{N} e^{λ_k τ} E_k,

where E_k = ḡ_k f̄_k is the residue matrix, with ḡ_k being the right eigenvector and f̄_k being the left eigenvector of Q corresponding to the eigenvalue λ_k, defined similarly to (4) and (5), respectively. Let |λ1| ≥ |λ2| ≥ · · · > |λN| = 0 (the latter equality holds since Q is the generator matrix). Then, by definition of the eigenvectors, it follows that f̄N = π̄ while ḡN = 1, where π̄ is the steady-state probability vector corresponding to Q. Thus, it follows from (16) that

R(τ) = π̄(0) C [ Σ_{k=1}^{N−1} e^{λ_k τ} E_k ] C 1 + π̄(0) C 1 π̄ C 1,   (17)

where, recall, E_N = ḡ_N f̄_N = 1π̄. Finally, noting that π̄(0) C 1 = EX(0) and π̄ C 1 = EX, where X is the steady-state random variable distributed as π̄, we rewrite (17) as

R(τ) = f(τ) + EX(0)·EX,   (18)
which is consistent with (12). Note that c = EX(0)EX is zero if and only if either
EX(0) or EX is zero (or both). Note also that if π̄(0) = π̄, then c = (EX)2 .
Finally, we see from (18) that

lim_{τ→∞} R(τ) = EX(0)·EX,
which corresponds to (13). This result agrees with the fact that asymptotically
the initial random variable X(0) and the equilibrium random variable X(∞)
are independent. In fact, one may note that f (τ ) is indeed the autocovariance
function of {X(t), t > 0}.
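Equation (18) can be illustrated numerically for a chain with more than two states. The generator, state values, and initial distribution below are hypothetical (any irreducible finite generator works), and the matrix exponential is built from the eigendecomposition, mirroring the residue-matrix sum above:

```python
import numpy as np

# hypothetical 3-state CTMC with state values C1, C2, C3 = 0, 1, 2
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
C = np.diag([0.0, 1.0, 2.0])
pi0 = np.array([0.2, 0.5, 0.3])
one = np.ones(3)

lam, G = np.linalg.eig(Q)        # right eigenvectors in columns of G
F = np.linalg.inv(G)             # left eigenvectors in rows of F

def R(t):
    # R(t) = pi(0) C e^{Qt} C 1, eq. (16), with e^{Qt} = sum_k e^{lam_k t} E_k
    eQt = np.real((G * np.exp(lam * t)) @ F)
    return float(pi0 @ C @ eQt @ C @ one)

# steady-state vector pi: the left eigenvector of Q for eigenvalue 0, normalized
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()
limit = float(pi0 @ C @ one) * float(pi @ C @ one)   # EX(0) * EX, the constant in (18)
```

Here `R(0)` equals the second moment of the initial state value, and `R(t)` approaches `limit` as t grows, in line with (18).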
Now, f(τ) is a sum of decaying exponentials. It can be easily verified that f(τ) is integrable. More generally, f(τ) corresponds to a function which is in Lp(R) for p > 1. But, when c is non-zero, then R(τ) is not in Lp(R) for any p > 1: if c ≠ 0, then ∫|R(τ)|^p dτ is infinite for every p > 1. Further, if |c| < 1, then ∫(R(τ))^p dτ approaches zero as p → ∞.
4 Discussion
From (15), we see that R(τ ) can be normalized to correspond with a Laplace
density. In turn, the fact that this is a Laplace density has the interpretation
that it corresponds to the density of the difference between two independent
random variables with identical exponential distributions. Thus, from the point
of view of the autocorrelation function, a unit CTMC corresponds to a Laplace
density.
Now we discuss the speed of convergence in (18). It follows from (18) that the
speed is governed by the second smallest eigenvalue λN −1 , which is consistent
with more general results on Markov chains, e.g. [17,18].
It is well known that the interarrival times of a Poisson process are exponentially distributed random variables. Also, the sojourn times in every state of a finite state CTMC are exponentially distributed random variables. This observation has been explored in [19], for example, to establish that when successive visits to a state of a CTMC are stitched together, a Poisson process naturally results. Hence, an arbitrary finite state CTMC can be viewed as a superposition of point processes. From a practical viewpoint, the superposition of point processes naturally arises in applications, such as packet streams in packet multiplexers. Such packet streams have been modelled in [20], for example. Several versatile point processes have also been studied in [21,22], amongst others. Such Markovian point processes are actively utilized in queueing theoretic applications.
One potential application of these results is characterizing how phase transitions at high levels of a Quasi-Birth-Death (QBD) process are correlated, in particular how the autocorrelation of these phase transitions decays. Consider a QBD process {[Z(t), X(t)], t > 0}, which is a two-dimensional Markov process, skip-free (ladder type) in the first component, governed by the generator matrix

Q = [ A_{0,0}  A_{0,1}  0        0        ...
      A_{1,0}  A^{(1)}  A^{(0)}  0        ...
      0        A^{(2)}  A^{(1)}  A^{(0)}  ...
      0        0        A^{(2)}  A^{(1)}  ...
      ...      ...      ...      ...      ... ].

Let the state space of the second component X(t) be the finite set J. Then, at the high levels, the (projected) transitions of the component X(t) are governed by the matrix A = A^{(0)} + A^{(1)} + A^{(2)}, which is itself a generator matrix. Hence, considering a Markov process {X(t), t > 0} governed by the matrix A, we may observe the exponential speed of decay of the autocorrelation R(τ), which is defined by the second smallest (in absolute value) eigenvalue of A.
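As a toy illustration of this observation (the phase blocks below are hypothetical, chosen only so that A = A^{(0)} + A^{(1)} + A^{(2)} is a valid generator), one can assemble A and read off the decay rate:

```python
import numpy as np

# hypothetical QBD phase blocks for 2 phases: A0 (level up), A2 (level down),
# A1 (within-level); chosen so that rows of A0 + A1 + A2 sum to zero
A0 = np.array([[1.0, 0.0], [0.0, 2.0]])
A2 = np.array([[2.0, 0.0], [0.0, 1.0]])
A1 = np.array([[-4.0, 1.0], [2.0, -5.0]])

A = A0 + A1 + A2                         # generator of the projected phase process
assert np.allclose(A.sum(axis=1), 0.0)   # generator rows sum to zero

lam = np.linalg.eigvals(A)
# the phase autocorrelation decays like exp(-decay * tau), where decay is the
# smallest nonzero |eigenvalue| of A (the "second smallest" in absolute value)
decay = sorted(abs(l) for l in lam)[1]
```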
5 Conclusion

We have computed the autocorrelation function of a unit CTMC, and conditions for integrability (more generally, finiteness of the Lp-norm) were established. More generally, the function space structure of an arbitrary finite state space CTMC was explored. Interesting inferences related to point processes (in a superposition point process) were made based on their relationship to finite state space Markov chains.
References
1. V. Balakrishnan, Stochastic Processes, in: V. Balakrishnan (Ed.), Mathematical Physics: Applications and Problems, Springer International Publishing, Cham, 2020, pp. 461–493. doi:10.1007/978-3-030-39680-0_21.
2. I. Bena, Dichotomous Markov noise: Exact results for out-of-equilibrium systems,
International Journal of Modern Physics B 20 (20) (2006) 2825–2888, publisher:
World Scientific Publishing Co. doi:10.1142/S0217979206034881.
3. B. McMillan, History of a problem, SIAM Journal of Applied Mathematics 3 (1955)
114–128.
4. L. Shepp, Covariance of unit processes, in: Working Conference on Stochastic Processes, Santa Barbara, CA, 1967, pp. 205–218.
5. E. Masry, On covariance functions of unit processes, SIAM Journal of Applied
Mathematics 23 (1972) 28–33.
6. L. Dȩbowski, On processes with hyperbolically decaying autocorrelations, Journal
of Time Series Analysis 32 (2011) 580–584.
7. S. Dégerine, S. Lambert-Lacroix, Characterization of the partial autocorrelation
function of nonstationary time series, Journal of Multivariate Analysis 87 (2003)
46–59.
8. A. Inoue, AR and MA representation of partial autocorrelation functions, with
applications, Probability Theory and Related Fields 140 (2008) 523–551.
9. Y. Eun, S. Jin, Y. Hong, H. Song, Frequency hopping sequences with optimal
partial autocorrelation properties, IEEE Transactions on Information Theory 50
(2004) 2438–2442.
10. O. Häggström, M. Van Lieshout, J. Møller, Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes, Bernoulli 5 (1999) 641–658.
11. W. Huang, On the characterization of point processes with the exchangeable and Markov properties, Sankhyā: The Indian Journal of Statistics, Series A (1990) 16–27.
12. B. Ivanoff, E. Merzbach, M. Plante, A compensator characterization of point processes on topological lattices, Electronic Journal of Probability 12 (2007) 47–74.
13. W. Whitt, The efficiency of one long run versus independent replications in steady-state simulation, Management Science 37 (1991) 645–666.
14. A. Klemm, C. Lindemann, M. Lohmann, Modeling IP traffic using the batch Markovian arrival process, Performance Evaluation 54 (2003) 149–173.
15. M. Bladt, B. F. Nielsen, Matrix-Exponential Distributions in Applied Probability,
Vol. 81 of Probability Theory and Stochastic Modelling, Springer US, Boston, MA,
2017. doi:10.1007/978-1-4939-7049-0.
URL http://link.springer.com/10.1007/978-1-4939-7049-0
16. F. Gantmacher, The Theory of Matrices, AMS Chelsea Publishing (reprinted by American Mathematical Society), 2000.
17. N. Liu, W. J. Stewart, Markov Chains and Spectral Clustering, in: D. Hutchison, T. Kanade, J. Kittler, J. M. Kleinberg, F. Mattern, J. C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M. Y. Vardi, G. Weikum, K. A. Hummel, H. Hlavacs, W. Gansterer (Eds.), Performance Evaluation of Computer and Communication Systems. Milestones and Future Challenges, Vol. 6821, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 87–98, series title: Lecture Notes in Computer Science. doi:10.1007/978-3-642-25575-5_8.
18. L. Cerdà-Alabern, Closed Form Transient Solution of Continuous Time Markov Chains Through Uniformization, in: Proceedings of the 7th International Conference on Performance Evaluation Methodologies and Tools, ICST, Torino, Italy, 2014. doi:10.4108/icst.valuetools.2013.254376.
19. E. Çinlar, Introduction to Stochastic Processes, Prentice-Hall, Inc., Englewood
Cliffs, New Jersey, 1975.
20. B. Sriram, W. Whitt, Characterizing superposition arrival processes in packet multiplexers for voice and data, IEEE Journal on Selected Areas in Communications 4 (1986) 833–846.
21. H. Heffes, D. Lucantoni, A Markov modulated characterization of packetized voice
and data traffic and related statistical multiplexer performance, IEEE Journal on
Selected Areas in Communications 4 (1986) 856–868.
22. M. Neuts, A versatile Markovian point process, Journal of Applied Probability 16
(1979) 764–779.