<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Randomness: Old and New Ideas</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmitriy Klyushin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrs'ka str. 64/13, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A survey of classical theories of randomness is provided, and an alternative model is proposed. Analysis of the classical models demonstrates that, despite their mathematical rigor, they are hardly applicable in practice. The new model is based on lattice theory, has a strong mathematical basis, and is easily used in practice.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Frequency approach (von Mises)</title>
      <p>
        The model proposed by von Mises uses the concept of an infinite binary sequence x1, x2, ...
(a collective) that meets the following conditions: 1) if hn is the relative frequency of ones among the first n
elements, then the limit of the sequence hn exists; 2) the same limit is obtained in any subsequence
selected by an admissible rule. Wald limited the possible rules for selecting subsequences to an arbitrary
fixed countable set of functions and showed that such collectives exist. Church clarified that this set
must be a set of recursive functions. Thus, the theory received due mathematical rigor. Such sequences
are called Mises–Wald–Church random sequences. In 1939, a new counterexample was put forward
against this theory. The construction of Ville [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] demonstrated the existence of Mises–Wald–Church random sequences in which the limit of the
sequence of relative frequencies is equal to 1/2, but hn ≥ 1/2
      </p>
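      <p>The frequency-stability requirement can be illustrated numerically. The sketch below is an informal simulation, not part of von Mises' formal theory: it checks that the relative frequency of ones in a pseudorandom sequence is preserved under a simple admissible selection rule, one that decides whether to select the next bit using only the bits already seen.</p>
      <preformat><![CDATA[
```python
import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(200_000)]

def select_after_one(seq):
    # Admissible rule: select a bit whenever the preceding bit was a one.
    # The decision uses only bits already seen, as von Mises requires.
    return [b for prev, b in zip(seq, seq[1:]) if prev == 1]

sub = select_after_one(bits)
print(round(sum(bits) / len(bits), 2))  # frequency in the full sequence
print(round(sum(sub) / len(sub), 2))    # frequency in the selected subsequence
```
]]></preformat>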
      <p>
        for all n. Van Lambalgen [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9–12</xref>
        ] carefully analyzed Ville's counterexample and other objections to von Mises' theory. In
particular, van Lambalgen distinguishes three main objections to von Mises' theory raised by
Frechet [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and Ville [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: 1) von Mises' theory is weaker than Kolmogorov's theory [
        <xref ref-type="bibr" rid="ref14">14, 15</xref>
        ] because the law of the iterated logarithm does not follow from it; 2) collectives do not always
satisfy all asymptotic properties arising in the methods of measure theory (therefore they cannot serve
as satisfactory models for real phenomena); and 3) the von Mises formalization of game strategies using
the rule of admissible choice of elements is flawed because it leaves open the possibility of winning an
unlimited sum.
      </p>
      <p>
        The answer to the first objection, put forward by Ville, comes down to the fact that von Mises'
theory is purely frequency-based and does not provide the operation of passing to the limit as in
measure theory [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In other words, the von Mises model and the Kolmogorov model are not equivalent. But
the fact that they are different cannot be considered a disadvantage of either model [16].
      </p>
      <p>The second objection, belonging to Frechet, can be divided into two parts: 1) collectives cannot be
satisfactory models for random phenomena, because the one-sided convergence that allows refuting
Ville's counterargument is not observed in practice; 2) collectives do not satisfy the asymptotic laws
arising from measure theory. Van Lambalgen refuted these objections, pointing out that in
practice there are only finite sequences, that collectives were invented precisely to describe their
properties, and that von Mises did not set himself the goal of describing infinite random phenomena. Note
that this statement is contradictory, because von Mises' axioms include the limit of an infinite
sequence of relative frequencies. The second part of the counterargument is somewhat reminiscent of
Ville's first counterargument and is refuted similarly: the fact that collectives do not satisfy the
asymptotic laws arising from measure theory indicates only a fundamental difference between
these models, not a shortcoming of either.</p>
      <p>The third objection refers to the existence of a strategy invented by Ville which allows the player
to win an infinite amount of money in an endless continuation of a coin-tossing game. In other
words, there is a collective describing a coin game in which a player wins an unlimited amount,
although by definition collectives deny this possibility. However, as van Lambalgen notes, the notions
of fair play according to Ville and according to von Mises are different, so this counterargument is not valid.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Randomness as computational complexity (Kolmogorov)</title>
      <p>Concluding that the von Mises–Wald–Church theory was too fuzzy, Kolmogorov improved it by
proposing a new class of algorithms for selecting admissible subsequences.</p>
      <p>The Kolmogorov complexity of a sequence x = (x1, x2, ..., xN), or its algorithmic entropy, is the length
K(x) of its shortest description constructed using a Turing machine. If there is some additional
information y, then we can consider the conditional Kolmogorov complexity K(x | y). A sequence is
called Bernoullian by Kolmogorov if its complexity is close to log2 C(N, k), i.e. K(x | N, k) ≈ log2 C(N, k).
Kolmogorov also introduced the notion of a chaotic sequence, which satisfies the condition
K(x | N, k) ≥ log2 C(N, k) − m. If a set A contains a finite number of elements x1, x2, ..., xN, then the
complexity of each of its elements is less than or equal to log2 N. An element x of A is random if its
complexity is close to the maximum, i.e. K(x | A) ≈ log2 N. The difference log2 N − K(x | A) is the
randomness defect of the element x.</p>
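      <p>The bound log2 C(N, k) can be computed directly. The sketch below, with assumed toy values of N and k, shows that for a balanced binary sequence this bound is close to, but below, the trivial N-bit description length.</p>
      <preformat><![CDATA[
```python
from math import comb, log2

# log2 C(N, k) bits suffice to specify which of the C(N, k) binary strings
# of length N with exactly k ones a given string is; a Bernoullian sequence
# has complexity close to this bound.
N, k = 100, 50
bound = log2(comb(N, k))
print(round(bound, 1))   # slightly below the trivial bound of N = 100 bits
```
]]></preformat>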
      <p>In the class of sequences random according to Kolmogorov, frequency stability is observed in all
admissible Kolmogorov subsequences. Thus, the class of sequences random according to Kolmogorov
is a special case of the class of Mises–Wald–Church random sequences. But, according to
Uspensky [17], von Mises' theory today remains an incomplete rendering of the intuitive notion of
randomness. Its main feature is the insistence on the frequency stability of random sequences. The
contribution of Kolmogorov to the development of the theory of randomness is covered quite fully in the
work of Vovk and Shafer [18].</p>
      <p>Vovk and Shafer believe that the Kolmogorov theory of randomness is based on the von Mises principle
of frequency stability and on the Cournot principle [19]. The von Mises principle states
that the relative frequency in an infinite sequence of outcomes of random trials has a limit, and the
Cournot principle states that a very unlikely event will not occur in a single trial. Accordingly, the
works of Kolmogorov on this topic can be divided into two categories: those based on the principle of
von Mises (1963-1965), and those based on the principle of Cournot (1965-1987).</p>
      <p>In [15] Kolmogorov formulated two main shortcomings of von Mises' theory: 1) the frequency
approach appeals to the concept of limiting frequency, which can have no practical application, because
in real applications researchers deal with finite sequences; 2) the frequency approach cannot be
developed in a purely mathematical way.</p>
      <p>In 1965, Kolmogorov began to develop the theory of algorithmic randomness [20–22]. Within this
theory, he introduced the concept of a Bernoullian sequence: a binary sequence (x1, x2, ..., xN)
consisting of k ones and N − k zeros whose description requires at least log2 C(N, k) bits. Thus, the
problem of randomness was reduced to the choice of a certain way of describing sequences. The
main invention that ensured the success of the proposed theory was a universal method of description,
which generates descriptions that are shorter than, or only slightly longer than, the descriptions created by
alternative methods. Independently of Kolmogorov, analogous methods were invented by Solomonoff
[23–25] and Chaitin [26].</p>
      <p>Kolmogorov proposed to consider an infinite binary sequence as random if there is a constant c such
that for all n the entropy of the initial segment of length n exceeds n − c.</p>
      <p>Definition 2 (Kolmogorov). An infinite binary sequence x1, x2, ..., xn, ... is said to be random if
there is a constant c such that for an arbitrary natural number n the inequality K(x1, x2, ..., xn) ≥ n − c holds.</p>
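      <p>Kolmogorov complexity is uncomputable, but any lossless compressor yields an upper bound on it. The sketch below uses zlib purely as an illustration, not as a substitute for K: a periodic sequence compresses far below its length n, so no constant c can make the inequality above hold along it, while a pseudorandom sequence stays close to n.</p>
      <preformat><![CDATA[
```python
import random
import zlib

# A compressor gives an upper bound on Kolmogorov complexity: a highly
# regular string compresses far below its length, an irregular one does not.
regular = b"01" * 5_000                  # periodic string of length 10_000
random.seed(0)
irregular = bytes(random.getrandbits(8) for _ in range(10_000))

print(len(zlib.compress(regular)))       # far below 10_000
print(len(zlib.compress(irregular)))     # close to 10_000
```
]]></preformat>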
      <p>For a long time, the focus of Kolmogorov and his followers (Asarin [27, 28], Shen [29], Vyugin
[30, 31], etc.) was on finite sequences. Their research was aimed at the consistent removal from
consideration of statistical models based on the concept of probability, and their replacement by
models based on the concept of complexity.</p>
      <p>Vovk and Shafer [18] note the following characteristic features of the theory of complexity
proposed by Kolmogorov and his followers: 1) it considers only finite sequences and finite sets of
constructive objects; 2) it is based on the assumption that an event that has a very low probability will
not occur.</p>
      <p>Thus, in developing the theory of complexity, Kolmogorov abandoned the von Mises principle and
based the theory on the Cournot principle. It should be noted that complexity theory remains the subject of
intensive theoretical research. In 2007, Muchnik and Semenov [32] were awarded the prize named after
Kolmogorov for a series of works "On clarification of A.N. Kolmogorov's estimates relating to the
theory of randomness". In these works important results were obtained in the field of the combinatorial
theory of probabilities and the theory of frequency tests of randomness. Muchnik and Semenov
proved that Kolmogorov's lower estimate, which characterizes the maximum number of
admissible selection rules for which a generator of random numbers is guaranteed to exist, is
accurate in order of magnitude and even asymptotically accurate. Kolmogorov stated it back in 1963,
when work on complexity theory was just beginning.</p>
      <p>An original approach to estimating the complexity of finite sequences of zeros and ones was
proposed by Arnold [33]. The value of this work lies in the fact that it uses ideas from various fields:
computational mathematics, topology, graph theory, and algebra. Although a complete solution
of the problem is not obtained in that work, the combination of methods from different branches of
mathematics seems to be the most fruitful and promising approach.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Randomness as typicality (Martin-Löf)</title>
      <p>
        Attempts to extend the theory of Kolmogorov complexity to infinite sequences have encountered
the problem of oscillation of complexity. Consider a fixed finite binary sequence (x1, x2, ..., xn). Do the
inequalities K(x1, x2, ..., xm) ≤ K(x1, x2, ..., xn) or K(x1, x2, ..., xm) − m ≤ K(x1, x2, ..., xn) − n hold for all
infinite binary sequences x and m ≤ n? As stated in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the answers to both questions are negative.
As Vitanyi [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] points out, even for sequences of high complexity that satisfy the inequality
K(x1, x2, ..., xn) ≥ n − log2 n − 2 log2 log2 n for all n, the value (n − K(x1, x2, ..., xn)) / log2 n oscillates between
0 and 1.
      </p>
      <p>This problem was solved in 1966, when Martin-Löf [34] concluded that the randomness defect of
an element of a finite set can be considered as a universal statistical test, and extended it to infinite
sequences using constructive measure theory. In doing so, Martin-Löf assumed that a random object
is typical, i.e. belongs to the vast majority. Martin-Löf's definition is as follows.</p>
      <p>Definition 3 (Martin-Löf). An infinite binary sequence x1, x2, ..., xn, ... is said to be random with
respect to the uniform measure if there is a constant c such that for an arbitrary natural n
K(x1, x2, ..., xn) ≥ n − c.</p>
      <p>Obviously, a sequence that is random by Martin-Löf is random by von Mises. On the other hand,
there are random sequences that do not satisfy Martin-Löf's conditions. The uniform measure of the set of
sequences for which there is a constant c and an infinite number of indices n such that
K(x1, x2, ..., xn) ≥ n − c is equal to one. Therefore, the uniform measure of the set of random
sequences that do not satisfy the condition of the Martin-Löf definition is equal to zero.</p>
      <p>Independently of Martin-Löf and of each other, Schnorr [35] and Levin [36, 37] worked on the
problem of infinite binary random sequences. They showed that an infinite binary sequence is random
according to Martin-Löf if and only if the randomness defect of its initial segments is bounded,
i.e. KM(x1, x2, ..., xn) ≥ n − O(1), where KM(x1, x2, ..., xn) is the monotone entropy.</p>
      <p>It is obvious that a sequence random according to Martin-Löf is also random according to
Mises–Wald–Church. On the other hand, as Wald's construction demonstrates, there are Mises–Wald–
Church collectives in which the relative frequency of ones goes to 1/2 and
K(x1, x2, ..., xn) = O(f(n) log2 n) for any unbounded, non-decreasing, totally recursive function f. Such
sequences do not satisfy the Martin-Löf conditions.</p>
      <p>As we can see, the models described above are purely theoretical, and their application in practice
is associated with great difficulties. We offer an alternative approach that is both strictly
mathematically grounded and easily implemented in practical applications.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Alternative model (Petunin–Klyushin)</title>
      <p>Consider a trial T with two outcomes A and Ā. Introduce the indicator xk such that xk = 1 if in the
kth repetition of T we observe A, and xk = 0 otherwise. The sequence of bits x1, x2, ... is said to be a
series of results of T, and hn = (x1 + x2 + ... + xn)/n is the relative frequency of A under the n
repetitions of T.</p>
    </sec>
    <sec id="sec-6">
      <title>5.1. Basics of the alternative approach</title>
      <p>The key issue of this approach is that for a correct definition of randomness we must consider an
infinite sequence of the results of series X1, X2, ... . For convenience, let us arrange these series in an
infinite characteristic matrix Π(T) = ||xij||, i, j = 1, 2, ... . Denote the rows of Π(T) by Xi = (xi1, xi2, ..., xin, ...)
and the columns by X*j = (x1j, x2j, ..., xnj, ...). Every row Xn and every column X*n of the matrix Π(T) can be
considered as a binary representation of the real numbers αn = 0.xn1 xn2 ... xnn ... and α*n = 0.x1n x2n ... xnn ... in
[0, 1], respectively. These numbers form the sets M and M*, respectively.</p>
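      <p>A finite fragment of this construction can be sketched as follows. The simulation below assumes an order p = 1/2 and a small matrix size; it builds a 0/1 Bernoulli matrix and reads its rows and columns as binary expansions of numbers in [0, 1].</p>
      <preformat><![CDATA[
```python
import random

random.seed(42)
p, n = 0.5, 20   # order of the Bernoullian sequences; size of the fragment

# Finite n x n fragment of the characteristic matrix: entry x_ij indicates
# whether the outcome A occurred in the j-th trial of the i-th series.
X = [[int(random.random() < p) for _ in range(n)] for _ in range(n)]

def to_real(bits):
    # Read a 0/1 sequence as the binary expansion 0.b1 b2 ... bn in [0, 1].
    return sum(b / 2 ** (k + 1) for k, b in enumerate(bits))

M = sorted(to_real(row) for row in X)             # numbers from the rows
M_star = sorted(to_real(col) for col in zip(*X))  # numbers from the columns
print(M[:3], M_star[:3])
```
]]></preformat>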
      <p>Definition 4 (Petunin–Klyushin). A trial T is said to be random if 1) every row Xn and column
X*n (n = 1, 2, ...) of Π(T) is a Bernoullian sequence of the same order p ∈ (0, 1); 2) M and M* are
dense in [0, 1]. A random experiment E is an infinite series of trials T. A random event RE is an
outcome of T occurring in a random experiment E. The probability pE(A) of RE is the order
p ∈ (0, 1) of the Bernoullian sequences of the outcomes generated in E.</p>
      <p>In practice, we work with finite matrices. Thus, we propose the following useful clarification: T
is said to be a random trial if 1) every row Xi and column X*i (i = 1, 2, ..., n) of the finite matrix Πn(T)
is a segment of a Bernoullian sequence of the same order p ∈ (0, 1); and 2) for an arbitrary ε > 0
there exists an n such that the sets Mn and M*n generated by the rows and columns of the finite
characteristic matrix Πn(T) = ||xij||, i, j = 1, ..., n, form an ε-net in [0, 1].</p>
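      <p>The ε-net condition is straightforward to test on such a finite fragment. The sketch below uses a hypothetical helper that approximates the segment [0, 1] by a fine grid; it is an illustration of the criterion, not a formal verification.</p>
      <preformat><![CDATA[
```python
import random

def eps_net(points, eps):
    # `points` forms an eps-net in [0, 1] if every point of the segment lies
    # within eps of some member; approximate the segment by a fine grid.
    grid = [i / 1000 for i in range(1001)]
    return all(min(abs(g - q) for q in points) <= eps for g in grid)

random.seed(7)
n = 12
X = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
reals = [sum(b / 2 ** (k + 1) for k, b in enumerate(row)) for row in X]
print(eps_net(reals, 0.25))
```
]]></preformat>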
      <p>Theorem 1. The probability that the sets M and M* generated by the rows and columns of the
characteristic matrix Π(T) in the Bernoullian experiment are dense in [0, 1] is equal to 1.</p>
      <p>
        Proof. Consider the case of M. Take a binary representation β = 0.i1 i2 ... in ... of an arbitrary real
number from [0, 1]. Choose an arbitrary ε &gt; 0 and a natural n0 such that 2^(1−n0) &lt; ε/2. Put
β~ = 0.0...0 i(n0+1) i(n0+2) ... and β^ = β − β~ = 0.i1 i2 ... i(n0) 0...0... . Let A be the random event that after n0
independent repetitions of the random trial T we obtain the set i1, i2, ..., i(n0). Suppose that in this set
ones occur k times, and zeros occur n0 − k times. Denote by TA a random trial with outcomes A and
Ā. Then p(T1, T2, ..., T(n0), A) = p^k (1 − p)^(n0 − k) &gt; 0 and p(T1, T2, ..., T(n0), Ā) = 1 − p(T1, T2, ..., T(n0), A) = γ &lt; 1.
Let us prove that the probability of at least one occurrence of the event A in a sequence of independent
Bernoulli trials equals 1. Indeed, the probability that A never occurs is less than γ^n for any n; thus, it
equals zero. Therefore, the probability that there exists a row of Π(T) whose first n0 elements are
i1, i2, ..., i(n0) is equal to 1. This row is a binary representation of a number α in M such that |α − β| &lt; ε/2.
Let (a, b) ⊂ [0, 1] be an arbitrary interval. Put β = (a + b)/2 and ε = b − a. Then the probability that
there exists some number α ∈ M such that |α − β| &lt; ε/2 equals 1; thus, α ∈ [a, b]. Therefore, in the
Bernoulli model E the number set M represented by the rows of Π(T) is dense in [0, 1] with probability 1.
The proof of the theorem for M* is similar.
      </p>
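      <p>The key step of the proof, that some row eventually begins with any prescribed digits i1, ..., i(n0), can be checked by simulation. The sketch below assumes p = 1/2 and an arbitrary prefix; the quantity (1 − p_hit)^rows is the probability that no row matches, which tends to zero as the number of rows grows.</p>
      <preformat><![CDATA[
```python
import random

random.seed(0)
prefix = [1, 0, 1, 1, 0]      # the prescribed digits i1 ... i_n0, here n0 = 5
p_hit = 2 ** -len(prefix)     # probability that a single row starts with them

def no_row_matches(rows):
    # One experiment: generate `rows` rows of fair bits and report whether
    # none of them begins with the prescribed prefix.
    return all(
        [random.randint(0, 1) for _ in range(len(prefix))] != prefix
        for _ in range(rows)
    )

# Empirical miss rate over 200 experiments vs. the exact (1 - p_hit)**rows.
for rows in (10, 100, 1000):
    miss = sum(no_row_matches(rows) for _ in range(200)) / 200
    print(rows, round(miss, 2), round((1 - p_hit) ** rows, 3))
```
]]></preformat>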
    </sec>
    <sec id="sec-7">
      <title>5.2. Field of events</title>
      <p>Let T be a random trial, and let S(T) be the set of all outcomes of T. Define addition and
multiplication on members of S(T), and the negation of an event. Then we shall be able to determine
the probability pE(A) of an event A in S(T), transforming it into a field of events S(E). Introduce a
partial order in S(E) generated by the random experiment E: an event A implies an event B, i.e.
A ⊂ B, if the occurrence of A in E implies the occurrence of B. Therefore, S(E) becomes a
partially ordered set where A + B = sup(A, B) = A ∪ B and AB = inf(A, B) = A ∩ B. Addition and
multiplication can be performed using the relation of partial order: Σ(i ∈ J) Ai = sup{Ai} and
Π(i ∈ J) Ai = inf{Ai}. Since the sum and product of events always exist in S(E), the field of events
S(E) is a complete distributive lattice.</p>
      <p>Since the partial order is defined only for elements of S(E), we may apply addition,
multiplication and negation only to events of the same field. The complement of an element A in a
lattice with zero is an element Ā ∈ S such that A ∧ Ā = O and A ∨ Ā = I, and the lattice S(E) is
called a lattice with complement if every element has a complement. In the field of events, the zero
element O is the impossible event, the unit element I is the certain event, and the complement of an
element A is its negation Ā. Thus, the field of events S(E), being a complemented distributive lattice,
is a Boolean algebra.</p>
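      <p>For a finite outcome set these lattice operations can be made concrete. The minimal sketch below assumes a three-outcome trial and represents events as subsets, with sup as union, inf as intersection, and set difference as complement.</p>
      <preformat><![CDATA[
```python
from itertools import chain, combinations

# The field of events as the Boolean algebra of all subsets of S, ordered
# by inclusion, with zero = {} and unit = S.
S = frozenset({"a", "b", "c"})
events = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

A, B = frozenset({"a"}), frozenset({"a", "b"})
assert A <= B                      # A implies B: the partial order is inclusion
assert A | B == B and A & B == A   # sup and inf agree with union/intersection
complement = S - A                 # the negation of the event A
assert A | complement == S and A & complement == frozenset()
print(len(events))                 # 2**3 = 8 events in the field
```
]]></preformat>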
      <p>The complete lattice S(E) is called totally distributive [39] if it satisfies the duality laws: for an
arbitrary non-empty family of index sets J(C),
∧(C) ∨(λ ∈ J(C)) X(C, λ) = ∨(φ ∈ Φ) ∧(C) X(C, φ(C)),
∨(C) ∧(λ ∈ J(C)) X(C, λ) = ∧(φ ∈ Φ) ∨(C) X(C, φ(C)),
where Φ is the set of all functions φ defined on the family C such that φ(C) ∈ J(C), and
X(C, λ), X(C, φ(C)) ∈ S(E).</p>
      <p>Theorem 2 [39]. For an arbitrary random experiment E the field of events S(E) is a totally
distributive complete Boolean algebra.</p>
      <p>Theorem 3 (Tarski [39]). If a complete Boolean algebra S is totally distributive, then it is
isomorphic to the algebra 2^M consisting of all subsets of some set M with respect to the structures of
partially ordered spaces (or Boolean algebras).</p>
    </sec>
    <sec id="sec-8">
      <title>5.3. Random variables</title>
      <p>Let us introduce the following useful definitions.</p>
      <p>Definition 5. A set of events B = {Bi, i ∈ J} from the field of events S(E) is called a base if the
following conditions hold:
1) all events Bi from B are mutually exclusive: Bi Bj = O if i ≠ j;
2) an arbitrary event A from S(E) can be represented as a sum of events Bik from B:
A = Σ(k ∈ K) Bik.</p>
      <p>A probability distribution PE(A) in the field of events S(E) depends on a random experiment E
and a random event A. Further, we shall suppose that E is fixed; thus, PE(A) depends only on
A ∈ S(E).</p>
      <p>By definition, P(A) = lim(n→∞) hn(A), where hn(A) is the relative frequency of the event A. The
probability P(A) is a finitely additive function defined on S(E), but it is not countably additive,
because the limit of the relative frequency is not interchangeable with summation for a sequence of
mutually exclusive events from S(E).</p>
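      <p>The frequency definition P(A) = lim hn(A) can be illustrated by simulation. The sketch below assumes a die-rolling trial; only finite n is ever observed, as the model emphasizes, but the recorded frequencies settle near P(A) = 1/6.</p>
      <preformat><![CDATA[
```python
import random

random.seed(3)

# Relative frequencies h_n(A) of the event A = "a fair die shows six",
# recorded at several n.
hits = 0
freqs = {}
for n in range(1, 60_001):
    hits += random.randrange(6) == 5
    if n in (60, 600, 60_000):
        freqs[n] = hits / n
print(freqs)
```
]]></preformat>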
      <p>Now, define a random variable x as a random experiment E with a basic numerical set B(E) ⊂ R¹
or B(E) ⊂ C. Individual values of a random variable x in the partially ordered space S(E) play
the role of atoms of the lattice. Often, it is convenient to consider a random variable x as a function
defined on a basic set of S(E) that maps every elementary event Bi ∈ B(E) to a number x = x(Bi).
These definitions of a random variable are equivalent.</p>
      <p>Consider a concept of probability distribution on a field of events generated by values of a random
variable. At first, consider random variables taking on values in the set of rational numbers Q . Then,
we shall extend this concept to random variables with real values.</p>
      <p>Denote by BE(x) the set of all possible rational values of the random variable x in a random
experiment E. Suppose BE(x) = Q. Then S(E) = 2^Q.</p>
      <p>Definition 6. A random variable x taking rational values is said to be continuous if its distribution
function Fx(u) is continuous on R¹. The respective distribution of probabilities is said to be continuous.</p>
      <p>Definition 7. A random variable x with rational values is said to be singular if there exists a
subset {a1, a2, ..., an, ...} ⊂ Q such that p(E, an) = pn &gt; 0 for all n ∈ N and Σ(n = 1..∞) pn = 1. The
respective distribution of probabilities is said to be singular.</p>
      <p>Theorem 4 [38]. Let F(u) be an arbitrary continuous distribution function on R¹. Then there exists
a random experiment E with BE = Q and a distribution of probabilities p(E, A), A ⊆ Q, such that for
every u ∈ R¹, p(E, Q(−∞, u)) = F(u), where Q(−∞, u) = Q ∩ (−∞, u).</p>
      <p>Theorem 5 [38]. Let F(u) be an arbitrary distribution function concentrated on the segment
[a, b]: F(u) = 0 if u &lt; a, and F(u) = 1 if u ≥ b.
Then there exists a random experiment E with numerical base set BE = [a, b] generating a
distribution of probabilities p(E, A) on all subsets A ⊆ [a, b] such that p(E, [a, u]) = F(u) if
u ∈ [a, b].</p>
      <p>Theorem 5 has a remarkable consequence: if the distribution function F(u) is continuous, then
the distribution of probabilities generated by F(u) is not a measure. If we suppose the opposite, then
we have a measure defined on all subsets of the segment [a, b] which is equal to zero at every
one-point set. This contradicts the classic theorem of Ulam [40], according to which the measure
mentioned above is equal to zero everywhere.</p>
      <p>The concept of independent events in the new theory is introduced in the following way. Let
p(E, A | B) be the conditional probability of the event A in the experiment E generated by the
series of the trial T. Then, the event A does not depend on B if p(E, A | B) = p(E, A | B̄).</p>
      <p>Theorem 6 [38]. Let E be a random experiment, and let A and B be random events that can occur in
the experiment E. The random event A does not depend on B if and only if p(E, A | B) = p(E, A).</p>
    </sec>
    <sec id="sec-9">
      <title>5.4. Operations on random variables</title>
      <p>Let us define multiplication by a constant, addition, subtraction, multiplication and division of two
random variables, considering a random variable as a function x(B), B ∈ BT, defined on the set of
elementary outcomes BT of some random trial T. For example, suppose that the sets of elementary
outcomes of the random trials T1 and T2 belong to the disjoint segments [a, b] and [c, d]. Let the random
variables x and y take on their values as results of the random trials T1 and T2. Thus, we can
interpret x as a function x(B) defined on [a, b] and y as a function y(B) defined on [c, d]. Since
the segments are disjoint, the sets of elementary outcomes of these trials are also disjoint, and the sum
x + y is not valid.</p>
      <p>Let us introduce a useful concept that we shall need in what follows.</p>
      <p>Definition 8. The random experiments E1 and E2 are said to be commutative if
p(E1, E2, A1, A2) = p(E2, E1, A2, A1). Let Tc = (T1, T2) be a composite trial; then BTc = BT1 × BT2,
where × denotes the Cartesian product of the sets BT1 and BT2. The result of the random trial Tc is
the random event Bc = (B1, B2), where B1 ∈ BE1 and B2 ∈ BE2, so that x takes on the value x(B1) and y
takes on the value y(B2). We would introduce the arithmetic operations as (x + y)(Bc) = x(B1) + y(B2),
(x − y)(Bc) = x(B1) − y(B2), (xy)(Bc) = x(B1) y(B2), (x/y)(Bc) = x(B1)/y(B2), but they are valid only if the
random experiments E1 and E2 are commutative. Let us consider the concept of isomorphic
experiments and isomorphic random events. A function φ: X → Y defined on an ordered set X and
taking on values in an ordered set Y is said to be an isotonic function if x ≤ y implies φ(x) ≤ φ(y).</p>
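      <p>For finite trials the operation (x + y)(Bc) = x(B1) + y(B2) can be sketched directly. The toy outcome sets and probabilities below are assumed for illustration only.</p>
      <preformat><![CDATA[
```python
from itertools import product
from fractions import Fraction

# Outcomes of the composite trial Tc = (T1, T2) are pairs (B1, B2), and the
# sum of the random variables acts as (x + y)(Bc) = x(B1) + y(B2).
x = {"b1": 1, "b2": 2}                  # values of x on outcomes of T1
y = {"c1": 10, "c2": 20}                # values of y on outcomes of T2
px = {"b1": Fraction(1, 2), "b2": Fraction(1, 2)}
py = {"c1": Fraction(1, 4), "c2": Fraction(3, 4)}

dist_sum = {}
for b, c in product(x, y):              # Cartesian product B_T1 x B_T2
    v = x[b] + y[c]                     # value of x + y at Bc = (b, c)
    dist_sum[v] = dist_sum.get(v, Fraction(0)) + px[b] * py[c]
print(dict(sorted(dist_sum.items())))
```
]]></preformat>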
      <p>An isotonic function that is invertible, and whose inverse is also isotonic, is said to be an
isomorphism. Thus, an isomorphism between ordered sets is a one-to-one mapping that satisfies these
conditions. It is this inverse isotonic property of the mapping φ that makes it a structural isomorphism
of the ordered sets X and Y.</p>
      <p>Definition 9. Let E1 and E2 be two random experiments, and let S(E1) and S(E2) be the fields of
random events generated by E1 and E2, respectively. We shall call the fields of events S(E1) and
S(E2) isomorphic if between their elements there exists a one-to-one mapping φ which is a
structural isomorphism of the Boolean algebras S(E1) and S(E2) and preserves the probability of
random events: p(E1, A) = p(E2, φ(A)). In this case the experiments E1 and E2 are said to be isomorphic.</p>
      <p>That is, two fields of events S(E1) and S(E2) are isomorphic if there exists a one-to-one mapping
φ: S(E1) → S(E2) which is isotonic with respect to the ordering of the corresponding events of S(E1) and
S(E2) in the Boolean algebras, has the inverse isotonic property, and satisfies
p(E1, A) = p(E2, φ(A)) for every A ∈ S(E1). This mapping φ is said to be a probabilistic isomorphism.</p>
      <p>Theorem 7 [38]. The fields of events S(Ec) and S(E′c) generated by the composite experiments
Ec = (E1, E2) and E′c = (E2, E1) are isomorphic if the random experiments E1 and E2 are
commutative.</p>
      <p>Using the above theorem, we can introduce addition and multiplication of random variables x and
y produced in the random experiments E1 and E2, respectively. The sum x + y and the product xy
take on their values in the composite experiment Ec = (E1, E2), and y + x and yx take on their values
in the composite experiment E′c = (E2, E1). Let us consider the random variable x + y as a result of a
random experiment Ec with numerical base BEc = BE1 + BE2 = {x + y : x ∈ BE1, y ∈ BE2}, where
BEi, i = 1, 2, are the numerical bases of the random experiments Ei.</p>
      <p>The random variable y + x is identified with a random experiment E′c with numerical base
BE′c = BE2 + BE1 = BE1 + BE2 = BEc. The fields of events S(Ec) and S(E′c) are isomorphic when the
random experiments E1 and E2 are commutative.</p>
      <p>Then, the random variables x + y and y + x are isomorphic. If isomorphic objects are identified,
we can write x + y = y + x. A similar result is true for multiplication: xy = yx. Notice that the fields of
events consist of all possible subsets of the identical numerical sets BEc = BE′c, and therefore the
random experiments Ec and E′c generate identical distributions of probabilities on these fields.</p>
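      <p>The identification x + y = y + x can be checked on finite distributions. The sketch below assumes independent toy experiments, for which commutativity holds, and compares the distribution of the sum built in both composite orders.</p>
      <preformat><![CDATA[
```python
from itertools import product

# For commutative (here: independent, finite) toy experiments the composite
# orders (E1, E2) and (E2, E1) give the same distribution of the sum, so the
# random variables x + y and y + x may be identified.
x_vals = {0: 0.3, 1: 0.7}       # assumed toy distribution of x
y_vals = {0: 0.6, 2: 0.4}       # assumed toy distribution of y

def sum_dist(first, second):
    # Distribution of the sum over the Cartesian product of outcomes.
    d = {}
    for (a, pa), (b, pb) in product(first.items(), second.items()):
        d[a + b] = d.get(a + b, 0.0) + pa * pb
    return d

print(sum_dist(x_vals, y_vals) == sum_dist(y_vals, x_vals))
```
]]></preformat>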
    </sec>
    <sec id="sec-10">
      <title>6. Conclusion</title>
      <p>The new frequency-based approach, built on the concept of a characteristic matrix of a random
experiment, eliminates the need to formalize the von Mises rule for admissible selection of collectives. In
the new model, the rows and columns of the characteristic matrix automatically form collectives. Using
the topological properties of the sets of numbers represented by the rows and columns of the
characteristic matrix makes it easy to apply the new randomness criterion in practice. In addition, we
propose a correct way to define arithmetic operations on random variables in the framework of the new model.</p>
    </sec>
    <sec id="sec-11">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Turing</surname>
          </string-name>
          ,
          <article-title>Computing machinery and intelligence</article-title>
          ,
          <source>Mind</source>
          <volume>236</volume>
          (
          <year>1950</year>
          )
          <fpage>433</fpage>
          -
          <lpage>460</lpage>
          . doi:10.1093/mind/LIX.236.433.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>von Neumann</surname>
          </string-name>
          ,
          <article-title>Various techniques used in connection with random digits</article-title>
          , in: A.
          <string-name>
            <surname>Householder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Forsythe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.H.</given-names>
            <surname>Germond</surname>
          </string-name>
          (Eds.), Monte Carlo Method,
          <source>National Bureau of Standards Applied Mathematics Series</source>
          <volume>12</volume>
          (
          <year>1951</year>
          ):
          <fpage>36</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Downey</surname>
          </string-name>
          , Randomness, Computation and Mathematics, in: S.B.
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Dawar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          Löwe (Eds.),
          <source>How the World Computes, CiE</source>
          <year>2012</year>
          , Lecture Notes in Computer Science, SpringerVerlag, Berlin, Heidelberg,
          <year>2012</year>
          , vol.
          <volume>7318</volume>
          . doi: 10.1007/978-3-642-30870-3_17.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.M.B.</given-names>
            <surname>Vitanyi</surname>
          </string-name>
          ,
          <article-title>Randomness</article-title>
          ,
          <year>2001</year>
          , arXiv:math/0110086v2. URL: https://arxiv.org/abs/math/0110086.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.M.B.</given-names>
            <surname>Vitanyi</surname>
          </string-name>
          ,
          <source>An Introduction to Kolmogorov Complexity and its Applications</source>
          , Third Edition, Springer-Verlag, New York,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wald</surname>
          </string-name>
          ,
          <article-title>Die Widerspruchsfreiheit des Kollektivbegriffes der Wahrscheinlichkeitsrechnung</article-title>
          ,
          <source>Ergebnisse eines Mathematischen Kolloquiums</source>
          <volume>8</volume>
          (
          <year>1937</year>
          )
          <fpage>38</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Church</surname>
          </string-name>
          ,
          <article-title>On the concept of a random sequence</article-title>
          ,
          <source>Bull. AMS</source>
          ,
          <volume>46</volume>
          (
          <year>1940</year>
          )
          <fpage>130</fpage>
          -
          <lpage>135</lpage>
          . doi: 10.1090/S0002-9904-1940-07154-X.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ville</surname>
          </string-name>
          ,
          <source>Étude critique de la notion de collectif</source>
          , Gauthier-Villars, Paris,
          <year>1939</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          ,
          <article-title>Randomness and foundations of probability: von Misesʼs axiomatisation of random sequences</article-title>
          , in: T. Ferguson et al. (Eds.),
          <source>Probability, Statistics and Game Theory: Papers in Honour of David Blackwell</source>
          , Institute for Mathematical Statistics Monograph Series,
          <volume>20</volume>
          (
          <year>1996</year>
          )
          <fpage>347</fpage>
          -
          <lpage>367</lpage>
          . doi: 10.1214/lnms/1215453582.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          ,
          <article-title>Von Misesʼ definition of random sequences reconsidered</article-title>
          ,
          <source>J. Symb. Logic</source>
          <volume>52</volume>
          (
          <year>1987</year>
          )
          <fpage>725</fpage>
          -
          <lpage>755</lpage>
          . doi: 10.1017/S0022481200029728.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          ,
          <article-title>The axiomatisation of randomness</article-title>
          ,
          <source>J. Symb. Logic</source>
          <volume>55</volume>
          (
          <year>1990</year>
          )
          <fpage>1143</fpage>
          -
          <lpage>1167</lpage>
          . doi: 10.2307/2274480.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          ,
          <article-title>Independence, randomness and the axiom of choice</article-title>
          ,
          <source>J. Symb. Logic</source>
          <volume>57</volume>
          (
          <year>1992</year>
          )
          <fpage>1274</fpage>
          -
          <lpage>1304</lpage>
          . doi: 10.2307/2275368.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fréchet</surname>
          </string-name>
          ,
          <article-title>Exposé et discussion de quelques recherches récentes sur les fondements du calcul des probabilités</article-title>
          , in: Colloque consacré au calcul des probabilités.
          <source>Proceedings of the conference held at the Université de Genève</source>
          ,
          <year>1937</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.N.</given-names>
            <surname>Kolmogorov</surname>
          </string-name>
          , Grundbegriffe der Wahrscheinlichkeitsrechnung.
          <source>Ergebnisse der Mathematik und ihrer Grenzgebiete</source>
          , Springer-Verlag, Berlin, Heidelberg,
          <year>1933</year>
          . doi: 10.1007/978-3-642-49888-6.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>