=Paper=
{{Paper
|id=Vol-2353/paper70
|storemode=property
|title=Soft Decoding Based on Ordered Subsets of Verification Equations of Turbo-Productive Codes
|pdfUrl=https://ceur-ws.org/Vol-2353/paper69.pdf
|volume=Vol-2353
|authors=Alexandr Kuznetsov,Anastasiia Kiian,Kateryna Kuznetsova,Vlada Hryhorenko,Oleksii Smirnov,Dmytro Prokopovych-Tkachenko
|dblpUrl=https://dblp.org/rec/conf/cmis/KuznetsovKKHSP19
}}
==Soft Decoding Based on Ordered Subsets of Verification Equations of Turbo-Productive Codes==
Alexandr Kuznetsov 1 [0000-0003-2331-6326], Anastasiia Kiian 1 [0000-0003-2110-010X], Kateryna Kuznetsova 1 [0000-0002-5605-9293], Tetiana Ivko 1 [0000-0003-1772-0074], Oleksii Smirnov 2 [0000-0001-9543-874X], Dmytro Prokopovych-Tkachenko 3 [0000-0002-6590-3898]

1 V. N. Karazin Kharkiv National University, Svobody sq., 4, Kharkiv, 61022, Ukraine
kuznetsov@karazin.ua, nastyak931@gmail.com, kate.kuznetsova.2000@gmail.com, t.ivko@outlook.com
2 Central Ukrainian National Technical University, avenue University, 8, Kropivnitskiy, 25006, Ukraine, dr.smirnovoa@gmail.com
3 University of Customs and Finance, st. Volodymyr Vernadsky, 2/4, Dnipro, 49000, Ukraine, omega2@email.dp.ua
Abstract. Methods of soft decoding of cascade code constructions based on product schemes of linear block codes (Turbo Product Codes) are considered. An approach based on the iterative exchange of soft decisions between the block codes constituting a cascade construction is developed. It is shown that sequential execution of the procedures for forming ordered subsets of check equations and for estimating the logarithms of the likelihood ratio allows decoding turbo-product codes according to the criterion of minimizing the erroneous reception of code symbols.
Keywords. Cascade Structures, Turbo Product Codes, Soft Decoding, Verification Equations, Noise Immunity.
1 Introduction
A promising area in the development of noise-resistant coding theory is cascade code structures [1-7, 32-33] and the methods and algorithms for their decoding with an iterative exchange of soft decisions, which make it possible to provide the required noise immunity of discrete message transmission [8-14].
It should be noted that the implementation complexity of decoding methods based on the use of decision functions grows with the code length and correcting capacity [14-17]. Decoding complexity can be reduced by using decision functions defined on a preformed subset of check equations [18-20]. At the same time, this reduction in complexity also leads to a decrease in the energy gain [19, 20].
Thus, a topical research direction is the development (improvement) of decoding methods with soft decisions based on decision functions which, without significantly reducing the energy gain from coding, would substantially reduce the complexity of practical implementation. A promising direction in this sense is the formation of ordered subsets of check equations and decoding methods based on them.
2 Theoretical substantiation of the proposed decoding method
The theoretical basis of soft decoding methods is a criterion for testing hypotheses, whose mathematical justification rests on the total probability formula and the Bayes theorem [18-20].
Suppose that one can make $M$ mutually exclusive assumptions (hypotheses) $H_1, H_2, \dots, H_M$ about the conditions of the experiment, and an event $A$ can appear only together with one of these hypotheses.
Then the probability of the event is calculated by the formula of total probability:
$$P(A) = P(H_1)P(A \mid H_1) + P(H_2)P(A \mid H_2) + \dots + P(H_M)P(A \mid H_M) = \sum_{i=1}^{M} P(H_i)P(A \mid H_i),$$
where $P(H_i)$ is the probability of the hypothesis $H_i$, and $P(A \mid H_i)$ is the conditional probability of the event $A$ under this hypothesis.
If before the experiment the probabilities of the hypotheses were $P(H_i)$, $i = 1, 2, \dots, M$, and as a result of the experiment an event $A$ occurred, then the a posteriori (experimental, conditioned on the occurrence of the event $A$) probabilities of the hypotheses are calculated using the Bayes formula:
$$P(H_i \mid A) = \frac{P(H_i)P(A \mid H_i)}{\sum_{i=1}^{M} P(H_i)P(A \mid H_i)}, \quad i = 1, 2, \dots, M.$$
The Bayes formula makes it possible to calculate the conditional probabilities of subsequent events taking into account the posterior probabilities of the hypotheses $P(H_i \mid A)$, $i = 1, 2, \dots, M$. Thus, if after the first experiment, in which an event $A$ occurred, a second experiment is performed in which an event $B$ may occur, the conditional probability $P(B \mid A)$ is calculated by the formula of total probability into which not the a priori probabilities $P(H_i)$ are substituted, but the a posteriori probabilities $P(H_i \mid A)$ calculated after the occurrence of the event $A$, i.e. we obtain:
$$P(B \mid A) = \sum_{i=1}^{M} P(H_i \mid A) P(B \mid H_i A),$$
where $H_i A$ denotes the joint occurrence of the hypothesis $H_i$ and the event $A$, and $P(B \mid H_i A)$ is the conditional probability of the event $B$ given the hypothesis $H_i$ and the event $A$.
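As a numerical illustration of the chain "total probability, Bayes update, posterior-weighted prediction", the following Python sketch computes $P(A)$, the posteriors $P(H_i \mid A)$, and $P(B \mid A)$; the three-hypothesis numbers are illustrative and not taken from the paper:

```python
# Illustrative three-hypothesis example (the numbers are assumptions, not from the paper).
priors = [0.5, 0.3, 0.2]          # P(H_i)
p_A_given_H = [0.9, 0.5, 0.1]     # P(A | H_i)
p_B_given_H = [0.8, 0.4, 0.2]     # P(B | H_i), assuming B depends only on H_i

# Total probability formula: P(A) = sum_i P(H_i) P(A|H_i)
p_A = sum(p * q for p, q in zip(priors, p_A_given_H))

# Bayes formula: posterior P(H_i | A)
posteriors = [p * q / p_A for p, q in zip(priors, p_A_given_H)]

# P(B | A): total probability again, with posteriors substituted for priors
p_B_given_A = sum(p * q for p, q in zip(posteriors, p_B_given_H))
```

The posteriors sum to one by construction, since $P(A)$ is exactly the normalizing denominator of the Bayes formula.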
Suppose now that the demodulator, based on observation of the received mixture of signal and noise, estimates which of the possible signals $S_i \in \{S_1, S_2, \dots, S_M\}$ (from an ensemble of $M$ signals) was transmitted. We make $M$ mutually exclusive assumptions (hypotheses) that the corresponding signal $S_i$, $i = 1, 2, \dots, M$, was transmitted. We calculate the posterior probability of the $i$-th hypothesis given the reception of $S^*$:
$$P(S_i \mid S^*) = \frac{P(S_i)P(S^* \mid S_i)}{\sum_{i=1}^{M} P(S_i)P(S^* \mid S_i)}, \quad i = 1, 2, \dots, M, \qquad (1)$$
where $P(S_i)$ is the a priori probability of formation of the signal $S_i$ by the transmitter, and $P(S^* \mid S_i)$ is the conditional probability of receiving $S^*$ under the condition that the signal $S_i$ was formed by the transmitter.
$S^*$ is usually represented as a continuous random variable underlying the hypothesis testing criteria. Consider the probability distribution function $P(S^*)$:
$$P(S^*) = \sum_{i=1}^{M} P(S_i)P(S^* \mid S_i).$$
$P(S^*)$ is the probability distribution function of the mixture of signal and interference $S^*$, which gives the test statistics in the full signal space $\{S_1, S_2, \dots, S_M\}$. In equation (1) the value $P(S^*)$ is a scaling factor, since it is obtained by averaging over the entire space of signals.
Consider the case of two signals. Let the binary logical elements 1 and 0 be represented by the signals $S_1 = +1$ and $S_2 = -1$. A hard decision rule, called the maximum likelihood rule, chooses one of the hypotheses (corresponding to the transmission of the signals $S_1$ and $S_2$, respectively) by comparing the values of the probabilities $P(S^* = x \mid S_1)$ and $P(S^* = x \mid S_2)$ and selecting the larger one. For each transmitted data bit, it is decided that the signal $S_1$ was transmitted if $S^* = x$ falls on the right side of the decision line, or that the signal $S_2$ was transmitted otherwise.
A similar decision rule, known as the maximum a posteriori probability (MAP) rule, can be represented as a minimum error probability rule that takes into account the prior probabilities of the data. In general, the MAP rule is expressed as follows:
$$S = \begin{cases} S_1, & \text{if } P(S^* = x \mid S_1) \ge P(S^* = x \mid S_2), \\ S_2, & \text{if } P(S^* = x \mid S_1) < P(S^* = x \mid S_2), \end{cases} \qquad (2)$$
where $S$ is the value of the signal corresponding to the decision.
Thus, expression (2) establishes the rule for choosing one of the hypotheses corresponding to the signals $S_1$ and $S_2$. Using expression (1), we obtain the equivalent expression:
$$S = \begin{cases} S_1, & \text{if } P(S_1)P(S^* \mid S_1) \ge P(S_2)P(S^* \mid S_2), \\ S_2, & \text{if } P(S_1)P(S^* \mid S_1) < P(S_2)P(S^* \mid S_2), \end{cases}$$
where the probability
$$P(S^*) = \sum_{i=1}^{M} P(S_i)P(S^* \mid S_i)$$
in both parts of the inequality has been cancelled.
Using (2), we introduce a function $F$ as the ratio of the likelihood functions $P(S^* = x \mid S_1)$ and $P(S^* = x \mid S_2)$ weighted by the priors:
$$F = \frac{P(S_1)P(S^* \mid S_1)}{P(S_2)P(S^* \mid S_2)}, \qquad (3)$$
then the rule for choosing one of the hypotheses is written as
$$S = \begin{cases} S_1, & \text{if } F \ge 1, \\ S_2, & \text{if } F < 1. \end{cases} \qquad (4)$$
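Rules (3)-(4) can be sketched in Python for the two-signal Gaussian channel described above; `gaussian_pdf` and `map_decide` are hypothetical helper names, and the unit-variance, unit-amplitude channel parameters are illustrative assumptions:

```python
import math

def gaussian_pdf(x, mean, sigma):
    """Gaussian likelihood P(S* = x | signal with the given mean)."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def map_decide(x, p_s1=0.5, p_s2=0.5, sigma=1.0):
    """MAP rule (4): choose S1 (= +1) if F >= 1, otherwise S2 (= -1)."""
    F = (p_s1 * gaussian_pdf(x, +1.0, sigma)) / (p_s2 * gaussian_pdf(x, -1.0, sigma))
    return +1 if F >= 1.0 else -1
```

With equal priors the rule reduces to comparing $x$ against zero; a skewed prior shifts the decision threshold, which is exactly the difference between maximum likelihood and MAP.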
Taking the logarithm of expression (3), we get:
$$\ln F = \ln \frac{P(S_1)}{P(S_2)} + \ln \frac{P(S^* \mid S_1)}{P(S^* \mid S_2)}.$$
Thus, the logarithm of the likelihood ratio $\ln F$ is a real-valued representation of the soft decision at the decoder input, where the first term on the right side of the equality is the logarithm of the ratio of the a priori probabilities $P(S_1)$ and $P(S_2)$:
$$L_S(S_1, S_2) = \ln \frac{P(S_1)}{P(S_2)},$$
and the second term is the logarithm of the ratio of the conditional probabilities $P(S^* \mid S_1)$ and $P(S^* \mid S_2)$ obtained as a result of channel measurements in the receiver:
$$L_{DS}(S_1, S_2) = \ln \frac{P(S^* \mid S_1)}{P(S^* \mid S_2)}.$$
So, the logarithm of the likelihood ratio $L_{FS} = \ln F$ is rewritten as
$$L_{FS}(S_1, S_2) = L_S(S_1, S_2) + L_{DS}(S_1, S_2). \qquad (5)$$
It should be noted that for AWGN channels the logarithm of the likelihood ratio obtained as the result of channel measurements of the received mixture of signal and noise in the receiver takes the form:
$$L_{DS}(S_1, S_2) = \ln \frac{P(S^* \mid S_1)}{P(S^* \mid S_2)} = \ln \frac{\frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(S^* - 1)^2}{2\sigma^2}\right)}{\frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(S^* + 1)^2}{2\sigma^2}\right)} = -\frac{(S^* - 1)^2}{2\sigma^2} + \frac{(S^* + 1)^2}{2\sigma^2} = \frac{2}{\sigma^2} S^*.$$
Considering the relation
$$\frac{1}{\sigma^2} = \frac{2 E_b}{N_0},$$
where $E_b / N_0$ is the ratio of the energy of a binary signal $E_b$ to the noise power spectral density $N_0$, we obtain:
$$L_{DS}(S_1, S_2) = 4 \frac{E_b}{N_0} S^*,$$
i.e. the value of the logarithm of the ratio of the probabilities $P(S^* \mid S_1)$ and $P(S^* \mid S_2)$, obtained as a result of channel measurements at the receiver, depends exclusively on the signal-to-noise ratio and the value of the received mixture of signal and noise $S^*$.
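The closed form $L_{DS} = 4 (E_b / N_0) S^*$ can be checked numerically against the direct log-ratio of the two Gaussian likelihoods; the sketch below assumes unit-amplitude signals $\pm 1$ and $\sigma^2 = N_0 / (2 E_b)$ as in the derivation above (function names are illustrative):

```python
def llr_awgn_closed_form(s_star, eb_n0):
    """Closed-form channel LLR: L_DS = 4 * (Eb/N0) * S*."""
    return 4.0 * eb_n0 * s_star

def llr_awgn_direct(s_star, eb_n0):
    """Direct log-ratio of the Gaussian likelihoods, with 1/sigma^2 = 2*Eb/N0.

    The common factor 1/(sigma*sqrt(2*pi)) cancels in the ratio, leaving
    only the difference of the exponents.
    """
    sigma2 = 1.0 / (2.0 * eb_n0)
    return (-(s_star - 1.0) ** 2 + (s_star + 1.0) ** 2) / (2.0 * sigma2)
```

Both forms agree for any received value, confirming that the channel soft value is just the received sample scaled by the signal-to-noise ratio.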
In [20] it was shown that for systematic codes the soft decision at the decoder output (on a logarithmic scale) about a received symbol is written in the form
$$L_{FDK}(S_1, S_2, C_1, C_2) = L_{FS}(S_1, S_2) + L_{DK}(C_1, C_2), \qquad (6)$$
where $L_{DK}(C_1, C_2)$ is the logarithm of the likelihood ratio of the received symbol, obtained as a result of decoding.
Substituting (5) into (6), we get:
$$L_{FDK}(S_1, S_2, C_1, C_2) = L_S(S_1, S_2) + L_{DS}(S_1, S_2) + L_{DK}(C_1, C_2), \qquad (7)$$
i.e. the soft decision at the decoder output depends on three values: $L_S(S_1, S_2)$, the logarithm of the ratio of the prior probabilities of the signals $S_1$ and $S_2$; $L_{DS}(S_1, S_2)$, the logarithm of the ratio of the conditional probabilities of the signals $S_1$ and $S_2$ (the result of channel measurements); and $L_{DK}(C_1, C_2)$, the logarithm of the ratio of the likelihood functions of the binary code symbols $C_1$ and $C_2$ as the result of decoding.
To obtain $L_{FDK}(S_1, S_2, C_1, C_2)$, one needs to sum the individual contributions, since all three components are statistically independent [20]. The soft decoder output $L_{FDK}(S_1, S_2, C_1, C_2)$ is a real number providing both the hard decision itself and its reliability. The sign of $L_{FDK}(S_1, S_2, C_1, C_2)$ sets the hard decision, i.e.:
$$c_i = \begin{cases} C_1 = 1, & \text{if } L_{FDK}(S_1, S_2, C_1, C_2) \ge 0, \\ C_2 = 0, & \text{if } L_{FDK}(S_1, S_2, C_1, C_2) < 0, \end{cases} \qquad (8)$$
where $c_i$ is the value of the $i$-th bit corresponding to the decision taken.
The absolute value of $L_{FDK}(S_1, S_2, C_1, C_2)$ determines the reliability of the decision. As a rule, the value $L_{DK}(C_1, C_2)$ has the same sign as $L_{FDK}(S_1, S_2, C_1, C_2)$, thus increasing the reliability of the decision.
For statistically independent values $x$ and $y$, the sum of two logarithmic likelihood ratios $L(x)$ and $L(y)$ is determined by the following expression:
$$L(x) \boxplus L(y) = L(x \oplus y) = \ln \frac{e^{L(x)} + e^{L(y)}}{1 + e^{L(x) + L(y)}} \approx (-1) \cdot \mathrm{sgn}(L(x)) \cdot \mathrm{sgn}(L(y)) \cdot \min(|L(x)|, |L(y)|), \qquad (9)$$
where the function $\mathrm{sgn}(z)$ returns the sign of its argument $z$, and the sign $\oplus$ is used to denote the modulo-2 sum of data represented by binary digits. The sign $\boxplus$ is used to denote the sum of the logarithms of the likelihood functions, which is defined as the logarithm of the likelihood function of the modulo-2 sum of the corresponding arguments.
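Rule (9) and its min-sum approximation can be sketched directly; `boxplus_exact` and `boxplus_minsum` are hypothetical names for the exact and approximate forms:

```python
import math

def boxplus_exact(lx, ly):
    """Exact rule (9): L(x (+) y) = ln((e^Lx + e^Ly) / (1 + e^(Lx+Ly)))."""
    return math.log((math.exp(lx) + math.exp(ly)) / (1.0 + math.exp(lx + ly)))

def boxplus_minsum(lx, ly):
    """Approximation in (9): (-1) * sgn(Lx) * sgn(Ly) * min(|Lx|, |Ly|)."""
    sgn = (1 if lx >= 0 else -1) * (1 if ly >= 0 else -1)
    return -sgn * min(abs(lx), abs(ly))
```

The approximation always agrees with the exact value in sign, and its magnitude upper-bounds the exact magnitude, which is why the min-sum form is the usual low-complexity substitute.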
An implementation of the turbo decoding procedure involves the use of decoding methods with a soft decision at the input and a soft decision at the output. During the first iteration of such a decoder the data are considered equiprobable, which gives the initial a priori value $L_S(S_1, S_2) = 0$ in equation (7). The channel measurement gives the value $L_{DS}(S_1, S_2)$, obtained by taking the logarithm of the ratio of the values $P(S^* = x \mid S_1)$ and $P(S^* = x \mid S_2)$, which is the second term of equation (7). The decoder output $L_{DK}(C_1, C_2)$ is the information derived from the decoding process. For iterative decoding, the extrinsic likelihood is fed back to the input (of another component decoder) to update the a priori probability for the next iteration, i.e.:
$$L_S(S_1, S_2) \leftarrow L_{DK}(C_1, C_2).$$
Thus, the decision in the final decoding of each symbol of the code sequence and the information about its reliability depend on the value $L_{FDK}(S_1, S_2, C_1, C_2)$. Based on equation (7), we write the algorithm that gives an estimate of the soft output of the decoder $L_{DK}(C_1, C_2)$ and the resulting estimate $L_{FDK}(S_1, S_2, C_1, C_2)$.
1. Set $L_S(S_1, S_2) = 0$.
2. Decode the first component code with a soft decision, i.e. find the soft decision $L_{FDK}(S_1, S_2, C_1, C_2)$.
3. Based on equation (7), calculate
$$L_{DK}(C_1, C_2) = L_{FDK}(S_1, S_2, C_1, C_2) - L_S(S_1, S_2) - L_{DS}(S_1, S_2).$$
4. For the next component code, set $L_S(S_1, S_2) = L_{DK}(C_1, C_2)$.
5. Decode the next component code with a soft decision, i.e. find the soft decision $L_{FDK}(S_1, S_2, C_1, C_2)$.
6. Repeat steps 3-5 for all component codes.
7. The result of turbo decoding is a hard decision about a code symbol $c$ by expression (8), based on the soft decision $L_{FDK}(S_1, S_2, C_1, C_2)$ obtained in the last step.
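The steps above can be sketched as the following loop. Here `soft_decode` is a deliberately trivial stand-in for a real soft-input/soft-output component decoder (the paper does not fix one at this point), so the sketch only illustrates the bookkeeping of steps 1-7: extrinsic extraction by (7) and the a priori feedback of step 4.

```python
def soft_decode(l_s, l_ds):
    """Placeholder SISO decoder (an assumption, not the paper's decoder).

    Returns a full per-symbol soft decision L_FDK; here it just nudges each
    LLR toward its current sign, standing in for real component decoding.
    """
    return [a + b + 0.5 * (1 if (a + b) >= 0 else -1) for a, b in zip(l_s, l_ds)]

def turbo_decode(l_ds, n_codes=2, n_iters=4):
    n = len(l_ds)
    l_s = [0.0] * n                      # step 1: L_S = 0 (equiprobable data)
    for _ in range(n_iters):
        for _ in range(n_codes):         # steps 2-6 over the component codes
            l_fdk = soft_decode(l_s, l_ds)
            # step 3, from (7): extrinsic L_DK = L_FDK - L_S - L_DS
            l_dk = [f - s - d for f, s, d in zip(l_fdk, l_s, l_ds)]
            l_s = l_dk                   # step 4: feed back as the new prior
    l_fdk = soft_decode(l_s, l_ds)
    return [1 if v >= 0 else 0 for v in l_fdk]   # step 7: hard decision (8)
```

The essential design point is step 3: only the extrinsic part of the soft output is passed on, so a component decoder never receives its own channel and prior information back as "new" evidence.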
Thus, as the analysis of the above algorithm shows, the main task in implementing turbo decoding is the development of efficient soft decoding procedures for the component codes, i.e. of procedures for calculating the soft decision $L_{DK}(C_1, C_2)$ for the iterative exchange performed in the process of turbo decoding.
To study the procedures for finding the soft decision $L_{DK}(C_1, C_2)$ at the decoder output, we analyze possible ways to calculate the last term on the right side of equality (7): the logarithm of the ratio of the likelihood functions of the binary code symbols $C_1$ and $C_2$ as a result of decoding.
Consider a linear $(n, k, d)$ block code over the finite field $GF(2)$. A linear code, as a subspace $GF^k(2) \subset GF^n(2)$, is defined by the generator matrix $G$, whose rows form a basis of the linear space $GF^k(2)$. By definition, for each linear code there is an orthogonal complement, a subspace $GF^{n-k}(2) \subset GF^n(2)$, all elements of which are orthogonal to the elements of $GF^k(2)$. The basis of the linear space $GF^{n-k}(2)$ is given by the check matrix $H$, and the mutual orthogonality condition implies the equality $G H^T = 0$, where "0" denotes the $k \times r$ matrix of zero elements of $GF(2)$, with $r = n - k$.
We write the last equality in the form $c H^T = 0$, where $c = (c_0, c_1, \dots, c_{n-1})$ is an arbitrary code word of the linear block $(n, k, d)$ code under consideration, $c_i \in \{0, 1\}$.
Taking into account that all elements of $GF^{n-k}(2)$ can be expressed as linear combinations of the rows of the check matrix $H$, we have $c h_i^T = 0$, where $h_i = (h_i^0, h_i^1, \dots, h_i^{n-1})$ is an arbitrary vector obtained as a linear combination of the rows of the matrix $H$, $i = 0, 1, \dots, 2^{n-k} - 1$.
In other words, the last equality holds for all $2^{n-k}$ vectors from $GF^{n-k}(2)$, and we have a system of check equations:
$$\begin{cases} c_0 h_0^0 + c_1 h_0^1 + \dots + c_{n-1} h_0^{n-1} = 0; \\ c_0 h_1^0 + c_1 h_1^1 + \dots + c_{n-1} h_1^{n-1} = 0; \\ \dots \\ c_0 h_{2^{n-k}-1}^0 + c_1 h_{2^{n-k}-1}^1 + \dots + c_{n-1} h_{2^{n-k}-1}^{n-1} = 0. \end{cases} \qquad (10)$$
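System (10) can be generated mechanically from a check matrix by enumerating all $GF(2)$ linear combinations of its rows; the $(7, 4)$ Hamming check matrix below is an illustrative choice, not one taken from the paper:

```python
from itertools import product

# Check matrix of the (7,4) Hamming code (illustrative example)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def all_check_vectors(H):
    """All 2^(n-k) vectors h_i: GF(2) linear combinations of the rows of H."""
    n = len(H[0])
    vectors = []
    for coeffs in product([0, 1], repeat=len(H)):
        v = [0] * n
        for a, row in zip(coeffs, H):
            if a:
                v = [x ^ y for x, y in zip(v, row)]
        vectors.append(v)
    return vectors

def satisfies_all_checks(c, H):
    """System (10): c . h_i^T = 0 over GF(2) for every vector h_i."""
    return all(sum(ci * hi for ci, hi in zip(c, h)) % 2 == 0
               for h in all_check_vectors(H))
```

Since every $h_i$ is a combination of rows of $H$, checking the $n - k$ rows alone is sufficient; the full set of $2^{n-k}$ equations is what the decision function (11)-(12) later sums over.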
Suppose now that the code word $c = (c_0, c_1, \dots, c_{n-1})$ is received by the criterion of the maximum a posteriori probability, i.e. the values of the logarithms of the ratios of the probabilities $P(S^* \mid S_1)$ and $P(S^* \mid S_2)$,
$$L_{DS}(c_j) = L_{DS}(S_1, S_2) = \ln \frac{P(S^* \mid S_1)}{P(S^* \mid S_2)},$$
are known for each code symbol $c_j$, $j = 0, 1, \dots, n-1$, as a result of channel measurements of the corresponding signals in the receiver.
The logarithms of the ratios of the a priori probabilities $P(S_1)$ and $P(S_2)$ corresponding to each of the code symbols $c_j$, $j = 0, 1, \dots, n-1$, we denote by
$$L_S(c_j) = L_S(S_1, S_2) = \ln \frac{P(S_1)}{P(S_2)}.$$
Then, taking into account (7) and rule (9), for the $i$-th check equation we have
$$L_{DK_i}(c_j) = \begin{cases} \displaystyle\boxplus_{l=0,\, l \ne j}^{n-1} \left[ L_S(c_l) + L_{DS}(c_l) \right] h_i^l, & \text{if } h_i^j = 1; \\[6pt] \displaystyle\boxplus_{l=0}^{n-1} \left[ L_S(c_l) + L_{DS}(c_l) \right] h_i^l, & \text{if } h_i^j = 0, \end{cases} \qquad (11)$$
where the summation $\boxplus$ is carried out according to the rule of adding likelihood logarithms, i.e. by expression (9).
If we assume that all the estimates $L_{DK_i}(c_j)$, $j = 0, 1, \dots, n-1$, are statistically independent (for example, if the check equations are mutually orthogonal), then the resulting estimate $L_{DK}(c_j)$ is written as:
$$L_{DK}(c_j) = \sum_{i=0}^{2^{n-k}-1} L_{DK_i}(c_j), \qquad (12)$$
where the summation is performed according to the usual arithmetic rule of addition of real numbers.
The soft output of the decoder $L_{FDK}(c_j) = L_{FDK}(S_1, S_2, C_1, C_2)$ is a real number determined by expression (7):
$$L_{FDK}(c_j) = L_S(c_j) + L_{DS}(c_j) + L_{DK}(c_j) = L_S(c_j) + L_{DS}(c_j) + \sum_{i=0}^{2^{n-k}-1} L_{DK_i}(c_j). \qquad (13)$$
The sign of $L_{FDK}(c_j)$ sets the hard decision according to rule (8):
$$c_j = \begin{cases} C_1 = 1, & \text{if } L_{FDK}(c_j) \ge 0; \\ C_2 = 0, & \text{if } L_{FDK}(c_j) < 0. \end{cases}$$
Expressions (11), (12) and (13) define the decision function based on the use of the logarithms of the likelihood ratios of the received signals (calculated using a priori and a posteriori probabilities), as well as the logarithm of the ratio of the likelihood functions of the binary code symbols as a result of decoding. The sum (12) defines the decision function based only on the use of the decoding result.
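A minimal sketch of the decision function (11)-(13) is given below, using the min-sum form of rule (9). As an illustrative simplification (an assumption, not the paper's full procedure), only the supplied check vectors and the case $h_i^j = 1$ of (11) contribute, rather than all $2^{n-k}$ combinations; `l_in[l]` plays the role of $L_S(c_l) + L_{DS}(c_l)$.

```python
def minsum_box(values):
    """Fold a list of LLRs with the min-sum approximation of rule (9)."""
    acc = values[0]
    for v in values[1:]:
        s = (1 if acc >= 0 else -1) * (1 if v >= 0 else -1)
        acc = -s * min(abs(acc), abs(v))
    return acc

def soft_decision(l_in, checks):
    """(11)-(13) sketch: l_in[l] = L_S(c_l) + L_DS(c_l); checks = vectors h_i."""
    n = len(l_in)
    hard = []
    for j in range(n):
        l_dk = 0.0
        for h in checks:
            if h[j] == 1:                       # case h_i^j = 1 of (11): exclude position j
                terms = [l_in[l] for l in range(n) if l != j and h[l] == 1]
                if terms:
                    l_dk += minsum_box(terms)   # (12): arithmetic sum over the checks
        l_fdk = l_in[j] + l_dk                  # (13): soft output
        hard.append(1 if l_fdk >= 0 else 0)     # (8): hard decision by the sign
    return hard
```

For a single parity check over three symbols, a weakly received symbol is pulled toward the value that restores even parity, which is exactly the extrinsic effect that (11) formalizes.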
Let us analyze expression (12). Expanding the summation according to rule (11), we see that expression (12) contains $2^{n-k}$ terms, each of which is the result of combining $n$ likelihood logarithms of code symbols. In turn, the likelihood logarithms of the code symbols are the sums of the likelihood logarithms of the received signals (calculated using a priori and a posteriori probabilities). Obviously, as the code parameters $(n, k, d)$ grow, the number of terms increases rapidly, and already for $n - k \ge 32$ the considered approach becomes computationally impractical. A promising direction in this sense is the development of a rule for forming ordered subsets of check equations and the theoretical substantiation, on their basis, of decision functions for decoding methods with soft decisions.
3 Conclusions
As a result of the conducted research, the method of soft decoding of cascade code constructions with iterative exchange of soft decisions was improved. It differs from the known methods by an accelerated procedure for selecting check equations with the most reliable symbols, which makes it possible to decode code words by the criterion of minimizing the erroneous reception of code symbols and to speed up the process of turbo decoding of concatenated codes.
The obtained results may be useful in constructing code-based information security schemes [21-26], for example as a real alternative to traditional cryptography for post-quantum applications [27]. In addition, the research results may be useful for optimizing computations in modern telecommunication networks [28-31, 34-35].
References
1. Gomtsyan, H.A.: Computer simulation of cascade codes for CDMA systems. In: Proceed-
ings of the Second International Symposium of Trans Black Sea Region on Applied Elec-
tromagnetism (Cat. No.00TH8519), Xanthi, Greece, 2000, p. 105. (2000)
doi:10.1109/AEM.2000.943262
2. Perez, L.C., Costello, D.J.: Cascaded convolutional codes. In: Proceedings of 1995 IEEE
International Symposium on Information Theory, Whistler, BC, Canada, 1995, p. 160.
(1995) doi:10.1109/ISIT.1995.531509
3. Permuter, H.H., Weissman, T.: Cascade source coding with side information at first two
nodes. In: IEEE Information Theory Workshop on Information Theory (ITW 2010, Cairo),
Cairo, 2010, pp. 1–5. (2010) doi:10.1109/ITWKSPS.2010.5503190
4. Zhang, S., Song, R., Yang, F.: Joint design of QC-LDPC codes for cascade-based multi-
source coded cooperation. In: International Conference on Wireless Communications &
Signal Processing (WCSP), Nanjing, 2015, pp. 1–4. (2015)
doi:10.1109/WCSP.2015.7340967
5. Kuznetsov, A., Serhiienko, R., Prokopovych-Tkachenko, D.: Construction of cascade codes
in the frequency domain. 4th International Scientific-Practical Conference Problems of In-
focommunications. Science and Technology (PIC S&T 2017), Kharkov, 2017, pp. 131–136.
(2017) doi:10.1109/INFOCOMMST.2017.8246366
6. Chen, H., Ling, S., Xing, C.: Quantum codes from concatenated algebraic-geometric codes.
IEEE Transactions on Information Theory, 2005, vol. 51(8), pp. 2915–2920, (2005)
doi:10.1109/TIT.2005.851760
7. Changuel, S., Le Bidan, R., Pyndiah, R.: Iterative Decoding of Block Turbo Codes over the
Binary Erasure Channel. In: IEEE International Conference on Signal Processing and
Communications, Dubai, 2007, pp. 1539–1542. (2007) doi:10.1109/ICSPC.2007.4728625
8. Landolsi, M.A.: A Comparative Performance and Complexity Study of Short-Length LDPC
and Turbo Product Codes. In: 2nd International Conference on Information & Communica-
tion Technologies, Damascus, 2006, pp. 2359–2364. (2006)
doi:10.1109/ICTTA.2006.1684775
9. Nakajima, S., Sato, E.: Trellis-coded 8-PSK scheme combined with turbo and single-parity-
check product codes. In: Proceedings IEEE 56th Vehicular Technology Conference, Van-
couver, BC, Canada, 2002, vol. 3, pp. 1782–1786. (2002)
doi:10.1109/VETECF.2002.1040523
10. Stasev, Yu.V., Kuznetsov, A.A., Nosik, A.M.: Formation of pseudorandom sequences with
improved autocorrelation properties. Cybernetics and Systems Analysis, 2007, vol. 43(1),
pp. 1–11. (2007) doi:10.1007/s10559-007-0021-2
11. Berrou, C., Glavieux, A., Thitimajshima, P.: Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes. In: Proceedings of ICC '93 - IEEE International Conference on Communications, Geneva, Switzerland, 1993, vol. 2, pp. 1064–1070. (1993) doi:10.1109/ICC.1993.397441
12. Stasev, Yu.V., Kuznetsov, A.A.: Asymmetric code-theoretical schemes constructed with the
use of algebraic geometric codes. Kibernetika i Sistemnyi Analiz, 2005, vol. 3, pp. 47–57.
(2005)
13. Wang, F.G., Tang, Y., Yang, F.: The iterative decoding algorithm research of Turbo Prod-
uct Codes. In: The 2010 International Conference on Apperceiving Computing and Intelli-
gence Analysis Proceeding, Chengdu, 2010, pp. 97–100. (2010)
doi:10.1109/ICACIA.2010.5709859
14. MacKay, D.J.C., Neal, R.M.: Near Shannon limit performance of low density parity check
codes. IEEE Electronics Letters, 1996, vol. 33(6), pp. 457–458. (1996)
doi:10.1049/el:19970362
15. Turbo Product Code Encoder / Decoder. www.aha.com
16. IEEE 802.16 Broadband Wireless Access Working Group. Turbo Code Comparison (TCC v
TPC). http://ieee802.org/16
17. Turbo Product Code FEC. Comtech EF Data Corp. www.comtechefdata.com
18. MacWilliams, F., Sloane, N.: The Theory of Error-Correcting Codes. Elsevier (1977)
19. Proakis, J.: Digital communications. McGraw Hill (2001)
20. Sklar, B.: Digital Communications: Fundamentals and Applications. Prentice Hall Commu-
nications Engineering and Emerging Techno. Pearson Education (2016)
21. QC-MDPC KEM: A Key Encapsulation Mechanism Based on the QC-MDPC McEliece Encryption Scheme. NIST Submission. (2017) https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions
22. Wang, Y.: RLCE Key Encapsulation Mechanism (RLCE-KEM) Specification. NIST Submission. (2017) http://quantumca.org
23. Melchor, C.A., Aragon, N., Bettaieb, S., Bidoux, L., Blazy, O., Deneuville, J.-C., Gaborit,
P., Zemor, G.: Rank Quasi-Cyclic (RQC). NIST Submission. (2017) http://pqc-rqc.org
24. Post-Quantum Cryptography, Round 1 Submissions, 2017.
https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions
25. Kuznetsov, A., Kiian, A., Lutsenko, M., Chepurko, I., Kavun, S.: Code-based cryptosys-
tems from NIST PQC. In: 2018 IEEE 9th International Conference on Dependable Systems,
Services and Technologies (DESSERT), Kyiv, Ukraine, 2018, pp. 282–287. (2018)
doi:10.1109/DESSERT.2018.8409145
26. Kuznetsov, A., Pushkar'ov, A., Kiyan, A., Kuznetsova, T.: Code-based electronic digital
signature. In: 2018 IEEE 9th International Conference on Dependable Systems, Services
and Technologies (DESSERT), Kyiv, Ukraine, 2018, pp. 331–336. (2018)
doi:10.1109/DESSERT.2018.8409154
27. Bernstein, D., Buchmann, J., Dahmen, E.: Post-Quantum Cryptography. Springer-Verlag, Berlin-Heidelberg (2009)
28. Rassomakhin, S.G.: Mathematical and physical nature of the channel capacity.
Telecommunications and Radio Engineering, 2017, vol. 76(16), pp. 1423–1451. (2017)
doi:10.1615/TelecomRadEng.v76.i16.40
29. Krasnobayev, V.A., Koshman, S.A.: A Method for Operational Diagnosis of Data Repre-
sented in a Residue Number System. Cybernetics and Systems Analysis, 2018, vol. 54(2),
pp. 336–344. (2018) doi:10.1007/s10559-018-0035-y
30. Gorbenko, I.D., Zamula, A.A., Semenko, A.E., Morozov, V.L.: Method for synthesis of
performed signals systems based on cryptographic discrete sequences of symbols. Tele-
communications and Radio Engineering, 2017, vol. 76(17), pp. 1523–1533. (2017)
doi:10.1615/TelecomRadEng.v76.i17.40
31. Tenth UK Teletraffic Symposium: Performance Engineering in Telecommunications Networks, Martlesham Heath, UK
32. Kavun, S.: Conceptual fundamentals of a theory of mathematical
interpretation. Int. J. Computing Science and Mathematics, 2015, vol. 6(2), pp. 107–121.
(2015) doi:10.1504/IJCSM.2015.069459
33. Kuznetsov, A., Kavun, S., Panchenko, V., Prokopovych-Tkachenko, D., Kurinniy, F.,
Shoiko, V.: Periodic Properties of Cryptographically Strong Pseudorandom Sequences. In:
2018 International Scientific-Practical Conference Problems of Infocommunications. Sci-
ence and Technology (PIC S&T), Kharkiv, Ukraine, 2018, pp. 129–134. (2018)
doi:10.1109/INFOCOMMST.2018.8632021
34. Zamula, A., Kavun, S.: Complex systems modeling with intelligent control elements. Int. J.
Model. Simul. Sci. Comput., 2017, vol. 08(01). (2017) doi:10.1142/S179396231750009X
35. Kavun, S., Zamula, A., Mikheev, I.: Calculation of expense for local computer networks. In:
Scientific-Practical Conference Problems of Infocommunications. Science and Technology
(PIC S&T), 2017 4th International, Kharkiv, Ukraine, 2017, pp. 146–151. (2017)
doi:10.1109/INFOCOMMST.2017.8246369