<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Mathematical Optimization Based Channel Coding: Current Achievements and Future Challenges</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Helmling</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Scholl</string-name>
          <email>scholl@eit.uni-kl.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Akın Tanatmis</string-name>
          <email>tanatmisg@mathematik.uni-kl.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electrical and Computer Engineering</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics University of Kaiserslautern</institution>
          ,
          <addr-line>67653 Kaiserslautern</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Channel coding and mathematical optimization are two omnipresent branches of science which heavily influence our everyday life. Indeed, everyday life is unimaginable without the epochal achievements of these two disciplines, since they affect nearly every communication system as well as every transportation, manufacturing, and organization process. The following report is dedicated to some of the achievements of a research project decisively influenced by a cooperative interplay of these two disciplines.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. CHANNEL CODING</title>
      <p>Channel coding is an important technique for the correction
of transmission errors that is used in nearly every
communication system today. It is used in mobile phones as well
as in satellite communication, navigation systems, storage
devices like hard disks, CDs and also in internet or broadcast
applications. Let us first review what channel coding is and
why we need it. To understand the problems channel coding
solves, we will have a look at the old days. As an example
we take a communication system without channel coding that
becomes more and more obsolete: analog TV.</p>
      <p>Television broadcasting is a form of one-way
communication. On the one hand we have a transmitter, which is
usually placed on top of a mountain and transmits radio waves
with high power to reach an area as big as possible. On the
other hand we have the receivers, ordinary TV sets, in our
households, that pick up the radio waves.</p>
      <p>If the connection between transmitting antenna and
receiving antenna is of good quality, we see a clear picture and
hear a comfortable sound. But what happens if the quality of
the connection is not so good? Let us assume our receiver
is placed far away from the transmitter, behind a mountain,
in the cellar or simply in bad weather, so that the reception
quality drops. Then the picture is distorted or, even worse, the
whole picture bounces up and down. Also the sound might be
clicking and crackling, disturbing the viewers' enjoyment. The
TV signal is noisy or distorted.</p>
      <p>Figure 1 shows such a situation where we have two TV
watchers, A and B. Whereas A is close to the transmitter and
receives a clear picture, B is not in such a good position. He
has worse receiving conditions due to several obstacles.</p>
      <p>The goal of new digital TV broadcasting systems is to make
picture and sound more resistant to distortion and noise.</p>
      <p>Fig. 1. TV transmitter and two receivers</p>
      <p>Fig. 2. TV pictures from A and B</p>
      <p>In contrast to the presented analog system, there are additional
possibilities for techniques to improve the link from the
transmitter to the TV set at home. One of them is channel
coding. Channel coding is a technique to detect and correct
occurring errors. But how is that possible?</p>
      <p>Whenever a signal is transmitted in modern digital
communication systems, the data is transmitted in the form of bits or
bitstreams, respectively. To allow error correction on the receiver
side, we have to perform some preprocessing before transmission,
called encoding. To encode the data, the bitstream is first cut
into small blocks. At the end of every block some extra bits,
called parity bits, are added as shown in Figure 3. The k data
bits together with the r parity bits form a vector of n bits
which is called the codeword c.</p>
      <p>Fig. 3. A bitstream is being encoded</p>
      <p>Fig. 5. Frame error rate versus signal-to-noise ratio (SNR) [dB], comparing uncoded transmission with transmission using a channel code: with channel coding, fewer errors occur and less transmit power is needed</p>
      <p>During encoding the additional parity bits are calculated
from the data bits. The “way” we calculate the parity bits
is called the channel code and determines the performance of the
channel coding system. Since data is processed block by block,
the channel codes described here are called linear block codes.</p>
      <p>A linear block code is defined by a binary r × n matrix H,
so that every bit vector c that fulfills</p>
      <p>H · c = 0</p>
      <p>
        (
        <xref ref-type="bibr" rid="ref1">1</xref>
        )
(where the calculation takes place modulo 2) is a codeword. H
is called the parity-check matrix. By C = {c ∈ {0, 1}^n : H · c ≡
0 (mod 2)} we denote the set of all codewords [1]. Finding a
good channel code with its matrix H is non-trivial and has been
a subject of research for decades.
      </p>
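      <p>As a concrete illustration, the set of codewords defined by a
parity-check matrix can be enumerated directly for a tiny code. The
following sketch uses a made-up toy matrix H, not a code from this
article:

```python
import itertools

# A made-up toy parity-check matrix with r = 3 rows and n = 6 columns.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def is_codeword(c):
    """True if H * c = 0 (mod 2), i.e. every parity check is satisfied."""
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

# The codebook C: all bit vectors of length n that pass every check.
C = [c for c in itertools.product((0, 1), repeat=6) if is_codeword(c)]
print(len(C))   # 8 codewords out of 2^6 = 64 bit vectors
```

With r = 3 independent parity checks on n = 6 bits, only
2^(n−r) = 8 of the 64 possible bit vectors are codewords.</p>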
      <p>After encoding the data it is sent to the receiver. During
transmission some bits may be distorted and change their
values.</p>
      <p>The actual correction of corrupted data takes place in the
receiver by a device called the channel decoder, which tries to
“repair” the incoming blocks. For this purpose the decoder
uses the previously added parity bits. In many cases this
technique works well and many errors can be successfully
corrected.</p>
      <p>Using channel coding we can handle weaker signals
without any loss of quality, or provide much more reliable data
for a given signal strength. This advantage can be translated
into equipment cost reduction, for example smaller antennas,
cheaper electronics or smaller batteries.</p>
      <p>However, there is a probability that the decoder cannot
correct an erroneous block. This remaining error probability
is called the frame error rate (FER) and depends on the
signal-to-noise ratio (SNR), which is the signal power divided
by the noise power at the receiver. The FER is an indicator
for the quality of the channel coding system. It is measured
with Monte Carlo simulation on a computer and visualized
with a special chart, shown in Figure 5. This type of chart
has a typical shape. For weak signals, that is, low SNR,
the FER is very high (close to 1) and drops with increasing
signal strength. In Figure 5 there are two curves: the blue
one represents a system without channel coding, the red one
corresponds to the same system with channel coding. One
can easily see that channel coding provides significantly
fewer errors. The lower a curve lies in the performance chart,
the better the coding system works. In fact there are many
parameters that influence the performance of channel coding,
but the performance chart allows for a good comparison.</p>
      <p>In practical applications different error probabilities are
necessary. For audio signals, as in a mobile phone, a minimal
error probability of 0.01 is required, otherwise crackling or
interruptions make a call uncomfortable. For communication
through optical fiber, error rates of less than 10^-12 may be
necessary for flawless operation.</p>
      <p>We have now seen which processing steps are necessary in
channel coding, namely encoding and decoding. The decoder is
much more complex than the encoder and is the main part of
a channel coding system. So now we will have a closer look
at the decoding process.</p>
      <p>In channel coding the data is processed in blocks of n
bits consisting of k data bits and r = n − k parity bits.</p>
      <p>Fig. 4. Encoding, transmission, and decoding of data</p>
      <p>Because the parity bits are calculated from the data bits, they
are fully dependent on the data bits. Therefore there are only
2^k possible codewords, although there are 2^n possible bit
combinations. When an erroneous codeword is received, the
channel decoder checks which one of the 2^k valid codewords
is most similar to the received block. This process is called
maximum likelihood (ML) decoding and is the optimal way of
decoding [1]. However, because all 2^k codewords need to be
tested, this procedure is applicable only for small values of k.
With increasing k the complexity grows exponentially. Since,
in practice, we deal with pretty large blocks (for example
k = 100 or k = 10000), the ML decoding procedure of
going through all 2^100 or even 2^10000 codewords is simply
not possible to calculate in an imaginable amount of time.
To overcome this problem engineers often use heuristics or
suboptimal algorithms for the decoding procedure. These
algorithms are very often capable of finding the correct solution,
but with much less complexity. In some cases, however, the
suboptimal algorithms decide for a wrong solution. This leads
to a small degradation in performance, which one has to accept
if the problem is to remain feasible.</p>
      <p>Only if the complexity is low enough can the decoding
algorithm be implemented on a chip for use in practical
applications. For many modern codes, algorithms with
a good tradeoff between performance and complexity are
known. Moreover, for many channel codes in use, only the
performance with suboptimal decoding algorithms is known—
not that of optimal ML decoding.</p>
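      <p>The exhaustive search described above can be sketched in a few
lines. The generator matrix, the code parameters, and the single bit
error below are made-up toy choices; the loop over all 2^k data
words is exactly what becomes hopeless for k = 100:

```python
import itertools

# A made-up systematic generator matrix: k = 3 data bits, n = 6 code bits.
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]
k = 3

def encode(data):
    """Codeword c = data * G (mod 2)."""
    return tuple(sum(d * g for d, g in zip(data, col)) % 2
                 for col in zip(*G))

def ml_decode(received):
    """Brute-force ML decoding for a channel that flips bits: try all
    2^k codewords and pick the one with the smallest Hamming distance
    to the received block."""
    return min((encode(d) for d in itertools.product((0, 1), repeat=k)),
               key=lambda c: sum(a != b for a, b in zip(c, received)))

codeword = encode((1, 0, 1))
corrupted = list(codeword)
corrupted[0] ^= 1                  # one bit flipped during transmission
print(ml_decode(tuple(corrupted)) == codeword)   # True: error corrected
```

Each additional data bit doubles the number of codewords to test,
which is why suboptimal algorithms are used in practice.</p>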
      <p>From a mathematician's point of view, there exist other
means of solving the ML decoding problem than comparing
all 2^k codewords to the received one. The decoding can be
formulated as follows.</p>
      <p>Assume that at the receiver a vector c̃ was received. The
decoder checks all possible sent codewords c out of C and decides for
the codeword which was most likely sent. Thus, the ML
codeword ĉ is the codeword which maximizes the so-called
conditional probability p(c̃ received | c sent):
ĉ = arg max_{c ∈ C} p(c̃ received | c sent)</p>
      <p>
        (
        <xref ref-type="bibr" rid="ref2">2</xref>
        )
      </p>
      <p>This problem is an optimization problem. In the following,
we will present how decoding can be done with methods of
integer optimization.</p>
      <p>II. OPTIMIZATION METHODS FOR ML DECODING</p>
      <p>Mathematical optimization is the task of finding an optimal
solution for and the optimal value of an objective function
under the condition that some constraints are satisfied.
Objective function and side constraints typically model some
real-world scenario, where the variables of the function are
the abstraction of parameters like the per-day work time of
different machines or the path on which a truck drives from
location A to location B. Depending on the task, “optimal”
may be either minimal (if the objective function represents
some sort of cost or time) or maximal (e.g., if the function
value stands for the profit gained). The constraints are used to
model limitations of resources, working time, money, etc., or
to avoid solutions that would be mathematically correct, but
do not make sense in the real world—for example, a machine
cannot work −2 hours per day, and one cannot build 1.72
facilities.</p>
      <p>Linear programming is the best-studied discipline of
mathematical optimization and it deals with optimization of a linear
objective function which is constrained by linear (in)equalities.</p>
      <p>The domain of the variables is R^n for some n ∈ N, and a linear
program (LP) is then defined as</p>
      <p>min (or max) w^T x = Σ_{i=1}^{n} w_i x_i
such that A x ≤ b, x ∈ R^n,
where w ∈ R^n is the cost (benefit) vector, A the m × n
constraint matrix, b ∈ R^m the right-hand side vector of
the constraints, and the variables are real-valued. Linear
programming became famous after George Dantzig in 1947
proposed an algorithm to solve LPs such as the one above,
called the Simplex algorithm. Despite the fact that the Simplex algorithm
was later proven to have exponential worst-case complexity,
it is still the algorithm most used in practice. For most
problem instances, the Simplex method is very fast, and
outperforms the polynomial-time algorithms that were developed
in the 1980s. Since its invention, linear programming has
become a standard tool in business for a wide range of
applications such as planning, scheduling, routing or facility
location.</p>
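      <p>For intuition, a tiny two-variable LP can be solved by
enumerating the vertices of its feasible polygon, since a linear
objective always attains its optimum at a vertex; this is the
geometric fact the Simplex method exploits by walking from vertex
to vertex. The instance below is invented for illustration:

```python
from itertools import combinations

# Maximize w^T x = 3*x1 + 2*x2 subject to A x ≤ b and x ≥ 0.
A = [[1, 1],    # x1 + x2 ≤ 4   (e.g. a shared resource limit)
     [1, 0],    # x1 ≤ 3
     [0, 1]]    # x2 ≤ 2
b = [4, 3, 2]
w = [3, 2]

# Append the non-negativity constraints -x1 ≤ 0 and -x2 ≤ 0.
rows = A + [[-1, 0], [0, -1]]
rhs = b + [0, 0]

def intersect(r1, c1, r2, c2):
    """Solve the 2x2 system r1*x = c1, r2*x = c2 (None if singular)."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if det == 0:
        return None
    return ((c1 * r2[1] - r1[1] * c2) / det,
            (r1[0] * c2 - c1 * r2[0]) / det)

def feasible(x):
    return all(c + 1e-9 >= r[0] * x[0] + r[1] * x[1]
               for r, c in zip(rows, rhs))

# Candidate vertices are the feasible intersections of constraint pairs.
vertices = [x for (r1, c1), (r2, c2) in combinations(zip(rows, rhs), 2)
            for x in [intersect(r1, c1, r2, c2)] if x and feasible(x)]

best = max(vertices, key=lambda x: w[0] * x[0] + w[1] * x[1])
print(best)   # the optimal vertex, here (3.0, 1.0)
```

Real solvers handle thousands of variables; this vertex enumeration
only illustrates the geometry behind the Simplex method.</p>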
      <p>Yet, this is not the end of the story—for many applications,
linear programs lack some important capabilities, such as the
restriction to integral or binary decision variables.</p>
      <p>From such requirements, the field of integer linear programming
evolved. An integer program (IP) is defined in the same way
as a linear program, but additionally requires that some (or
all) of the variables take values in Zn instead of Rn. The
increased power obtained by this extension vastly enlarges the
number of practical problems that can be modelled by IPs—
at the cost, however, of efficiency, since it was shown that
the general integer program is NP-hard to solve. Nevertheless,
since this way of modelling is so powerful, a lot of research
has been done to find and improve algorithms that tackle
integer programs and make them solvable at least for moderate
problem sizes or specific well-understood classes of programs.</p>
      <p>How can integer linear programming techniques be used to
decode binary linear codes and to add some surplus to the
state-of-the-art knowledge in channel coding?</p>
      <p>
        Jon Feldman was one of the first researchers who addressed
this question intensively. To this end, he reformulated the ML
problem (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) and interpreted it as an integer programming
problem: Under the assumption of a memoryless channel,
i.e. that the error distributions are independent for subsequent bits,
it can be calculated as
      </p>
      <p>ĉ = arg max_{c ∈ C} ∏_{i=1}^{n} p(c̃_i | c_i) = arg min_{c ∈ C} Σ_{i: c_i = 1} ln( p(c̃_i | 0) / p(c̃_i | 1) ).</p>
    </sec>
    <sec id="sec-2">
      <title>The quotient</title>
      <p>
        y_i = ln( p(c̃_i | 0) / p(c̃_i | 1) )
        (
        <xref ref-type="bibr" rid="ref3">3</xref>
        )
is called the log-likelihood ratio (LLR) of the i-th input bit.
It only depends on the parameters of the channel, which are
assumed to be known, and the signal c̃ that was received.
So, y_i is known to the decoder. From the above calculation it
follows that ĉ = arg min_{c ∈ C} Σ_{i=1}^{n} y_i c_i.
      </p>
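      <p>Given the LLR vector y, ML decoding thus reduces to minimizing
a linear objective over the codebook. The following sketch uses a
made-up toy parity-check matrix and hypothetical LLR values as they
might be delivered by a demodulator:

```python
import itertools

# A made-up toy parity-check matrix (r = 3, n = 6).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

# Enumerate the codebook (feasible for this toy size only).
C = [c for c in itertools.product((0, 1), repeat=6)
     if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)]

# Hypothetical LLR values y_i = ln(p(received_i | 0) / p(received_i | 1)):
# a large positive value means "probably a 0 was sent", a negative
# value means "probably a 1 was sent".
y = [2.1, -0.7, 1.5, -3.0, 0.2, 1.1]

# ML decoding: minimize the linear objective sum_i y_i * c_i over C.
ml = min(C, key=lambda c: sum(yi * ci for yi, ci in zip(y, c)))
print(ml)   # the ML codeword for these LLRs
```

For realistic block lengths the codebook C is far too large to
enumerate, which is what the integer programming formulations
avoid.</p>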
      <p>
        ML decoding can be achieved by minimizing this linear
objective function over all codewords c that satisfy (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ); in
other words,
min Σ_{i=1}^{n} y_i c_i
such that H · c ≡ 0 (mod 2),
c ∈ {0, 1}^n.
      </p>
      <p>Due to the modulo constraints, this is not exactly an integer
programming model of the ML decoding problem. However,
there exist several integer programming re-formulations of this
optimization problem. For example, Jon Feldman introduced
a formulation in which the number of both variables and
constraints grows exponentially in the number of ones per
row [2]. Among others, this exponential growth led us to
the development and investigation of the following improved
integer programming model with less constraints and variables
by the introduction of artificial variables z 2 Zm (one for each
row of H)</p>
      <p>min Σ_{i=1}^{n} y_i c_i
such that H · c − 2 z = 0, c ∈ {0, 1}^n, z ∈ Z^m.
The j-th row of the constraint system reads Σ_{i=1}^{n} H_{j,i}
c_i − 2 z_j = 0. Since z_j is required to be integral, this
equation ensures that an even number of the entries c_i taking
part in check j is set to 1. So, indeed this is a simple, general,
and sparse integer programming problem for the ML decoding
problem [3]. For any given code specified by a parity-check matrix
H, the solution to this problem is the maximum likelihood codeword.</p>
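      <p>The role of the auxiliary variables z can be verified by brute
force on a made-up toy matrix: H · c − 2z = 0 has an integral
solution z exactly when every entry of H · c is even, i.e. when
H · c ≡ 0 (mod 2):

```python
import itertools

# A made-up toy parity-check matrix (r = 3, n = 6).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

count = 0
for c in itertools.product((0, 1), repeat=6):
    Hc = [sum(h * x for h, x in zip(row, c)) for row in H]  # over the integers
    parity_ok = all(v % 2 == 0 for v in Hc)                 # H*c = 0 (mod 2)
    z = [v // 2 for v in Hc]            # candidate integral auxiliary vector
    z_ok = all(v - 2 * zj == 0 for v, zj in zip(Hc, z))     # H*c - 2z = 0
    assert parity_ok == z_ok    # both formulations select the same vectors
    count += parity_ok
print(count)   # number of codewords found by either formulation
```

Since z_j can absorb any even value of the j-th check sum, the
integrality of z is what replaces the modulo-2 arithmetic.</p>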
      <p>It should be emphasized that this problem is difficult to
solve (it is NP-hard in general). Therefore, the integrality
requirements on the variables c and z are dropped in order
to get the so-called LP-relaxation of this IP which can be
considered an approximation of the original problem.</p>
      <p>Our IP formulation proved to be beneficial in several ways.</p>
      <p>First, in [4], we solved this IP formulation by a commercial
all-purpose solver and compared its results with those of five
other IP formulations existing in the literature. To sum up,
our formulation had the fewest variables as well as
constraints, and the shortest solution time.</p>
      <p>Second, this formulation allows for an efficient procedure
to approximately solve this problem (this is important if
commercial solvers are not efficient enough or cannot be
used due to some constraints). In [3], a separation algorithm
was proposed utilizing the structural properties of this IP
formulation. This algorithm works as follows: First, the LP
solution is computed and checked for integrality. If it is
integral, it is obviously also the optimal IP solution and the
algorithm terminates. Otherwise, the formulation is improved
by adding additional linear constraints θᵀ c ≤ θ₀, (θ, θ₀) ∈ Θ,
where Θ ⊆ R^{n+1} is a set depending on some polyhedral
properties of the code (which shall not be specified due to the
intended brevity of this article). These constraints
ensure that if the LP obtained by adding them is
solved, then the previous LP solution is infeasible and the
new LP solution is closer to the IP solution. Applying this
idea iteratively yields an algorithm which approximates the IP
solution. Consequently, the LP to be solved repeatedly is of
the form
      <p>min Σ_{i=1}^{n} y_i c_i
such that H · c − 2 z = 0, θᵀ c ≤ θ₀ for all (θ, θ₀) ∈ Θ, c ∈ [0, 1]^n, z ∈ R^m.</p>
      <p>
        The constraints θᵀ c ≤ θ₀, (θ, θ₀) ∈ Θ, are derived from the
current LP solution based on some combinatorial reasoning,
and the indicator variables z allow for a very fast generation of
these constraints. Moreover, these constraints can be classified
with respect to optimization theory: although they are derived
in a combinatorial manner, they correspond to what are known
as Gomory cuts. Additionally, it turned out that the formulation
(
        <xref ref-type="bibr" rid="ref4">4</xref>
        ) offers another possibility which leads to an algorithm with
enhanced performance: The representation of the code in terms
of a parity check matrix H is not unique. In our separation
algorithm, alternative parity check matrices of the code are
generated during the execution as soon as the current one does
not allow for further improvement of the solution.
      </p>
      <p>To sum up, this formulation enabled the development of
an algorithm which (i) is applicable to any binary linear block
code, in contrast to existing heuristics which are tailored to
one specific type of code, (ii) yields a better error-correcting
performance than other algorithms, and (iii) employs properties
of binary linear codes to improve the algorithmic treatment.</p>
      <p>Besides this general purpose formulation, we have also
invented and investigated specialized approaches for an
important subclass of codes, as described below.</p>
      <p>III. LINEAR PROGRAMMING FOR TRELLIS-BASED CODES</p>
      <p>A class of codes that have gained tremendous interest since
their invention in 1993 are the so-called turbo codes. A turbo
code is built by the parallel concatenation of two convolutional
codes that are separated by an interleaver. Convolutional codes
have an inherent graph structure by their trellis representation
which facilitates both the encoding and the decoding
process. This structure is partly inherited by the more complex
turbo codes, which allowed us to invent a powerful integer
programming formulation, as well as several approximation
algorithms, for this special set of codes.</p>
      <p>The encoder of a convolutional code is built by a number
of delay shift registers (D) and several XOR (binary
addition) gates, as shown in Figure 6. The registers comprise
the memory of the encoder, and the (binary) content of the
registers defines its state. Thus, an encoder with k registers has
2^k possible states. By the XOR gates, the current state of the
encoder, together with the current input bit, determines both
the two current output bits at Out 1 and Out 2 and the new state
for the next step. This process can be visualized by the trellis
graph of the code, where each vertex represents a state at a</p>
      <p>specific step, and there are two types of arcs for zero and one
inputs, respectively, that model the transition of one state to
another (see Figure 7 for a trellis with 4 states). One usually
assumes that initially all registers contain a zero, therefore at
the first step there is only one possible state. Arcs representing
a zero input bit are drawn in a dashed style, while solid arcs
correspond to a one at the input.</p>
      <p>
        In a trellis, a codeword corresponds to a path from the initial
state at the left to the end state at the right (which is not
visible in the figure). It is possible to assign a cost attribute
to each arc, based on the LLR values from (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), in a way that
the ML codeword can be found by computing the path of
minimum cost—a task for which very fast algorithms exist.
Therefore, for a simple convolutional code, ML decoding can
be accomplished efficiently.
      </p>
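      <p>The minimum-cost path computation on a trellis can be sketched
as a small dynamic program in the style of the Viterbi algorithm. The
4-state shift-register transition rule and the arc costs below are
hypothetical; in a real decoder the costs would be derived from the
LLR values:

```python
# Successor state for input bit b in state s (2 registers, 4 states):
# the new bit is shifted in and the oldest register bit falls out.
def next_state(s, b):
    return (2 * b) | (s // 2)

def min_cost_path(costs):
    """costs[t][s][b] = cost of the arc for input bit b leaving state s
    at step t. Returns (total cost, input bits) of the cheapest path,
    starting from the all-zero state as usual for the encoder."""
    best = {0: (0.0, [])}                 # state -> (cost so far, inputs)
    for step in costs:
        new_best = {}
        for s, (cost, bits) in best.items():
            if s in step:
                for b in (0, 1):
                    t = next_state(s, b)
                    cand = (cost + step[s][b], bits + [b])
                    if t not in new_best or new_best[t][0] > cand[0]:
                        new_best[t] = cand
        best = new_best
    return min(best.values())             # cheapest reachable end state

# Three trellis steps with made-up arc costs; at step 0 only the
# all-zero state is reachable.
costs = [
    {0: {0: 1.0, 1: 0.2}},
    {s: {0: 0.5, 1: 0.6} for s in range(4)},
    {s: {0: 0.1, 1: 0.9} for s in range(4)},
]
total, bits = min_cost_path(costs)
print(bits)   # input bits along the cheapest path
```

The work per step is proportional to the number of states, not to the
number of codewords, which is why trellis decoding of a convolutional
code is efficient.</p>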
      <p>The encoder of a turbo code uses two convolutional
encoders. The first one processes the input bits as described
above. A copy of the input is sent into the second encoder after
passing an interleaver. The interleaver permutes the bits in a
pseudo-random way before they enter the second encoder. This
way, the input (and thus also the output) is different for both
encoders. By this additional complexity, the theoretical
error-correcting performance of a turbo code is vastly increased
compared to single convolutional encoding, as vital research
over the past decades has revealed. The downside of this is
that the decoding complexity, the effort to compute the
ML codeword for a given received signal, also increases. Speaking
in terms of trellis graphs, a turbo code can be modelled by
two trellises which are synchronized by the constraint that in
a given step, the path in the first trellis must use the same type
of arc (zero or one) as the corresponding step in the second
trellis, which is determined by the interleaver function. The
ML codeword is now represented by two paths (one for each
trellis) of minimum cost that match these additional constraints
defined by the interleaver. Unfortunately, it turns out that this
problem is much harder than the single, unconstrained
min-cost path problem in the case of a convolutional code. However, we
can formulate the synchronized trellises as an integer linear
program, where the interleaver constraints are of the form</p>
      <p>Σ_{a ∈ S¹_{i,1}} x_a = Σ_{a ∈ S²_{π(i),1}} x_a.</p>
      <p>
        (
        <xref ref-type="bibr" rid="ref5">5</xref>
        )
Here, S¹_{i,1} are the one-arcs in the first trellis for step i, and
S²_{π(i),1} are the one-arcs in the second trellis for the step π(i)
to which the interleaver maps i.
      </p>
      <p>
        We have run numerical simulations, showing that this
formulation leads to greatly reduced computation time compared
to the generic one given in (
        <xref ref-type="bibr" rid="ref4">4</xref>
        ). This is because only a rather
small part of the problem definition, namely the interleaver
constraints (
        <xref ref-type="bibr" rid="ref5">5</xref>
), separates the problem from the easy problem
of unconstrained shortest path computation. Lagrangian
relaxation can be used in such a situation of several “complicating
constraints”. We have adapted this technique to the case of
turbo codes [5]. The idea of Lagrangian relaxation is to remove
those constraints, but penalize their violation in the objective
function. In the context of turbo coding, this means that we
allow the two paths to use different types of arcs in the steps
that are interconnected by the interleaver, but each of these
violations increases the total cost of the path—the constraint
given in (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ) leads to a penalty term, weighted by a Lagrangian multiplier λ_i,
in the objective function, which becomes larger the more the
two sums differ.
      </p>
      <p>A central result of Lagrangian relaxation is that, for an
appropriate choice of the penalty factors λ_i, the optimal paths with
respect to the Lagrangian relaxation will converge to the LP
relaxation of the problem, where all additional constraints are
satisfied. However, in general no integral solution is obtained.</p>
      <p>One of our approaches to finding integral solutions is based
on the following idea: Given the shortest path on one of
the trellises, it is rather unlikely that the corresponding path
on the other trellis (determined by the interleaver) is also
the shortest path for that trellis. However, the optimal global
solution corresponds to the k-th shortest path on one of the
trellises, and under certain assumtions k will be relatively
small k. We have adapted an efficient algorithm to compute
the k shortest paths on a graph to the trellis scenario, and
showed that, for low noise and not too large trellises, this can
be an efficient approach—especially in conjunction with the
modified objective function by the Lagrangian relaxation.</p>
    </sec>
    <sec id="sec-3">
      <title>IV. RESULTS, IMPACT, &amp; OUTLOOK</title>
      <p>Remember that in practical communication systems the use
of an optimal decoder is very often not feasible, because
of its large complexity. So when implementing decoders on
a chip suboptimal algorithms are used. They are much less
complex than the optimal decoder, but also perform worse.
The algorithms are often iterative heuristics that usually run
a fixed number of iterations or abort earlier according to
some stopping criterion. For example, the decoding of LDPC
codes in practical applications is usually done with a special
iterative algorithm called belief propagation, which is also used
in artificial intelligence [6].</p>
      <p>Fig. 9. Frame error rate versus signal-to-noise ratio [dB] for a WiMAX LDPC code with block length n = 96: transmission without channel code, suboptimal decoding, and optimal decoding as measured with our decoder; the gap between the hardware implementation and the optimal performance is marked</p>
      <p>Finding decoding algorithms with a good tradeoff between
hardware complexity and communications performance is an
important research area for hardware designers. To judge an
algorithm’s performance it can be compared to the optimal
decoding. This comparison also helps to estimate if further
research on new algorithms is promising or not. However the
problem is that for most linear block codes used in practice
the optimal decoding performance is unknown.</p>
      <p>Our aim is to provide ML simulation results for practical
channel codes with larger block lengths. Furthermore, with
mathematical techniques we would like to gain further insight into
channel codes and their decoding procedures.</p>
      <p>In Section I we presented the optimal decoding problem
as an integer optimization problem. With the help of the presented
techniques we built an optimal decoder that is able to perform
ML decoding efficiently for larger block lengths. With this
decoder we measured the optimal performance of some short
channel codes used in the recent communication standard
WiMAX [7], which was previously unknown.</p>
      <p>Figure 9 shows an LDPC code taken from WiMAX with
a block length of 96 bits. The suboptimal performance, as
it may be implemented on a chip, is shown. Additionally a
second curve is shown, which was simulated with the use
of our integer optimization decoder and shows the optimal
performance.</p>
      <p>One can see that there is a gap of about 1 dB between the
algorithm used in hardware and what is theoretically
achievable. This gap is quite large if you consider that sometimes
a few tenths of a dB are a good sales argument.</p>
      <p>The work on ML decoding as an optimization problem
did not only lead to a better understanding of the decoding
performance of the suboptimal algorithms and new simulation
results. The study of different formulations also led to a new
stopping criterion for an iterative decoding algorithm. Stopping
criteria abort the iterative process early if sufficient
iterations have been made. Here we used a cost function to indicate
when the iterations can be stopped. This leads to a faster
computation and a more power efficient decoding algorithm
in hardware.</p>
      <p>The study of linear block codes gives rise to a lot of
new types of mathematical problems and structures that have
not been considered before, as well as interesting links to
different mathematical fields (like integer programming, graph
and network theory, finite algebra, matroid theory). If we use
integer programming for ML decoding, we are in the situation
that, for a given code, we solve the same program over and
over again, where only the cost vector varies. So far, only
little attention has been given to this special case, which might
allow for more efficient ways to solve the problem by re-using
information that was collected in previous problem instances.
As another example, by studying certain types of cyclic codes
we encountered a new type of network flow problem, where
the supply or demand of vertices is variable and thus a part
of the problem itself, but restricted to be a multiple of 2.
Also, the application of coding theory might arouse interest in
constraints of the form A x ≡ b (mod q) in integer programs,
which have not yet been studied extensively.</p>
      <p>The recently developed 3-dimensional turbo codes, which
are an extension of turbo codes that use a third convolutional
encoder in conjunction with a second interleaver, show that
the development of good Lagrangian relaxation techniques for
trellis-like graphs remains an important task for the area of
convolutional coding.</p>
    </sec>
    <sec id="sec-4">
      <title>ACKNOWLEDGEMENT</title>
      <p>We thank Horst W. Hamacher, Frank Kienle, Stefan Ruzika,
and Norbert Wehn for their constructive comments and
suggestions. We gratefully acknowledge financial support by the
Center of Mathematical and Computational Modelling of the
University of Kaiserslautern.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Costello</surname>
          </string-name>
          , Error Control Coding, Second Edition. Upper Saddle River, NJ, USA: Prentice-Hall, Inc.,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Feldman</surname>
          </string-name>
          , “
          <article-title>Decoding error-correcting codes via linear programming</article-title>
          ,
          <source>” Ph.D. dissertation</source>
          , Massachusetts Institute of Technology,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tanatmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruzika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hamacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Punekar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kienle</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Wehn</surname>
          </string-name>
          , “
          <article-title>A separation algorithm for improved LP-decoding of linear block codes,” Information Theory, IEEE Transactions on</article-title>
          , vol.
          <volume>56</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>3277</fpage>
          -
          <lpage>3289</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tanatmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruzika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Punekar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Kienle</surname>
          </string-name>
          , “
          <article-title>Numerical Comparison of IP Formulations as ML Decoders,</article-title>
          ” in
          <source>Communications (ICC)</source>
          ,
          <source>2010 IEEE International Conference on. IEEE</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tanatmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruzika</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Kienle</surname>
          </string-name>
          , “
          <article-title>A lagrangian relaxation based decoding algorithm for LTE turbo codes</article-title>
          ,
          <source>” in Turbo Codes and Iterative Information Processing (ISTC)</source>
          ,
          <source>2010 6th International Symposium on. IEEE</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>369</fpage>
          -
          <lpage>373</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Richardson</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Urbanke</surname>
          </string-name>
          , “
          <article-title>The Renaissance of Gallager's LowDensity Pariy-Check Codes,” IEEE Communications Magazine</article-title>
          , vol.
          <volume>41</volume>
          , pp.
          <fpage>126</fpage>
          -
          <lpage>131</lpage>
          , Aug.
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[7] IEEE 802.16</source>
          ,
          <article-title>Local and metropolitan area networks; Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems; Amendment 2:Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands</article-title>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>