<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Segmenting Sequences Semantically</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Markus Huber</string-name>
          <email>markus.huber@b-tu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthias Wolff</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Brandenburg University of Technology Cottbus-Senftenberg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>InnoTec21 GmbH</institution>
          ,
          <addr-line>Leipzig</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>139</fpage>
      <lpage>157</lpage>
      <abstract>
        <p>In previous work we presented an extension and generalisation of finite state transducers (FSTs) to so-called Petri net transducers (PNTs). These are applicable to any form of transforming sequential input signals into non-sequential output structures, which can be used to represent the semantics of the input, by realising a weighted relation between partial languages, i.e. assigning one weight per related input-output pair. This paper extends the framework by an additional weighted output where every node of the structure carries a weight. Moreover, the output structure induces segments on all intermediate transducers and thus on the input, giving a more detailed view on how the single weight emerged during the transformation. We extend the definitions of PNTs, show the resulting changes for the operation of language composition, and discuss the impact on other PNT-algorithms. The theory is accompanied by an example of translating a keyboard input stream into a structural representation of commands to be executed.</p>
      </abstract>
      <kwd-group>
        <kwd>Petri net transducer</kwd>
        <kwd>Weighted labelled partial order</kwd>
        <kwd>Concurrent semiring</kwd>
        <kwd>Natural language understanding</kwd>
        <kwd>Cognitive systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] we introduced a special form of Petri nets for the weighted translation of partial
languages. We showed how Petri net transducers (PNTs) are a generalisation of finite
state transducers (FSTs) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and discussed several operations including language
composition where a cascade of transducers translates from a source language into a target
language by means of intermediate languages. In [
        <xref ref-type="bibr" rid="ref14 ref15 ref23">23,14,15</xref>
        ] we proposed the use of
PNTs in the field of semantic dialogue modelling for the translation of utterances into
meanings. We first give a brief overview of the area of research within which our current
work is developed.
      </p>
      <p>
        Cognitive systems (including speech dialogue systems) mainly consist of three parts
allowing them to interact with their environments: perceptor, behaviour control, and
actuator [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Of these, perceptor and actuator each comprise a hierarchy of transforming
units and bidirectional communication between the particular levels (cf. [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]).
      </p>
      <p>The responsibility of the perceptor hierarchy is translating input signals into semantic
representations where the relevant parts from the input are related to semantic categories
relevant to the system. While a low-level signal – e.g. the name Wolfgang Amadeus
Mozart when spoken by a person – is sequential, its semantics is, in general,
non-sequential – although there is an order relation between Wolfgang and Amadeus there is
no order between those and Mozart.</p>
      <p>
        We use the concept of feature-values-relations (FVRs) [
        <xref ref-type="bibr" rid="ref10 ref9">10,9</xref>
        ] as semantic
representations. In figure 1 the semantic structure for the name Wolfgang Amadeus Mozart
can be seen where the semantic categories are depicted as ellipses and the parts from
the signal as rectangles. The order relation between Wolfgang and Amadeus is retained
by the order of their semantic categories. Although the category of middle name exists,
examples such as Pippilotta Viktualia Rullgardina Krusmynta Efraimsdotter Långstrump or
Oscar Zoroaster Phadrig Isaac Norman Henkle Emmannuel Ambroise Diggs should
motivate the need for a more general approach to repetition. Note that although the
example obeys a tree-like form, FVRs are neither limited to trees nor need to be single
rooted. Which semantic categories and which parts of an input signal are of relevance
for a concrete system depends on its scope of action. Therefore semantics is clearly
not a function of (only) the input signal and especially not a function of its syntax. By
assigning weights to the nodes of an FVR it is possible to represent the confidence of
each single part of the semantics as opposed to an overall weight.
      </p>
      <p>
        Examples for non-sequential semantic representations are concept relational trees
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] whose structures are constrained to binary trees, and abstract meaning representation
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and feature structures [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] which can have a more general structure. All of them
are single rooted, do not provide a convenient way to preserve order information of repeated
parts, and are computed from syntactic analysis. Furthermore none of them carries
weights on individual nodes and only the last two have an overall weight assigned.
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] uses relational trees where connections between entities relevant for the system
are represented. The authors state that ‘in most cases a relation link is analogous to a
syntactic dependency link’ and again these tree-like structures are single rooted, do not
cover the preservation of order information, and there is only a single weight assigned.
      </p>
      <p>
        We use labelled partial orders (LPOs) to represent FVRs, and PNTs to represent sets
of LPOs and sets of pairs of LPOs. The transducer operation of language composition
allows building hierarchical bidirectional translation systems and since PNTs are a
generalisation of FSTs we can utilise results from existing research in the field of
speech technology [
        <xref ref-type="bibr" rid="ref25 ref8">8,25</xref>
        ] to build on. The resulting transducer cascade allows bottom-up
processing of input signals to output decoded meaning, and top-down processing using
prior knowledge and expectations. Systems as described in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] all have a gap between
syntax and semantics since they first translate input signals into syntactic representations
and do their semantic analysis afterwards. Our approach needs no premature decisions
because we directly translate from the signal level into semantic representations. Also
the priming of the perceptor hierarchy by simply appending another transducer which
represents expected semantics is (to our knowledge) a unique feature.
      </p>
      <p>
        The semantic structure – the output of the perceptor – serves as input to the behaviour
control which computes the reaction of the cognitive system. These reactions also include
interactions with the environment to request additional information for an incomplete
structure. New input is incorporated into the already collected information. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] several
operations on FVRs are described which admit an algebraic structure on the set of FVRs.
In [
        <xref ref-type="bibr" rid="ref19 ref20">20,19</xref>
        ] a similar approach is described where relational trees are used to represent
the information state of the system and some operations are described using so-called
tree regular expressions. However, there is a complete lack of algebraic investigation.
      </p>
      <p>
        The computed output of the behaviour control is again a semantic structure which
gets translated by the actuator hierarchy into an output signal. According to system
theory the actuator should use the same formalism as the perceptor (cf. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]). Since PNTs
provide the operation of inversion – by swapping the input and output label of each
transition – they constitute a good choice. In contrast to PNTs, tree transducers [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and
DAG transducers [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] are both derivation tree oriented and in general not closed under
language composition and inversion. Neither allows semantic priming in general, but
both can be used for generation with some reservations.
      </p>
      <p>
        Speech dialogue systems must operate under real-time conditions. There is a period
of a few hundred milliseconds for analysing the input, computing the reaction, and
generating the output. Systems such as the Unified Approach to Signal Synthesis and
Recognition (UASR) [
        <xref ref-type="bibr" rid="ref24 ref8">8,24</xref>
        ] have shown that FSTs can fulfil these requirements. We
are confident that PNTs are also capable of meeting these demands and are currently
developing an embedded cognitive user interface for intuitive interaction with arbitrary
electronic devices [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] where we already applied first results.
      </p>
      <p>
        In previous work we also proposed a variant of PNTs capable of handling output
LPOs of arbitrary width (rhPNTs [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]) which are needed to produce general structures
as in figure 1. For the work presented here we do not use that extension, but we will
describe another enhancement of PNTs which is capable of segmenting the input (sensor
signal, speech or some text) into semantic units and assigning meaningful weights –
e.g. likelihoods or probabilities – to these segments. Figure 2 shows a simple example of
an input text and the decoded semantics. The scenario is that a user types the command
cp PNTs ATAES/ on the keyboard, trying to copy a file named PNTs to a folder
named ATAED. The input – a sequence of character events – is translated into a semantic
structure for executable commands including correction of the destination of the copy
command. So the system executes the user's intention, not the command they typed.
      </p>
      <p>
          The paper is organised as follows: In section 2 we recall basic definitions and
in section 3 introduce the translation formalism used, always demonstrating the concepts
on our example. Subsections 3.1 and 3.2 reuse the definitions from [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and
subsection 3.2 extends the definitions from [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In section 4 we introduce the new
notion of segments of weighted LPOs and extend Petri net transducers to make use of
them. Afterwards, we introduce segmenting Petri net transducers fusing the concepts
given so far and finally give a brief conclusion and outlook on future work in section 5.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Mathematical Preliminaries</title>
      <p>By N0 we denote the set of non-negative integers, by N the set of positive integers.</p>
      <p>The set of all multisets over a set X is the set N0^X of all functions f : X → N0.
Addition + on multisets is defined by (m + m′)(x) = m(x) + m′(x). The relation ≤
between multisets is defined through m ≤ m′ :⇔ ∃m″ (m + m″ = m′). We write x ∈ m
if m(x) &gt; 0. A set A ⊆ X is identified with the multiset m satisfying (m(x) = 1 ⇔ x ∈
A) ∧ (m(x) = 0 ⇔ x ∉ A). A multiset m satisfying m(a) &gt; 0 for exactly one element a
we call a singleton multiset and denote it by m(a)a.</p>
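As an illustration (ours, not part of the paper's formalism), multisets and the operations just defined map directly onto Python's `collections.Counter`:

```python
from collections import Counter

# A multiset m over X is a function X -> N0; Counter models this,
# with m[x] == 0 for absent elements.
m1 = Counter({"a": 2, "b": 1})
m2 = Counter({"a": 1})

# Addition is pointwise: (m + m')(x) = m(x) + m'(x).
total = m1 + m2

def leq(m, m_prime):
    """m <= m' iff some m'' satisfies m + m'' = m', i.e. pointwise <=."""
    return all(m[x] <= m_prime[x] for x in m)

def is_singleton(m):
    """A singleton multiset has m(a) > 0 for exactly one element a."""
    return len([x for x in m if m[x] > 0]) == 1
```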
      <p>Given a binary relation R ⊆ X × Y over the sets X, Y and sets A ⊆ X and B ⊆ Y, we
denote the image of A by R(A) = {y ∈ Y | ∃x ∈ A : (x, y) ∈ R} and the preimage of B by
R−1(B) = {x ∈ X | ∃b ∈ B : (x, b) ∈ R}. We denote by Dom(R) = R−1(Y) the domain of
R and call R(X) the image of R. For X′ ⊆ X and Y′ ⊆ Y the restriction of R onto X′ × Y′
is denoted by R|X′×Y′.</p>
      <p>Given a binary relation R ⊆ X × Y and a binary relation S ⊆ Y × Z for sets X, Y, Z,
their composition is defined by R ◦ S = {(x, z) | ∃y ∈ Y ((x, y) ∈ R ∧ (y, z) ∈ S)} ⊆ X × Z.
For a binary relation R ⊆ X × X over a set X, we denote R1 = R and Rn = R ◦ Rn−1 for
n ≥ 2. The symbol R+ denotes the transitive closure ⋃n∈N Rn of R.</p>
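A minimal sketch (our illustration) of relation composition and the transitive closure as a fixpoint over finite relations:

```python
def compose(R, S):
    """R ∘ S = {(x, z) | ∃y: (x, y) ∈ R and (y, z) ∈ S}."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def transitive_closure(R):
    """R+ = union of R^n for n >= 1, computed by fixpoint iteration."""
    closure = set(R)
    while True:
        new = compose(closure, R) | closure
        if new == closure:
            return closure
        closure = new
```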
      <p>Let A be a finite set called an alphabet. A word over A is a finite sequence of
symbols from A. For a word w its length |w| is the number of its symbols. The symbol ε
denotes the empty word satisfying |ε| = 0 and is the neutral element w.r.t. concatenation of
words: wε = εw = w. By A∗ we denote the set of all words over A, including the empty word.
A step over A is a multiset over A. A step sequence over A is an element of (N0^A)∗.</p>
      <p>A directed graph is a pair G = (V, →), where V is a finite set of nodes and → ⊆ V × V
is a binary relation over V, called the set of edges. For a node v ∈ V its preset is the
set •v = →−1({v}) and its postset the set v• = →({v}). A path is a sequence of (not
necessarily distinct) nodes v1 . . . vn (n &gt; 1) such that vi → vi+1 for i = 1, . . . , n − 1. A
path v1 . . . vn is a cycle if v1 = vn. A directed graph is called acyclic if it has no cycles.
For an acyclic directed graph G = (V, →) its maximal nodes form the set Max(G) = {v ∈
V | v• = ∅}, its minimal nodes the set Min(G) = {v ∈ V | •v = ∅}. An acyclic directed
graph (V, →′) is an extension of an acyclic directed graph (V, →) if → ⊆ →′.</p>
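The pre/postset and Max/Min notions can be sketched directly on edge sets (our illustration, with edges as pairs):

```python
def preset(edges, v):
    """•v: all nodes with an edge into v."""
    return {x for (x, y) in edges if y == v}

def postset(edges, v):
    """v•: all nodes reachable from v by one edge."""
    return {y for (x, y) in edges if x == v}

def max_nodes(nodes, edges):
    """Max(G): nodes with empty postset."""
    return {v for v in nodes if not postset(edges, v)}

def min_nodes(nodes, edges):
    """Min(G): nodes with empty preset."""
    return {v for v in nodes if not preset(edges, v)}

V = {"a", "b", "c"}
E = {("a", "b"), ("b", "c")}
```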
      <p>An irreflexive partial order over a set V is a binary relation &lt; ⊆ V × V which is
irreflexive (∀v ∈ V : v ≮ v) and transitive (&lt; = &lt;+). A reflexive partial order over a set V
is a binary relation ≤ ⊆ V × V which is reflexive (∀v ∈ V : v ≤ v), transitive (≤ = ≤+)
and antisymmetric (∀v, w ∈ V : v ≤ w ∧ w ≤ v =⇒ v = w). We identify a finite partial
order ∼ over V with the directed graph (V, ∼). Given a partial order po = (V, ∼) we call
two nodes v, v′ ∈ V independent if v ≁ v′ and v′ ≁ v. By co∼ ⊆ V × V we denote the set
of all pairs of independent nodes of V.</p>
      <p>A monoid is a triple (S, ∗, n) where S is a set, ∗ is a binary closed operation on S
(∀a, b ∈ S : a ∗ b ∈ S), and n is the neutral w.r.t. ∗ (∀a ∈ S : a ∗ n = a = n ∗ a). The operation
is often written by juxtaposition (ab = a ∗ b). If ∗ is idempotent (∀a ∈ S : aa = a) the
monoid is called idempotent. If ∗ is commutative (∀a, b ∈ S : ab = ba) the monoid is
called commutative. If there exists an absorbing element 0 ∈ S (∀a ∈ S : a0 = 0 = 0a)
the monoid is called a monoid with zero. A monoid is called ordered if it is equipped
with a reflexive partial order ≤ such that the operation ∗ is monotone (∀a, b ∈ S : a ≤
b =⇒ (∀c ∈ S : c ∗ a ≤ c ∗ b ∧ a ∗ c ≤ b ∗ c)).</p>
      <p>For a given monoid (S, ∗, n) the operation ∗ defines a binary relation on S via
a ≤∗ b :⇔ ab = b. If this relation is a reflexive partial order, the monoid is called
naturally ordered.</p>
      <p>An idempotent and commutative monoid (S, ∗, n) is naturally ordered. Moreover, if
S is equipped with the reflexive partial order ≤∗, then ∀a, b ∈ S : ab = sup{a, b}, where
the supremum is taken w.r.t. ≤∗.
</p>
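A concrete instance (our illustration): finite sets under union form an idempotent and commutative monoid, so they are naturally ordered, the order ≤∗ is exactly the subset relation, and a ∗ b is the supremum of {a, b}:

```python
# Monoid (finite sets, union, empty set): idempotent and commutative.
def star(a, b):
    return a | b

def nat_leq(a, b):
    """a ≤* b :<=> a * b == b; for union this is the subset relation."""
    return star(a, b) == b

a, b = frozenset({1}), frozenset({2})
c = star(a, b)   # the supremum of {a, b} w.r.t. ≤*
```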
    </sec>
    <sec id="sec-3">
      <title>Input and Output</title>
      <p>We use LPOs extended by weights from an algebraic structure called bisemiring to
represent the input and output structures as depicted in figure 2, and use PNTs by
means of the non-sequential semantics of Petri nets for the weighted translation of LPOs.</p>
      <sec id="sec-3-1">
        <title>Weighting of Structures</title>
        <p>
          Except for definition 2, this subsection restates the definitions from [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>A labelled partial order (LPO) over a set X is a 3-tuple (V, &lt;, l), where (V, &lt;) is an
irreflexive partial order and l : V → X is a labelling function on V. In most cases, we
only consider LPOs up to isomorphism, i.e. only the labelling of nodes is of interest, but
not their names. Two LPOs (V, &lt;, l) and (V′, &lt;′, l′) are isomorphic if there is a bijective
renaming function I : V → V′ satisfying l(v) = l′(I(v)) and v &lt; w ⇔ I(v) &lt;′ I(w). In
figures, we only show the labels of nodes, and normally omit transitive arrows. If an LPO
lpo is of the form ({v}, ∅, l), then it is called a singleton LPO. A step-wise linear LPO is
an LPO (V, &lt;, l) where the relation co&lt; is transitive. The maximal sets of independent
nodes are called steps. The steps of a step-wise linear LPO are linearly ordered. Thus,
step-wise linear LPOs can be identified with step sequences. A step-linearisation of an
LPO lpo is a step-wise linear LPO lpo′ which is an extension of lpo.</p>
        <p>The set of series-parallel LPOs (sp-LPOs) is the smallest set of LPOs containing all
singleton LPOs (over a set X) and being closed under sequential and parallel product.
For two LPOs lpo1 = (V1, &lt;1, l1) and lpo2 = (V2, &lt;2, l2), where V1 ∩ V2 = ∅ is assumed,
their sequential product is defined by lpo1 ; lpo2 = (V1 ∪ V2, &lt;1 ∪ &lt;2 ∪ V1 × V2, l1 ∪ l2)
and their parallel product is defined by lpo1 ∥ lpo2 = (V1 ∪ V2, &lt;1 ∪ &lt;2, l1 ∪ l2). For an
LPO lpo we denote by SP(lpo) the set of all series-parallel extensions of lpo and by
SPmin(lpo) the set of all minimal series-parallel extensions of lpo in SP(lpo). If lpo is an
extension of lpo′, we write lpo ≤ lpo′.</p>
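The two products can be sketched on LPOs represented as (nodes, strict order, labelling) triples; as an illustration (ours) we rebuild the shape of figure 1's name parts, (Wolfgang ; Amadeus) ∥ Mozart:

```python
# LPO as (V, <, l): node set, strict order as a set of pairs, labelling dict.
def seq_product(lpo1, lpo2):
    """lpo1 ; lpo2: every node of lpo1 precedes every node of lpo2."""
    (V1, lt1, l1), (V2, lt2, l2) = lpo1, lpo2
    assert V1.isdisjoint(V2)
    lt = lt1 | lt2 | {(v1, v2) for v1 in V1 for v2 in V2}
    return (V1 | V2, lt, {**l1, **l2})

def par_product(lpo1, lpo2):
    """lpo1 || lpo2: no order between nodes of the two operands."""
    (V1, lt1, l1), (V2, lt2, l2) = lpo1, lpo2
    assert V1.isdisjoint(V2)
    return (V1 | V2, lt1 | lt2, {**l1, **l2})

w = ({1}, set(), {1: "Wolfgang"})
a = ({2}, set(), {2: "Amadeus"})
m = ({3}, set(), {3: "Mozart"})
name = par_product(seq_product(w, a), m)
```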
        <p>A semiring is a quintuple S = (S, ⊕, ⊗, 0, 1), where (S, ⊕, 0) is a commutative
monoid, (S, ⊗, 1) is a monoid with zero where 0 is the absorbing element, and ⊗ (the
S-multiplication) distributes over ⊕ (the S-addition) from both sides. If ⊗ is commutative,
then the semiring is called commutative.</p>
        <p>A bisemiring is a six-tuple S = (S, ⊕, ⊗, ⊙, 0, 1), where (S, ⊕, ⊗, 0, 1) is a semiring
and (S, ⊕, ⊙, 0, 1) is a commutative semiring. The binary operation ⊙ on the set S is
called S-parallel multiplication. If ⊕ is idempotent, the bisemiring is called idempotent.</p>
        <p>A concurrent semiring is an idempotent bisemiring (S, ⊕, ⊗, ⊙, 0, 1) satisfying
∀a, b, c, d ∈ S : (a ⊙ b) ⊗ (c ⊙ d) ≤⊕ (a ⊗ c) ⊙ (b ⊗ d). (CS)</p>
        <p>A weighted LPO (wLPO) over an alphabet A and a bisemiring S = (S, ⊕, ⊗, ⊙, 0, 1)
is a quadruple (V, &lt;, l, ν) such that (V, &lt;, l) is an LPO over A and ν : V → S is an
additional weight function. We use all notions introduced for LPOs also for wLPOs.</p>
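A simple concrete instance (our illustration, not prescribed by the paper): rational weights in [0, 1] with ⊕ = max and ⊗ = ⊙ = multiplication form an idempotent bisemiring, and (CS) even holds with equality, since both sides multiply the same four factors. A spot-check with exact arithmetic:

```python
from fractions import Fraction
from random import Random

oplus = max                       # ⊕ = max (idempotent)
otimes = lambda a, b: a * b       # ⊗ = multiplication
odot = lambda a, b: a * b         # ⊙ = multiplication

def leq_oplus(a, b):
    """a ≤⊕ b :<=> a ⊕ b == b."""
    return oplus(a, b) == b

# Spot-check condition (CS) on random rationals in [0, 1]; Fractions
# avoid floating-point rounding noise.
rng = Random(0)
for _ in range(100):
    a, b, c, d = (Fraction(rng.randint(0, 10), 10) for _ in range(4))
    assert leq_oplus(otimes(odot(a, b), odot(c, d)),
                     odot(otimes(a, c), otimes(b, d)))
```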
        <p>The weight of sp-wLPOs is defined in the obvious way. The total weight is computed
from the weights of the nodes by applying ⊗ to the sequential product and ⊙ to
the parallel product of sub-wLPOs. This is well-defined, since the set of sp-wLPOs as
well as the sub-structure (S, ⊗, ⊙) of a bisemiring (S, ⊕, ⊗, ⊙, 0, 1) form an sp-algebra
admitting an sp-algebra homomorphism from the set of sp-wLPOs into the bisemiring.
For an sp-wLPO wlpo its weight is denoted by ω(wlpo).</p>
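This homomorphism can be sketched on sp-terms (our illustration): a term is either a node weight or a pair combined with ";" (sequential) or "||" (parallel). We assume, as above, a bisemiring where both ⊗ and ⊙ are multiplication, and the node weights echo figure 2 (SRC/1.00, FILE/0.50, PNTs/0.63):

```python
def weight(term):
    """Weight of an sp-wLPO given as an sp-term.

    A term is either a number (weight of a singleton node) or a tuple
    (op, left, right) with op in {";", "||"}; ";" is folded with otimes,
    "||" with odot (both multiplication in this illustrative bisemiring).
    """
    otimes = lambda a, b: a * b   # sequential multiplication
    odot = lambda a, b: a * b     # parallel multiplication
    if isinstance(term, (int, float)):
        return term
    op, left, right = term
    f = otimes if op == ";" else odot
    return f(weight(left), weight(right))

# (1.00 ; 0.50) || 0.63 -> 1.00 * 0.50 * 0.63
w = weight(("||", (";", 1.0, 0.5), 0.63))
```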
        <p>The weight for general wLPOs is computed as the sum of the weights of all its
series-parallel extensions. Condition (CS) ensures that less restrictive wLPOs yield
bigger weights. So in the case of using concurrent semirings it suffices to consider only
minimal series-parallel extensions.</p>
        <p>
          Definition 1 (sp-Weight of wLPOs). Let wlpo = (V, &lt;, l, ν) be a wLPO over a
concurrent semiring. Then its sp-weight is defined by ωsp(wlpo) = ⊕wlpo′∈SPmin(wlpo) ω(wlpo′).
        </p>
        <p>
          Definition 2 (Sum of wLPOs). Let wlpo = (V, &lt;, l, ν) and wlpo′ = (V′, &lt;′, l′, ν′) be
two wLPOs over the bisemiring S = (S, ⊕, ⊗, ⊙, 0, 1) having isomorphic underlying
LPOs and let I : V → V′ be the corresponding bijective renaming function. Then their
sum is defined by wlpo ⊕ wlpo′ = (V, &lt;, l, ν ⊕ (ν′ ◦ I)), where the sum of the weight
functions is defined pointwise.
        </p>
        <p>
          We use the concept of transducers to translate between LPOs by augmenting weighted
transitions with input and output symbols. A run is represented as a wLPO over the
transitions which can then be projected onto the input resp. output symbols. This
subsection restates the material from [
          <xref ref-type="bibr" rid="ref11 ref16">16,11</xref>
          ] and extends the notions from [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] with weights.
Additionally, definitions 5, 7 and 8 provide formalisations of some concepts from [
          <xref ref-type="bibr" rid="ref11 ref16">16,11</xref>
          ].
        </p>
        <p>A place/transition Petri net (PT-net) is a 4-tuple N = (P, T, F, W), where P is a finite
set of places, T is a finite set of transitions disjoint from P, F ⊆ (P × T) ∪ (T × P) is
the flow relation and W : (P × T) ∪ (T × P) → N0 is a flow weight function satisfying
W(x, y) &gt; 0 ⇔ (x, y) ∈ F. A marking of a PT-net assigns to each place p ∈ P a number
m(p) ∈ N0 of tokens, i.e. a marking is a multiset over P representing a distributed
state. A marked PT-net is a PT-net N = (P, T, F, W) together with an initial marking
m0. For (transition) steps τ over T we introduce the two multisets of places •τ(p) =
∑t∈T τ(t)W(p, t) and τ•(p) = ∑t∈T τ(t)W(t, p). A transition step τ can occur in m if
m ≥ •τ. If τ occurs in m, the resulting marking m′ is defined by m′ = m − •τ + τ•. We
write m →τ m′ to denote that τ can occur in m and its occurrence leads to m′. A step
execution in m is a finite step sequence τ1 . . . τn over T such that there are markings
m1, . . . , mn with m →τ1 m1 →τ2 . . . →τn mn.</p>
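The occurrence rule m′ = m − •τ + τ• can be sketched with multiset markings (our illustration, markings and pre/postsets as `Counter`s over place names):

```python
from collections import Counter

def occurs(m, pre):
    """A step with preset `pre` can occur in marking m iff m >= pre."""
    return all(m[p] >= n for p, n in pre.items())

def fire(m, pre, post):
    """m' = m - pre + post for an enabled step."""
    assert occurs(m, pre)
    m2 = Counter(m)
    m2.subtract(pre)
    m2.update(post)
    return +m2            # drop zero-count places

m0 = Counter({"pI": 1})
m1 = fire(m0, Counter({"pI": 1}), Counter({"p1": 1, "p2": 1}))
```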
        <p>We use LPOs over T to represent single non-sequential runs of PT-nets, i.e. the labels
of an LPO represent transition occurrences. For a marked PT-net N = (P, T, F, W, m0)
an LPO lpo = (V, &lt;, l) over T is an LPO-run if each step-linearisation of lpo is a step
execution of N in m0. If an LPO-run lpo = (V, &lt;, l) occurs in a marking m, the resulting
marking m′ is defined by m′ = m − ∑v∈V •l(v) + ∑v∈V l(v)•. We denote the occurrence
of an LPO-run lpo by m →lpo m′.</p>
        <p>A Petri Net Transducer is a PT-net where each transition is augmented with an input
and an output symbol and additionally carries a weight drawn from a bisemiring.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Translation of Structures</title>
        <p>Definition 3 (Petri Net Transducer). A Petri Net Transducer (PNT) over a bisemiring
S = (S, ⊕, ⊗, ⊙, 0, 1) is defined as a tuple N = (P, T, F, W, pI , pF , Σ, σ, Δ, δ, ω), where
– (P, T, F, W, m0) with m0 = pI is a marked PT-net (called the underlying PT-net),
pI ∈ P is the source place satisfying •pI = ∅ and pF ∈ P is the sink place satisfying
pF• = ∅,
– Σ is a set of input symbols and σ : T → Σ ∪ {ε} is the input mapping,
– Δ is a set of output symbols and δ : T → Δ ∪ {ε} is the output mapping and
– ω : T → S is the weight function.</p>
        <p>A wLPO wlpo = (V, &lt;, l, ν) over T is a wLPO-run of N
– if the underlying LPO lpo = (V, &lt;, l) is an LPO-run of N with pI →lpo pF and
– if ν(v) = ω(l(v)) holds for all v ∈ V.
We denote by wLPO(N) the set of all wLPO-runs of N.</p>
        <p>A PNT can be used to translate a partial language into another partial language, relating
so-called input words to so-called output words. Input and output words are defined
as LPOs (V, &lt;, l) with a labelling function l : V → A ∪ {ε} for some input or output
alphabet A. Such LPOs we call ε-LPOs. For each ε-LPO (V, &lt;, l) we construct the
corresponding ε-free LPO (W, &lt;|W×W , l|W), W = V \ l−1(ε), by deleting ε-labelled
nodes together with their adjacent edges. Since partial orders are transitive, this does not
change the order between the remaining nodes.</p>
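The ε-removal step can be sketched as follows (our illustration, using the LPO-as-triple representation from above); because the order is transitively closed, deleting an ε node between a and b leaves the edge a &lt; b in place:

```python
def eps_free(lpo, eps="ε"):
    """Delete ε-labelled nodes; transitivity keeps the remaining order."""
    V, lt, l = lpo
    W = {v for v in V if l[v] != eps}
    return (W,
            {(u, v) for (u, v) in lt if u in W and v in W},
            {v: l[v] for v in W})

# a < e < b with e labelled ε; (a, b) is already present by transitivity.
lpo = ({1, 2, 3}, {(1, 2), (2, 3), (1, 3)}, {1: "a", 2: "ε", 3: "b"})
res = eps_free(lpo)
```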
        <p>Definition 4 (Input and Output Labels of Runs). Let N = (P, T, F,W, pI , pF , Σ , σ , Δ ,
δ , ω ) be a PNT and let wlpo = (V, &lt;, l, ν ) ∈ wLPO(N). The input label of wlpo is the
LPO σ (wlpo) corresponding to the ε -LPO (V, &lt;, σ ◦ l). The output label of wlpo is the
LPO δ (wlpo) corresponding to the ε -LPO (V, &lt;, δ ◦ l).</p>
        <p>For LPOs u over Σ and v over Δ , we denote by wLPO(N, u) the subset of all wLPOs
wlpo from wLPO(N) with input label σ (wlpo) = u, and by wLPO(N, u, v) the subset of
all wLPOs from wLPO(N, u) with output label δ (wlpo) = v.</p>
        <p>The input language of a PNT is the set of all input labels of its weighted LPO-runs.
Its elements are called input words. Output language and output words are defined
analogously.</p>
        <p>
          A PNT assigns weights to all pairs of LPOs u over Σ and v over Δ based on the
weights of its wLPO-runs (cf. [
          <xref ref-type="bibr" rid="ref13 ref16">16,13</xref>
          ]). Concerning the semantics, only the input-output
behaviour of PNTs is relevant. Since transitions also may have empty input and/or empty
output, there are always (infinitely) many PNTs having the same semantics. For practical
application, such PNTs are equivalent (cf. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]).
        </p>
        <p>For our example the PNT Nw generating the output word cp PNTs ATAED/ is shown
in the upper part of figure 4 on the following page. Note that the symbols themselves are
sequences of symbols and that the user's typo was corrected. This was possible since the
system had already performed several translations of the original input sequence. First the
sequence was translated into a set of sequences where every input symbol created
alternatives according to the keyboard layout. If you type an S on a QWERTZ keyboard it could
be that your intention was to type a D, since the two keys are neighbouring. Figure 5 on the
next page shows a PNT which realises this correction for the letter S. The next step was
translating from the many sequences of characters into sequences of known words. The
system could filter out many sequences since it knows only about some commands, files
and folders. Since there was no folder named ATAES but there exists a folder ATAED,
the shown sequence of words was generated. The PNT No in the lower part translates
between the two words depicted below it. Since there is a file PNTs and also a folder
with the same name, the weight for FILE is also less than 1.</p>
        <p>Note that the input and output languages of a PNT N are extension closed, since
wLPO(N) is extension closed. This allows No from figure 4 to also accept the two
step-linearisations cp PNTs ATAED/ and cp ATAED/ PNTs as input. Indeed these would
be translated into step-linearisations of the depicted output word with an additional edge
between PNTs and ATAED.</p>
        <p>We now introduce the central transducer composition operation of language
composition. It allows putting several transducers in a row to translate from the input of the
first one to the output of the last one, utilising the intermediate ones. We consider a fixed
concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1) and a PNT Na over S with input alphabet
Σa and output alphabet Δa and a PNT Nb over S with input alphabet Σb = Δa and output
alphabet Δb. We assume Na and Nb to be disjoint.</p>
        <p>We define a binary relation $ ⊆ Ta × Tb on the Cartesian product of the transitions
from Na and Nb by ta $ tb :⇔ δa(ta) = σb(tb) and say that ta and tb build a composable
pair. Any transition ta ∈ Ta with empty output (δa(ta) = ε) and any tb ∈ Tb with empty
input (σb(tb) = ε) is called non-invasive. Any invasive transition which is not part of a
composable pair is said to be futile.</p>
        <p>The construction corresponds to the parallel product of Na and Nb followed by merging
each composable pair of transitions (ta, tb) into a new transition with input symbol σa(ta),
output symbol δb(tb), weight ωa(ta) ⊙ ωb(tb) and connections •ta + •tb and ta• + tb•.
Moreover, we keep all non-invasive transitions of Na and Nb unchanged, and omit all
other transitions of Na or Nb, i.e. all futile transitions.</p>
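The merging step can be sketched as follows (our illustration; transition names, labels and the weight 0.81 are hypothetical, loosely echoing figure 4, where Nw emits the word cp and No consumes cp and emits the category CMD):

```python
def composable_pairs(Ta, Tb, delta_a, sigma_b):
    """ta $ tb :<=> delta_a(ta) == sigma_b(tb); ε-labelled transitions
    are non-invasive and kept unchanged, so they are not merged here."""
    return [(ta, tb) for ta in sorted(Ta) for tb in sorted(Tb)
            if delta_a[ta] == sigma_b[tb] and delta_a[ta] != "ε"]

def merge(pair, sigma_a, delta_b, omega_a, omega_b, odot):
    """Merged transition: input from Na, output from Nb, weight ωa ⊙ ωb."""
    ta, tb = pair
    return (sigma_a[ta], delta_b[tb], odot(omega_a[ta], omega_b[tb]))

sigma_a, delta_a, omega_a = {"t1": "ε"}, {"t1": "cp"}, {"t1": 0.81}
sigma_b, delta_b, omega_b = {"u1": "cp"}, {"u1": "CMD"}, {"u1": 1.0}
pairs = composable_pairs({"t1"}, {"u1"}, delta_a, sigma_b)
merged = merge(pairs[0], sigma_a, delta_b, omega_a, omega_b,
               lambda a, b: a * b)
```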
        <p>Figure 6 on the following page shows the result of the language composition of Nw
and No from figure 4. The nodes filled with dots establish the parallel product, the greyed
transitions are the result of merging transitions from composable pairs, and all white
transitions are non-invasive ones from No. The greyed nodes also represent Nw inside
the result.</p>
        <p>To provide a formal definition of language composition we set, for i ∈ {a, b},
– for the construction of the parallel product of Na and Nb
• Pk = {p◦I , p◦F } as the new source and sink places,
• Tk = {t◦I , t◦F } as the splitting and joining transitions.</p>
        <p>Definition 5 (Language Composition). Let Na = (Pa, Ta, Fa, Wa, pI , pF , Σa, σa, Δa, δa,
ωa) and Nb = (Pb, Tb, Fb, Wb, pI , pF , Σb, σb, Δb, δb, ωb) with Δa = Σb be two PNTs over
the same concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1). Then, using the notations from
above, the language composition Na ◦ Nb is the PNT N = (P, T, F, W, pI , pF , Σ, σ, Δ, δ,
ω) over S, where
– P = Pk ∪ Pa ∪ Pb and T = Tk ∪ T m ∪ Tan ∪ Tbn,
– F = Fk ∪ Fam ∪ Fbm ∪ Fan ∪ Fbn,
– W|Fk ≡ 1, W|Fam ≡ Wam, W|Fbm ≡ Wbm, W|Fan ≡ Wa, W|Fbn ≡ Wb,
– pI = p◦I and pF = p◦F ,
– Σ = Σa and Δ = Δb,
– σ|Tk ≡ ε, σ|T m ≡ σa ◦ πa, σ|Tan ≡ σa, σ|Tbn ≡ ε,
– δ|Tk ≡ ε, δ|T m ≡ δb ◦ πb, δ|Tan ≡ ε, δ|Tbn ≡ δb,
– ω|Tk ≡ 1, ω|T m ≡ ω m, ω|Tan ≡ ωa and ω|Tbn ≡ ωb.</p>
        <p>Since input and output languages of PNTs are extension closed it is possible that the
operation of language composition propagates dependencies from one PNT to the other.
Consider two PNTs Na and Nb as in the above definition, an output word u′ of Na, an
input word u of Nb with u′ ≤ u, and the output word v into which Nb translates u. Then
Nb also accepts u′ as input word and translates it into an output word v′ ≤ v.</p>
        <p>This effect can also be seen on the output word in figure 6, where the language
composition took over the order between PNTs and ATAED/ from Nw whereas the
corresponding transitions from No were unordered.</p>
        <p>
          The problem is discussed in more detail in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] where solutions based on separating
input and output processing are proposed. All those solutions have in common that the
processing of weights, i.e. merging of transitions, no longer occurs where the output is
produced. To overcome this we use another approach from [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] namely Hierarchical
Petri Net Transducers. We extend them by augmenting the transitions with weights
drawn from a bisemiring and introduce the additional concept of shared transitions.
        </p>
        <p>A Hierarchical Petri Net Transducer is a PNT together with a refinement operation to
substitute transitions by other PNTs. This induces sets of transitions which can be used
to eliminate some edges from the input and output labels of runs by restricting the order
relation. With this post processing transitions can be isolated from one another solving
the problem of propagating unwanted order during language composition operation.
Definition 6 (Hierarchical Petri Net Transducer). For a given PNT N0 = (P0, T0, F0,
W0, pI , pF , Σ0, σ0, Δ0, δ0, ω0) over a bisemiring S = (S, ⊕, ⊗, ⊙, 0, 1) a hierarchical
PNT (hPNT) over S is a 6-tuple H = (N0, N , ρ , Fc,Wc, Ts), where
– N0 is the initial PNT,
– N = {N1, . . . , Nk} is a family of refinement PNTs over S where N0, N1, . . . , Nk
are pairwise disjoint,
– ρ : T0 → {1, . . . , k} is a partial refinement function which is injective (ρ (t) =
ρ (t′) =⇒ t = t′) and associates transitions from the initial PNT with PNTs from N .
A transition t ∈ T0, if t ∈ Dom(ρ ) holds, is refined by the PNT Nρ(t). Any transition
t ∈ T0 for which t ∉ Dom(ρ ) holds is called simple. Any simple transition t must
have empty input and output σ0(t) = ε = δ0(t) as well as neutral weight ω0(t) = 1.
– Fc ⊆ (P0 × ⋃i=1,...,k Ti) ∪ (⋃i=1,...,k Ti × P0) is the crossing flow relation which allows to
connect transitions from refinement PNTs to places from the initial PNT,
– Wc is the corresponding crossing flow weight function analogous to PT-nets and
– Ts ⊆ T0 is the set of shared transitions which allow to relax the isolation of
transitions.</p>
        <p>The crossing components are initially empty and only needed for holding information
when computing the language composition of a PNT and an hPNT where the result is
again an hPNT (see definition 8). The same holds for the set of shared transitions.</p>
        <p>In figure 7 on the next page one can see the hPNT Ho where the transitions t1, t2, and
t3 are refined by the PNTs N1, N2, and N3, respectively.</p>
        <p>
          The refinement operation is defined in an obvious way. Any non-simple transition is
substituted with two new ε -transitions with neutral weight 1. One is called the down-going
refinement transition and only inherits the incoming arcs, and the other one is called the
up-going refinement transition and inherits only the outgoing arcs. The down-going
refinement transition is connected to the source place of the refinement PNT and the
up-going refinement transition to its sink place. For the formal procedure of refining
we refer to [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The result of refining is a PNT N called the interface of H and the set
of runs of N is per definition the set of runs of H. We use all other notations introduced
for PNTs as well as definition 4 also for hPNTs by means of their interfaces.
        </p>
        <p>The hPNT Ho in figure 7 has the same input and output words as the PNT No from
figure 4 since the interface of Ho is equivalent to No.</p>
        <p>But hierarchical PNTs have additional input and output labels where the order
relation is restricted. Dependencies between different refinement PNTs are excluded
except for shared transitions. Dependencies between the initial PNT and refinement
PNTs other than by the refinement itself and through shared transitions are excluded as
well. We focus on the output labels.</p>
        <p>Figure 8 shows an output word of Ho where the nodes are grouped to visualise the
allowed connections. The narrower lines represent the refinement paths.</p>
        <p>
          We use the notations from [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] where •T and T • are the sets of down-going resp.
up-going refinement transitions and set Tr = •T ∪ T • to be the set of all refinement transitions.
We use K = ρ (T0) as the image of ρ and K0 = K ∪ {0}. Then we have r = ⋃i∈K0 l−1(Ti ∪
Tr ∪ Ts) × l−1(Ti ∪ Tr ∪ Ts) as the set of all allowed relations.
        </p>
        <p>Definition 7 (Relaxed Output Label of Runs). Let H = (N0, N , ρ , Fc,Wc, Ts) be an hPNT
and let wlpo = (V, &lt;, l, ν ) ∈ wLPO(H). The relaxed output label of wlpo is the LPO
δr(wlpo) corresponding to the ε -LPO (V, &lt;|r+, δ ◦ l).</p>
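        <p>As a sketch of how the relaxed order &lt;|r+ can be obtained, the following hypothetical helper (names are ours) restricts a strict order, given as a set of pairs, to the allowed relation r, given as a predicate, and then takes the transitive closure:</p>

```python
def restrict_and_close(order_pairs, allowed):
    """Keep only the allowed pairs of a strict order, then take the
    transitive closure, i.e. the relation (order restricted to r)+."""
    rel = {(a, b) for (a, b) in order_pairs if allowed(a, b)}
    changed = True
    while changed:  # naive fixpoint iteration; fine for small orders
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel
```

        <p>Dropping a pair from the order may thus also drop all order induced through it, which is exactly the intended isolation of refinement PNTs from one another.</p>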
        <p>For LPOs u over Σ and v over Δ , we denote by wLPOO,r(H, u, v) the subset of all
wLPOs wlpo from wLPO(H, u) with relaxed output label δr(wlpo) = v.
We consider a fixed concurrent semiring S and a PNT Na over S with input alphabet Σa
and output alphabet Δa and an hPNT H over S with its interface Nb and input alphabet
Σb = Δa and output alphabet Δb. We assume Na and Nb to be disjoint.
</p>
        <p>Using the notations introduced before definition 5 as well as the notations from
above of definition 7 we additionally set
– the right partial language composition Na |◦ Nb to be the PNT N = (P, T, F,W, pI , pF ,
Σ , σ , Δ , δ , ω ) with
• P = Pb and T = T m ∪ Tbn,
• F = Fbm ∪ Fbn,
• W |Fbm ≡ Wbm, W |Fbn ≡ Wb,
• pI = pIb and pF = pFb ,
• Σ = Σa and Δ = Δb,
• σ |T m ≡ σa ◦ πa, σ |Tbn ≡ ε ,
• δ |T m ≡ δb ◦ πb, δ |Tbn ≡ δb,
• ω |T m ≡ ω m, ω |Tbn ≡ ωb and
– the remaining flow relation F (Na |◦ Nb) = Fam as the non-invasive flow relation of Na.
Definition 8 (Hierarchical Language Composition). Let Na = (Pa, Ta, Fa,Wa, pI , pF ,
Σa, σa, Δa, δa, ωa) be a PNT over the concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1), H be
an hPNT over S , and Nb be the interface of H with Δa = Σb. Then, using the notations
from above, the hierarchical language composition Na ◦ H is the hPNT H′ = (N0′, N ′,
ρ ′, Fc′,Wc′, Ts′) where
– N0′ = Na ◦ N0,
– N ′ = {N1′, . . . , Nk′} where Ni′ = Na |◦ Ni for every i = 1, . . . , k,
– ρ ′ = ρ ,
– Fc′ = Fc ∪ ⋃i=1,...,k F (Na |◦ Ni),
– Wc′ |Fc ≡ Wc, Wc′ |F(Na|◦Ni) ≡ Wa|F(Na|◦Ni) for every i = 1, . . . , k and
– Ts′ = Ts ∪ Tan.</p>
        <p>
The hierarchical language composition corresponds to the language composition of the
PNT and the interface of the hPNT. Technically, the above definition only states where
to put the new places and transitions into.</p>
        <p>Consider now the language composition Nw ◦ Ho. The output word of the result
would be the same as for Nw ◦ No depicted in figure 6, but the relaxed output word would
be the one from figure 8, where the order between PNTs and ATAED is restricted away.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Adding and Spreading the Weight</title>
      <p>The introduced framework allows for weighted relations between input and output, but
neither input nor output is itself a weighted object. After processing there is no way to see
which part of a run had what impact on the result. Since we want a more detailed view of
the weight, we want to relate an input to a weighted output. Therefore we propose an additional
definition of weighted output labels of runs for PNTs where the weight function is
recomputed during the deletion of ε -labelled nodes. For this recomputation a wLPO
is segmented by its labels. This also has influence on the definition of equivalence for
PNTs and results in technical restrictions for PNT-algorithms. Language composition of
several PNTs can propagate the segments from one end to the other. The application to
hPNTs leads to Segmenting Petri Net Transducers. These new ideas are introduced in
two steps in the next subsections.</p>
      <sec id="sec-4-1">
        <title>4.1 Segments of Weighted Structures</title>
        <p>A wLPO is segmented by means of its labels. Every segment contains only one
non-ε-labelled node, but ε-labelled nodes can be part of more than one segment. The weight of
a single segment is computed as its sp-weight.</p>
        <p>For a given ε-wLPO wlpo = (V, &lt;, l, ν) and for any subset V′ ⊆ V of nodes we set
– V′≠ε = {v ∈ V′ | l(v) ≠ ε} to be the set of all non-ε-labelled nodes from V′,
– VεV′ = V′ ∪ {v ∈ V \ V≠ε | ∃v′ ∈ V′ : v &lt; v′} to be the set of all nodes from V′ and all
their ε-labelled predecessors,
– VV′ = V′ ∪ {v ∈ V | ∃v′ ∈ V′ : v &lt; v′} to be the set of all nodes from V′ and all their
predecessors,
– poV′ = (V′, &lt;|V′×V′ ) to be the corresponding partial order on V′,
– Min(V′) = Min(poV′ ) to be the set of all minimal nodes from V′ and
– Max(V′) = Max(poV′ ) to be the set of all maximal nodes from V′.</p>
        <p>Definition 9 (Segments of ε-wLPOs). Let wlpo = (V, &lt;, l, ν) be an ε-wLPO. Let N be
a set. Then, using the notations from above, we compute the segments seg(wlpo) of wlpo
with the following procedure:
1. Initialise N with all nodes from V.
2. Let w = (N, &lt;|N×N , l|N , ν|N ) be the current ε-wLPO.
3. For every minimal non-ε-labelled node v ∈ Min(N≠ε ) put Nε{v} into seg(wlpo).
4. Set N to be N \ NMin(N≠ε ).
5. If N≠ε ≠ ∅ then continue with step 2.
6. Put N into seg(wlpo).</p>
        <p>For a non-ε-labelled node v ∈ V≠ε we say that the wLPO seg(v) = (Sv, &lt;|Sv×Sv , l|Sv , ν|Sv ),
where Sv is the one set from seg(wlpo) with Max(Sv) = {v}, is the segment of v. The set
from step 6 is called the ε-segment of wlpo.</p>
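        <p>The procedure of definition 9 can be sketched as follows. The encoding is ours: the strict order is assumed acyclic and given by direct predecessor sets, ε-labels are encoded as None, and the function returns one segment per non-ε-labelled node together with the final ε-segment.</p>

```python
def predecessors(order, v):
    """All transitive predecessors of node v; order maps a node to its
    set of direct predecessors."""
    seen, stack = set(), [v]
    while stack:
        for p in order[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def segments(nodes, order, label):
    """Segments of an eps-labelled partial order (definition 9)."""
    segs = {}          # non-eps node -> its segment (a set of nodes)
    N = set(nodes)
    while True:
        non_eps = {v for v in N if label[v] is not None}
        if not non_eps:
            return segs, N  # step 6: the remaining N is the eps-segment
        # step 3: minimal non-eps nodes have no non-eps node below them
        minimal = {v for v in non_eps
                   if not predecessors(order, v).intersection(non_eps)}
        for v in minimal:
            # the segment of v: v together with its eps-labelled
            # predecessors still present in N
            segs[v] = {v}.union(p for p in predecessors(order, v)
                                if p in N and label[p] is None)
        # step 4: remove the minimal non-eps nodes and all their predecessors
        drop = set(minimal)
        for v in minimal:
            drop.update(predecessors(order, v).intersection(N))
        N = N.difference(drop)
```

        <p>Note that an ε-labelled node lying below two concurrent minimal non-ε-labelled nodes ends up in both of their segments, as the definition allows.</p>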
        <p>For an ε-wLPO (V, &lt;, l, ν) we construct the corresponding ε-free wLPO (Vs, &lt;s, ls, νs),
where (Vs, &lt;s, ls) is the corresponding ε-free LPO, and the weight function is defined by
νs(v) = ωsp(seg(v)) for all nodes v ∈ Vs.</p>
        <p>Figure 9 on the following page depicts on the left an ε-wLPO which is a wLPO-run
of No projected onto its input symbols. Here the nodes are grouped to visualise the
segments of the wLPO. On the right the corresponding ε-free wLPO is shown.</p>
        <p>Note that for an ε-wLPO wlpo and its corresponding ε-free wLPO wlpos in general
ωsp(wlpo) ≠ ωsp(wlpos) holds (even if wlpo is series-parallel) since the weight of the
ε-segment is not considered on the right side – which can be seen also in figure 9 – but
the weight of some nodes is possibly taken into account multiple times.</p>
        <p>With the introduced notion of segments we extend the definitions for PNTs focusing on
the output side. Extension of the input side is subject to further research. Eventually we
introduce Segmenting Petri Net Transducers as hPNTs using the new concepts.</p>
        <p>While the runs of PNTs are wLPOs, the input and output words of PNTs are
unweighted LPOs. We add another output label resulting in weighted output words.
Definition 4 (add.) (Weighted Output Label of Runs). Let N = (P, T, F,W, pI , pF , Σ ,
σ , Δ , δ , ω ) be a PNT over a concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1) and let wlpo =
(V, &lt;, l, ν ) ∈ wLPO(N). The weighted output label of wlpo is the wLPO δω (wlpo)
corresponding to the ε -wLPO (V, &lt;, δ ◦ l, ν ).</p>
        <p>If different wLPO-runs result in the same output word, i.e. have isomorphic underlying
LPOs, then we build the sum of all their corresponding ε -free wLPOs.</p>
        <p>Definition 10 (Weighted Output of PNTs). Let N = (P, T, F,W, pI , pF , Σ , σ , Δ , δ , ω )
be a PNT over a concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1), u be an LPO over Σ and v
be an LPO over Δ . The weighted output NO,ω (u, v) is defined by</p>
        <p>NO,ω (u, v) = ⊕wlpo∈wLPO(N,u,v) δω (wlpo).</p>
        <p>We set NO,ω (u, v) = (∅, ∅, l, ν ) if wLPO(N, u, v) = ∅.</p>
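        <p>Assuming the runs' underlying LPOs have already been aligned node by node, the ⊕-sum of definition 10 combines the node weights pointwise. A sketch with ⊕ = max, as in a Viterbi-style semiring over [0, 1]; the names are ours:</p>

```python
from functools import reduce

def weighted_output(runs, oplus):
    """Pointwise oplus over the weight assignments of runs whose
    underlying LPOs are isomorphic (here: share the same node names)."""
    nodes = runs[0].keys()
    return {v: reduce(oplus, (run[v] for run in runs)) for v in nodes}
```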
        <p>
This definition is consistent with the definition of the output weight of PNTs – the weight which is
assigned to an input-output-pair (see definition 5 of [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]).
        </p>
        <p>The weighted output language of a PNT is the union of all its weighted outputs and
its elements are called weighted output words.</p>
        <p>
          The additional semantics of PNTs should also be considered in the concept of
equivalence (see definition 6 of [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]) to reflect the notion of segments. This would have
a severe impact on algorithms for PNTs. Any modification of weights, input or output
symbols would have to respect segments. Also ε -segments should be observed since it is
not obvious how they interfere with the sequential product of PNTs.
        </p>
        <p>Language composition of PNTs can propagate segments from one PNT to another.
Consider a segment of the upper PNT from figure 10. It originates from one transition
carrying a non-ε output symbol and a (possibly empty) set of transitions with ε output
symbol. All transitions can have ε or non-ε input symbols. During language composition
with the lower PNT from figure 10, transitions with non-ε input symbols can get merged
not changing their output symbols, or transitions from the other PNT having ε output
symbols can be added. This way transitions from the lower PNT join the segments
induced by the upper one. Additionally, their weight is incorporated into the segments.
</p>
        <p>But segments can also be reduced or even deleted when transitions with non-ε input
symbols are not carried over to the resulting PNT. Nevertheless, segments from one PNT
induce a segmentation of another PNT through language composition and in general
this segmentation will differ from the PNT's own one. This even raises the bar for
algorithms since segments are subject to change depending on the PNT’s environment.</p>
        <p>Now we are able to translate an input signal into a weighted output structure. To
address the problem of unwanted propagated order, we combine the notion of weighted
output with hPNTs. Therefore the algorithm from definition 9 runs on relaxed output
labels and is adapted in step 3 to only consider nodes originating from the same set
of transitions or from the set of shared transitions. The modified algorithm leads to
hierarchical correspondence of ε -wLPOs and ε -free wLPOs.</p>
        <p>Definition 7 (add.) (Relaxed Weighted Output Label of Runs). Let H = (N0, N , ρ ,
Fc,Wc, Ts) be an hPNT over a concurrent semiring S and let wlpo = (V, &lt;, l, ν ) ∈
wLPO(H). The relaxed weighted output label of wlpo is the wLPO δr,ω (wlpo)
hierarchically corresponding to the ε -wLPO (V, &lt;|r+, δ ◦ l, ν ).</p>
        <p>This time we build the sum of all hierarchically corresponding ε -free wLPOs of
wLPO-runs leading to the same relaxed output word.</p>
        <p>Definition 11 (Relaxed Weighted Output of sPNTs). Let H = (N0, N , ρ , Fc,Wc, Ts)
be an hPNT over a concurrent semiring S = (S, ⊕, ⊗, ⊙, 0, 1), u be an LPO over Σ and
v be an LPO over Δ . The relaxed weighted output HO,r,ω (u, v) is defined by
HO,r,ω (u, v) = ⊕wlpo∈wLPOO,r(H,u,v) δr,ω (wlpo).</p>
        <p>We set HO,r,ω (u, v) = (∅, ∅, l, ν ) if wLPOO,r(H, u, v) = ∅.</p>
        <p>Finally, we call hPNTs with shared transitions and the above extensions segmenting Petri
net transducers or sPNTs for short.</p>
        <p>A cascade of PNTs with an sPNT on top allows for a seamless propagation of
information in both directions. An additional PNT on top representing semantic expectation
leads to adjustments through the hierarchy down to the lowest level priming the whole
recognition network. An incoming signal leads to adjustments up to the highest level
resulting in weighted semantic structures directly derived from the input.</p>
        <p>
          Since sPNTs still use terminal languages (cf. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]) there is no added expressiveness
compared to PNTs. Though a recursive refinement operation as introduced in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] adds
expressiveness, it prohibits a closed representation of the interface, thus making analysis
much harder. When using clean transducers (see [
          <xref ref-type="bibr" rid="ref11 ref16">16,11</xref>
          ]) the closedness regarding
composition operations still holds. It is our belief that sPNTs have greater expressiveness
concerning weighted output (after language composition) than PNTs, since weights can
now be multiplied without taking over dependencies.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Outlook</title>
      <p>We introduced segmenting Petri net transducers for the translation of sequential input
into non-sequential weighted output and the segmentation of the input on the basis of
semantic structures. We showed how to compute segments on wLPOs and how segments
are propagated during language composition.</p>
      <p>State-of-the-art speech dialogue systems decide for an interpretation of an input
signal based on a single overall weight assigned to the whole translation process.
Segmenting allows using parts of the signal which are recognised well and initiating an
enquiry call for the other parts. The notion of segments and their propagation also gives a
nice solution to the problem of alignment from which tree transducers, DAG transducers,
and other rewriting systems suffer.</p>
      <p>The definition of equivalence for sPNTs and the combination with weighted input
are subject to further research as well as analysis of algebraic properties of sPNTs and
concrete technical restrictions for PNT-algorithms.</p>
      <p>Most open questions arise from the fact that a single run is used to represent input
and output. Another idea is to use a pair of loosely synced runs – one for input and one
for output. This way propagation of dependencies could be completely circumvented in
both directions. For instance, the PNTs generating the semantic structure in our example
force a prefix notation on the typed command and neither postfix nor infix notation
would be accepted. However, such restrictions should be subject to the syntactic level –
another PNT within the cascade.</p>
      <p>
        Also notions from [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], namely the dependency of the flow weight function on
the marking, could be used to eliminate the need for a recursive refinement operation.
Regarding the mentioned real-time capabilities we plan to adapt on-the-fly composition
algorithms for FSTs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to precomputed unfoldings of PNTs.
      </p>
      <p>Future work will also deal with the computation of confidence scores, rather than
weights, for the nodes of the perceived LPOs. To this end we want to create a hierarchy
of semantic units leading to a more accurate computation of weights. These can be used
to assess the reliability of semantic units of the input thus enhancing decisions made by
the behaviour controller of cognitive systems.</p>
      <p>Acknowledgements This work has been developed in the project Universal Cognitive
User Interface (UCUI) which is partly funded by the German Federal Ministry of
Education and Research (BMBF) within the research program IKT2020 (grant #16ES0297).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Chulan</surname>
            ,
            <given-names>U.A.U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sulaiman</surname>
            ,
            <given-names>M.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mahmod</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selamat</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamid</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>Organizing the semantics of text with the concept relational tree</article-title>
          .
          <source>IJCSNS</source>
          <volume>8</volume>
          (
          <issue>9</issue>
          ),
          <volume>236</volume>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Droste</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuich</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vogler</surname>
          </string-name>
          , H. (eds.):
          <source>Handbook of Weighted Automata. Monographs in Theoretical Computer Science</source>
          , Springer (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Duckhorn</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
          </string-name>
          , M., Meyer, W.,
          <string-name>
            <surname>Jokisch</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tschöpe</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Towards an autarkic embedded cognitive user interface</article-title>
          .
          <source>In: Interspeech</source>
          <year>2017</year>
          , 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden,
          <source>August 20-24</source>
          ,
          <year>2017</year>
          . ISCA (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Duckhorn</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>A new epsilon filter for efficient composition of weighted finite-state transducers</article-title>
          . In: Cosi,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.D.</given-names>
            ,
            <surname>Fabbrizio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.D.</given-names>
            ,
            <surname>Pieraccini</surname>
          </string-name>
          ,
          <string-name>
            <surname>R</surname>
          </string-name>
          . (eds.)
          <source>Interspeech</source>
          <year>2011</year>
          , 12th Annual Conference of the International Speech Communication Association, Florence, Italy,
          <source>August 27-31</source>
          ,
          <year>2011</year>
          . ISCA (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Geßler</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          : Kognitive Gerätesteuerung.
          <source>Master's thesis</source>
          , Brandenburgische Technische Universität Cottbus-Senftenberg,
          <source>Germany</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Hack</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Petri net languages</article-title>
          .
          <source>Tech. Rep. Memo</source>
          <volume>124</volume>
          , Computation Structures Group, Massachusetts Institute of Technology (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Haykin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Cognitive Dynamic Systems: Perception-action Cycle, Radar and Radio</article-title>
          . Cambridge University Press, New York, NY, USA (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eichner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Analysis of verbal and nonverbal acoustic signals with the Dresden UASR system</article-title>
          .
          <source>In: Verbal and Nonverbal Communication Behaviours. LNAI</source>
          , vol.
          <volume>4775</volume>
          , pp.
          <fpage>200</fpage>
          -
          <lpage>218</lpage>
          . Springer (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kölbl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Römer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirsching</surname>
          </string-name>
          , G.:
          <article-title>Semantische Dialogmodellierung mit gewichteten Merkmal-Werte-Relationen</article-title>
          . In: Hoffmann,
          <string-name>
            <surname>R</surname>
          </string-name>
          . (ed.)
          <source>Proceedings of ”Elektronische Sprachsignalverarbeitung (ESSV)”. Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>53</volume>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>32</lpage>
          . TUDpress,
          <string-name>
            <surname>Dresden</surname>
          </string-name>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kölbl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirsching</surname>
          </string-name>
          , G.:
          <article-title>Ein petrinetz-modell zur informationsübertragung per dialog</article-title>
          . In: Lohmann,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <surname>K</surname>
          </string-name>
          . (eds.) 15th German Workshop on Algorithms and
          <article-title>Tools for Petri Nets, Algorithmen und Werkzeuge für Petrinetze</article-title>
          ,
          <source>AWPN</source>
          <year>2008</year>
          , Rostock, Germany,
          <source>September 26-27</source>
          ,
          <year>2008</year>
          .
          <source>Proceedings. CEUR Workshop Proceedings</source>
          , vol.
          <volume>380</volume>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>24</lpage>
          . CEUR-WS.org (
          <year>2008</year>
          ), http://ceur-ws.org/Vol-380/paper03.pdf
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Römer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Little Drop of Mulligatawny Soup, Miss Sophie? Automatic Speech Understanding provided by Petri Nets</article-title>
          . In: Trouvain,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Möbius</surname>
          </string-name>
          ,
          <string-name>
            <surname>B</surname>
          </string-name>
          . (eds.) Proceedings of ”
          <article-title>Elektronische Sprachsignalverarbeitung (ESSV)”</article-title>
          .
          <source>Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>86</volume>
          , pp.
          <fpage>122</fpage>
          -
          <lpage>129</lpage>
          . TUDpress,
          <string-name>
            <surname>Dresden</surname>
          </string-name>
          (Mar
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andreas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bauer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hermann</surname>
            ,
            <given-names>K.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Knight</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Semantics-based machine translation with hyperedge replacement grammars</article-title>
          .
          <source>In: COLING</source>
          . pp.
          <fpage>1359</fpage>
          -
          <lpage>1376</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Modeling Quantitative Aspects of Concurrent Systems Using Weighted Petri Net Transducers</article-title>
          . In:
          <string-name>
            <surname>Devillers</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valmari</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (eds.)
          <source>Application and Theory of Petri Nets and Concurrency - 36th International Conference, PETRI NETS</source>
          <year>2015</year>
          , Brussels, Belgium, June 21-26,
          <year>2015</year>
          ,
          <source>Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>9115</volume>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>76</lpage>
          . Springer (
          <year>2015</year>
          ), http://dx.doi.org/10.1007/978-3-319-19488-2_3
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Petri net transducers in semantic dialogue modelling</article-title>
          . In: Wolff, M. (ed.)
          <source>Proceedings of ”Elektronische Sprachsignalverarbeitung (ESSV)”. Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>64</volume>
          , pp.
          <fpage>286</fpage>
          -
          <lpage>297</lpage>
          . TUDpress, Dresden
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Realizing the Translation of Utterances into Meanings by Petri Net Transducers</article-title>
          . In:
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (ed.)
          <source>Proceedings of ”Elektronische Sprachsignalverarbeitung (ESSV)”. Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>65</volume>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>110</lpage>
          . TUDpress, Dresden
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirsching</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>On Weighted Petri Net Transducers</article-title>
          . In:
          <string-name>
            <surname>Ciardo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kindler</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (eds.)
          <source>Application and Theory of Petri Nets and Concurrency - 35th International Conference, PETRI NETS</source>
          <year>2014</year>
          , Tunis, Tunisia, June 23-27,
          <year>2014</year>
          .
          <source>Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>8489</volume>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>252</lpage>
          . Springer (
          <year>2014</year>
          ), http://dx.doi.org/10.1007/978-3-319-07734-5_13
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Pust</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hermjakob</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Knight</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>May</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Parsing English into Abstract Meaning Representation using syntax-based machine translation</article-title>
          .
          <source>Training</source>
          <volume>10</volume>
          ,
          <fpage>218</fpage>
          -
          <lpage>021</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Quernheim</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Knight</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Towards probabilistic acceptors and transducers for feature structures</article-title>
          .
          <source>In: Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation</source>
          . pp.
          <fpage>76</fpage>
          -
          <lpage>85</lpage>
          . Association for Computational Linguistics (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Ramachandran</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ratnaparkhi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Belief tracking with stacked relational trees</article-title>
          .
          <source>In: 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue</source>
          . p.
          <fpage>68</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Ramachandran</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yeh</surname>
            ,
            <given-names>P.Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jarrold</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Douglas</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ratnaparkhi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Provine</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Emfield</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>An end-to-end dialog system for tv program discovery</article-title>
          .
          <source>In: Spoken Language Technology Workshop (SLT)</source>
          ,
          <year>2014</year>
          IEEE. pp.
          <fpage>602</fpage>
          -
          <lpage>607</lpage>
          . IEEE (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Römer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirsching</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Ein Beitrag zu den natur- und geisteswissenschaftlichen Grundlagen kognitiver Systeme</article-title>
          . In:
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (ed.)
          <source>Proceedings of ”Elektronische Sprachsignalverarbeitung (ESSV)”. Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>65</volume>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>102</lpage>
          . TUDpress, Dresden
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Valk</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Self-modifying nets, a natural extension of Petri nets</article-title>
          .
          <source>Automata, Languages and Programming</source>
          pp.
          <fpage>464</fpage>
          -
          <lpage>476</lpage>
          (
          <year>1978</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Wirsching</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kölbl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Römer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Semantic dialogue modeling</article-title>
          . In:
          <string-name>
            <surname>Esposito</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Esposito</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vinciarelli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>V.C.</given-names>
          </string-name>
          (eds.)
          <source>COST 2102 Training School. Lecture Notes in Computer Science</source>
          , vol.
          <volume>7403</volume>
          , pp.
          <fpage>104</fpage>
          -
          <lpage>113</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>UASR: Unified Approach to Signal Synthesis and Recognition (2000-...)</article-title>
          . Online: https://www.b-tu.de/en/fg-kommunikationstechnik/research/projects/uasr, last visited: 31.03.2017
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tschöpe</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Römer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirsching</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Subsymbol-Symbol-Transduktoren</article-title>
          . In:
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (ed.)
          <source>Proceedings of ”Elektronische Sprachsignalverarbeitung (ESSV)”. Studientexte zur Sprachkommunikation</source>
          , vol.
          <volume>65</volume>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>204</lpage>
          . TUDpress, Dresden
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>