<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Quantum Machine Learning: Benefits and Practical Examples</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Frank Phillipson</string-name>
          <email>frank.phillipson@tno.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>TNO</institution>
          ,
          <addr-line>Anna van Buerenplein 1, 2595 DA Den Haag</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <fpage>51</fpage>
      <lpage>56</lpage>
      <abstract>
        <p>A quantum computer that is useful in practice is expected to be developed in the next few years. An important application is expected to be machine learning, where benefits are expected in run time, capacity and learning efficiency. In this paper, these benefits are presented and for each benefit an example application is given: a quantum hybrid Helmholtz machine uses quantum sampling to improve run time, a quantum Hopfield neural network shows an improved capacity, and a neural network based on a variational quantum circuit is expected to deliver a higher learning efficiency.</p>
      </abstract>
      <kwd-group>
        <kwd>Quantum Machine Learning</kwd>
        <kwd>Quantum Computing</kwd>
        <kwd>Near Future Quantum Applications</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Quantum computers make use of quantum-mechanical phenomena, such as
superposition and entanglement, to perform operations on data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Where classical computers
require the data to be encoded into binary digits (bits), each of which is always in one
of two definite states (0 or 1), quantum computation uses quantum bits, which can be
in superpositions of states. These computers would theoretically be able to solve certain
problems much more quickly than any classical computer using even the best
currently known algorithms. Examples are integer factorization using Shor's algorithm and
the simulation of quantum many-body systems. This benefit is also called ‘quantum
supremacy’ [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which has only recently been claimed for the first time [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        There are two different quantum computing paradigms. The first is gate-based
quantum computing, which is closely related to classical digital computers. Making
gate-based quantum computers is hard, and state-of-the-art devices therefore typically have
only a few qubits. The second paradigm is quantum annealing, based on the work of
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. A practically usable quantum computer is expected to be developed in the next few
years. It is expected that in less than ten years quantum computers will begin to outperform everyday
computers, leading to breakthroughs in artificial intelligence, the discovery of new
pharmaceuticals and beyond. Currently, various parties are developing quantum chips,
which form the basis of the quantum computer, such as Google, IBM, Intel, Rigetti,
QuTech, D-Wave and IonQ [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The size of these computers is limited, with the
state-of-the-art being around 70 qubits for gate-based quantum computers and 5000 qubits for
quantum annealers. In the meantime, progress is being made on algorithms that can be
executed on these quantum computers and on the software (stack) to enable the
execution of quantum algorithms on quantum hardware.
      </p>
      <p>One of the promising candidates to show a useful quantum advantage on near-term
devices, so-called noisy intermediate-scale quantum (NISQ) devices, is believed to be
machine learning. Different types of machine learning exist, most of them boiling down
to supplying data to a computer, which then learns to produce a required outcome. The
more data is given, the closer the outcome will be to the actual solution, or the higher
the probability will be that the ‘correct solution’ is found. Even though machine
learning using classical computers has solved numerous problems and improved
approximate solutions of many others, it also has its limitations. Training machine learning
models requires many data samples, and models may require a long time to be trained
or to produce correct answers.</p>
      <p>In this short paper we sketch some near-future machine learning applications using
gate-based quantum computers. In Section 2 we give an introduction to quantum
machine learning and its expected benefits. In Section 3 a few example applications are
given from our own research.</p>
    </sec>
    <sec id="sec-2">
      <title>Quantum</title>
    </sec>
    <sec id="sec-3">
      <title>Machine Learning</title>
      <p>
        Machine learning is a potentially interesting application for quantum computing [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Current classical approaches require huge computational resources and in many cases training
takes a lot of time. In machine learning, the machine learns from experience, using data
examples, without a user or programmer giving it explicit instructions; the machine
builds its own logic. Looking at classical machine learning, one can distinguish various
types:
      </p>
      <list list-type="bullet">
        <list-item>
          <p>Supervised learning – here labelled data is used, e.g. for classification problems. This means that the data that is used for learning contains information about the class it belongs to.</p>
        </list-item>
        <list-item>
          <p>Unsupervised learning – here unlabelled data is used for, e.g., clustering problems. Here data points have to be assigned to a certain cluster of similar points, without prior information.</p>
        </list-item>
        <list-item>
          <p>Semi-supervised learning – here partially labelled data is available and models are investigated that improve classification by using labelled data together with additional unlabelled data. Many of these models use generative, probabilistic methods.</p>
        </list-item>
        <list-item>
          <p>Reinforcement learning – here no labelled data is available, but a method is used to quantify the machine’s performance in the form of rewards. The machine tries many different options and learns which actions are best based on the feedback (rewards) it receives.</p>
        </list-item>
      </list>
      <p>
        If we think about where quantum computing and machine learning meet, we could
think of the input and/or the processing part being classical or quantum [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], giving four
combinations. If both are classical, we have classical machine learning. Classical
machine learning can be used to support quantum computing, for example in quantum
error correction. Quantum processes can also be used as an inspiration for classical
algorithms, such as tensor networks, simulated annealing and optimization methods. If the
input is a quantum state and the computing is classical, the machine learning routine is
used to translate quantum information into classical information. If both the input and
processing are quantum, this would be real quantum machine learning; however, only a
few results in this direction have been published yet. In most quantum machine learning
research the focus is on the fourth case, where the input contains classical
information and processing is quantum.
      </p>
      <p>
        One of the main benefits of quantum computers is the potential improvement in
computational speed. Depending on the type of problem and algorithm, quantum
algorithms can have a polynomial or super-polynomial (exponential) speed-up compared to
classical algorithms. However, other benefits are expected to be more relevant in the near
future. Quantum computers could possibly learn from less data, deal with more
complex structures or cope better with noisy data. In short, the three main
benefits of quantum machine learning are (interpretation based on [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]):
      </p>
      <list list-type="bullet">
        <list-item>
          <p>Improvements in run time: obtaining faster results;</p>
        </list-item>
        <list-item>
          <p>Learning capacity improvements: increase of the capacity of associative or content-addressable memories;</p>
        </list-item>
        <list-item>
          <p>Learning efficiency improvements: less training information or simpler models are needed to produce the same results, or more complex relations can be learned from the same data.</p>
        </list-item>
      </list>
      <p>For each of these benefits we show some examples in Section 3.</p>
      <p>
        The improvement in run time can be realized in various ways. Machine learning
consists mainly of optimization tasks that might be done faster by quantum annealers,
like the D-Wave machine. Another way of getting a speed-up is the use of quantum
sampling in generative models. Sampling is one of the tasks on which a quantum
computer is expected to outperform classical computers already in the near future. Among
the first algorithms expected to outperform classical algorithms are hybrid
quantum-classical algorithms. These hybrid algorithms perform a part of the algorithm
classically and a part on a quantum machine, exploiting specific benefits such as, for
example, efficient sampling. The last way to realize the speed-up is via specific quantum
machine learning algorithms using amplitude amplification and amplitude encoding.
Amplitude amplification is a technique in quantum computing that is known to give a
quadratic speed-up in comparison with classical approaches. In amplitude encoding, the
amplitudes of qubits are used to store data vectors efficiently, enabling an exponential
speed-up. However, this exponential speed-up is not obvious, and the assumptions made to
arrive at this theoretical speed-up come with huge technological challenges, see also
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Examples</title>
      <p>In the previous section three main benefits were given for quantum machine learning:
improvements in run time, capacity and learning efficiency. For each of these categories we
give an example based on our own research.</p>
      <sec id="sec-4-1">
        <title>Improving Runtime – Quantum Hybrid Helmholtz Machine</title>
        <p>
          As mentioned before, hybrid algorithms perform a part of the algorithm classically and
a part on a quantum machine, exploiting specific benefits such as, for example, efficient
sampling. Generative modelling is a task where such hybrid algorithms offer a solution
to challenges encountered with classical computing. These generative models are used,
for example, to learn probability distributions over (high-dimensional) data sets. By
increasing the depth of a generative model, the generalization capability grows and
more abstract representations of the data can be found, however, at the cost of
intractable inference and training. Both inference and training rely on variational
approximations and Markov Chain Monte Carlo sampling, both of which are computationally expensive. As
quantum computers allow for efficient sampling, the expensive sampling subroutine
can be run on a quantum computer, thus reducing the computational complexity of
generative models significantly. This can be used in the implementation of a hybrid
Helmholtz machine, a special type of generative model, on a gate-based quantum computer
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] or on an annealing device [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. A Helmholtz machine is an artificial neural network
consisting of a bottom-up recognition network and a top-down generative network. The
recognition network takes data and produces probability distributions over it, while the
generative network generates representations of the data and the hidden variables.
        </p>
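        <p>To make the wake-sleep structure of a Helmholtz machine concrete, the following purely classical numpy sketch (an illustration with hypothetical layer sizes and learning rate, not the quantum circuit of [10]) trains a tiny binary Helmholtz machine. The sampling of hidden and visible states in the wake and sleep phases is exactly the kind of subroutine that the hybrid approach delegates to a quantum device.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (p > rng.random(p.shape)).astype(float)  # Bernoulli sample

V, H, lr = 4, 3, 0.05                        # visible nodes, hidden nodes, learning rate
R, r_bias = np.zeros((H, V)), np.zeros(H)    # recognition network: x -> h (bottom-up)
G, g_bias = np.zeros((V, H)), np.zeros(V)    # generative network:  h -> x (top-down)
b_gen = np.zeros(H)                          # generative prior over the hidden layer

data = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1]], dtype=float)

for epoch in range(200):
    for x in data:
        # Wake phase: recognize the data, then fit the generative model to the sample.
        h = sample(sigmoid(R @ x + r_bias))
        b_gen += lr * (h - sigmoid(b_gen))
        x_pred = sigmoid(G @ h + g_bias)
        G += lr * np.outer(x - x_pred, h)
        g_bias += lr * (x - x_pred)
        # Sleep phase: dream from the generative model, then fit the recognition network.
        h_dream = sample(sigmoid(b_gen))
        x_dream = sample(sigmoid(G @ h_dream + g_bias))
        h_pred = sigmoid(R @ x_dream + r_bias)
        R += lr * np.outer(h_dream - h_pred, x_dream)
        r_bias += lr * (h_dream - h_pred)

# Generate a few samples from the trained top-down (generative) network.
for _ in range(3):
    h = sample(sigmoid(b_gen))
    print(sample(sigmoid(G @ h + g_bias)))
        </preformat>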
        <p>
          In [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], a parameterized shallow quantum circuit implementation of a hybrid
Helmholtz machine is given. This circuit captures aspects of Bayesian networks and
Helmholtz machines and is trained by a gradient-free optimizer under an adaptation of
the Wake-Sleep algorithm. The implementation of this circuit is done on the Quantum
Inspire simulator (www.quantum-inspire.com) and has the potential to be run on a
few-qubit quantum device. The proposed hybrid Helmholtz machine was tested for a small
problem on the BAS(2,2) data set [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], consisting of two-by-two pixel images, of which
the valid patterns are those that contain only bars or only stripes. For both the hybrid
and the classical Helmholtz machine four visible nodes and three hidden nodes are used.
The Powell optimization method that was used gave a promising set of parameters, for which
the hybrid Helmholtz machine outperformed its classical counterpart.
        </p>
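        <p>The BAS(2,2) data set mentioned above is small enough to enumerate explicitly. The sketch below (our own illustration, not the code of [10]) lists the valid bars-and-stripes patterns among all sixteen two-by-two binary images; a generative model trained on this set should assign high probability to exactly these six patterns.</p>
        <preformat>
import numpy as np
from itertools import product

def is_bars_or_stripes(img):
    """True if a 2x2 binary image has only uniform rows (stripes) or only uniform columns (bars)."""
    rows_uniform = all(len(set(row)) == 1 for row in img)
    cols_uniform = all(len(set(col)) == 1 for col in img.T)
    return rows_uniform or cols_uniform

# Enumerate all 16 two-by-two binary images and keep the valid BAS(2,2) patterns.
valid = [bits for bits in product([0, 1], repeat=4)
         if is_bars_or_stripes(np.array(bits).reshape(2, 2))]

print(len(valid), "valid patterns out of 16")   # 6 valid patterns
for bits in valid:
    print(np.array(bits).reshape(2, 2))
        </preformat>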
      </sec>
      <sec id="sec-4-2">
        <title>Improving Capacity – Quantum Hopfield Neural Network</title>
        <p>
          Neural networks are a subclass of machine learning algorithms, consisting of nodes that
can be connected in various configurations and interact with each other via weighted
edges. As a special case, Hopfield neural networks (HNNs) consist of a single layer of
nodes, all connected with one another via symmetric edges and without
self-connections. Due to this connectivity, HNNs can be used as associative memories, meaning
that they can store a set of patterns and associate noisy inputs with the closest stored
pattern. Memory patterns can be imprinted onto the network by the use of training
schemes, for instance Hebbian learning. Here, the weights are calculated directly from
all memory patterns, so that only a low computational effort is required. It is
possible to store an exponential number of stable attractors in an HNN if the set of attractors
is predetermined and fixed. In general, however, fewer patterns can be stored if they are
randomly selected, resulting in a very limited storage capacity of HNNs. For Hebbian
learning the storage capacity of an HNN with n nodes is asymptotically n/(4 log n) patterns
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Translating HNNs to counterparts in the quantum domain is
assumed to offer storage capacities beyond the reach of classical networks [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. For
example, in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] a quantum HNN is proposed that could offer an exponential capacity
when qutrits are used.
        </p>
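        <p>To illustrate the classical baseline, the numpy sketch below (illustrative only, not the quantum construction of [16]) imprints a few random patterns onto a Hopfield network with Hebbian learning and retrieves one of them from a noisy probe; the expression n/(4 log n) from [13] bounds how many such random patterns can be stored reliably.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(1)
n = 32                                           # number of nodes
patterns = rng.choice([-1, 1], size=(3, n))      # random +/-1 memory patterns

# Hebbian learning: the weights follow directly from the patterns, no iterative training.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                           # no self-connections

def retrieve(probe, steps=20):
    """Synchronously update the state until it settles, ideally on the closest stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Flip a few bits of a stored pattern and check whether it is recovered.
noisy = patterns[0].copy()
noisy[:4] *= -1
print("pattern recovered:", np.array_equal(retrieve(noisy), patterns[0]))

# Asymptotic Hebbian storage capacity from [13]: roughly n / (4 log n) patterns.
print("capacity estimate:", n / (4 * np.log(n)))
        </preformat>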
        <p>
          In [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] a quantum feed-forward representation of an HNN is presented, where the
unitaries are trained on a training set. The performance is compared to that of a classical
HNN using Hebbian learning, using three error rates of decreasing strictness. The ‘strict error
rate’ only considers the patterns the HNN should memorize and equals one if at least
one bit of any memory pattern cannot be retrieved. The ‘message error rate’ is less strict
and equals the fraction of the probe vectors from which the memory cannot be
recovered exactly. The ‘bit error rate’ also uses probe vectors, but considers separately each bit
that cannot be retrieved correctly. Using a quantum computer simulator, only small
patterns can be tested. These tests show that all the error rates of the quantum model
are smaller than those of the classical model, indicating a higher capacity.
        </p>
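        <p>The three error rates can be written down almost directly from their definitions. The sketch below gives our own reading of them (the retrieve argument is a hypothetical helper that stands for whichever associative memory, classical or quantum-simulated, is being evaluated); it could, for instance, be applied to the Hebbian retrieve function sketched above.</p>
        <preformat>
import numpy as np

def strict_error_rate(memories, retrieve):
    """Equals 1 if at least one bit of any memory pattern cannot be retrieved, else 0."""
    return float(any(not np.array_equal(retrieve(m), m) for m in memories))

def message_error_rate(probes, targets, retrieve):
    """Fraction of probe vectors from which the memory is not recovered exactly."""
    errors = [not np.array_equal(retrieve(p), t) for p, t in zip(probes, targets)]
    return float(np.mean(errors))

def bit_error_rate(probes, targets, retrieve):
    """Fraction of individual bits, over all probes, that are not retrieved correctly."""
    wrong = [np.mean(retrieve(p) != t) for p, t in zip(probes, targets)]
    return float(np.mean(wrong))
        </preformat>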
      </sec>
      <sec id="sec-4-3">
        <title>Improving Learning – Variational Quantum Circuit for Machine Learning</title>
        <p>A recently very popular approach for finding and implementing new hybrid QML algorithms
is the so-called variational quantum circuit (VQC), which consists of a number of quantum
gates with parameters that are optimized. These quantum circuits can be used to
evaluate some cost function. To optimize the cost function, a variety of classical strategies
can be used, which in turn may again employ a quantum circuit.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] a VQC is proposed with a sequence of unitary gate operations depending on
continuous variables, used for binary classification. In [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] this framework is presented
in more detail, and a more efficient representation of data is integrated, which is
important for implementations on real near-term quantum devices. In this neural network,
a classical input bit string of length n is translated to the initial state of a quantum
register (qubit encoding) with n+1 qubits, where the last qubit is regarded as a readout qubit
used to estimate the classification of the sample. A set of parameter-dependent (θ) unitaries
acts on all qubits sequentially, as shown in Fig. 1. The loss function, which depends on the
parameters θ, then has to be defined and minimized. For this, the parameters are
updated using a stochastic gradient descent method.
        </p>
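        <p>As an illustration of the training loop just described, the sketch below simulates a tiny circuit with plain numpy. It is our own illustrative construction under stated assumptions: the gate choice (RY rotations plus a chain of CNOTs), the parity toy task and the finite-difference gradients are placeholders for the example, not the exact circuit or update rule of [17, 18]. It does show the described structure: a classical bit string is loaded as a basis state of n+1 qubits, parameter-dependent unitaries act on all qubits including the readout qubit, and the parameters θ are updated by stochastic gradient descent on a squared loss of the readout expectation value.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
n = 3                         # data qubits; one extra readout qubit
N = n + 1                     # total number of qubits
LAYERS = 2                    # number of parameterized layers

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(psi, gate, q):
    """Apply a 2x2 gate to qubit q of the state tensor of shape (2,)*N."""
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q)

def apply_cnot(psi, control, target):
    """Apply CNOT by flipping the target axis on the control=1 slice."""
    psi = psi.copy()
    idx = [slice(None)] * N
    idx[control] = 1
    axis = target if control > target else target - 1   # axis index after slicing
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=axis)
    return psi

def circuit_output(bits, thetas):
    """Qubit-encode the bit string, run the circuit, return the readout expectation in [-1, 1]."""
    psi = np.zeros((2,) * N)
    psi[tuple(bits) + (0,)] = 1.0                # readout qubit starts in |0>
    t = iter(thetas)
    for _ in range(LAYERS):
        for q in range(N):
            psi = apply_1q(psi, ry(next(t)), q)
        for q in range(N - 1):                   # entangle neighbours, ending at the readout qubit
            psi = apply_cnot(psi, q, q + 1)
    p0 = (np.abs(psi) ** 2).reshape(-1, 2)[:, 0].sum()   # probability readout is |0>
    return 2 * p0 - 1

# Toy task: classify the parity of the bit string, with labels +1 / -1.
data = [list(map(int, np.binary_repr(i, width=n))) for i in range(2 ** n)]
labels = [1 - 2 * (sum(x) % 2) for x in data]

thetas = rng.uniform(0, 2 * np.pi, size=LAYERS * N)
lr, eps = 0.1, 1e-3
for step in range(300):
    i = rng.integers(len(data))                  # stochastic: one training sample per update
    grad = np.zeros_like(thetas)
    for k in range(len(thetas)):                 # finite-difference gradient of the sample loss
        shift = np.zeros_like(thetas)
        shift[k] = eps
        f_plus = (circuit_output(data[i], thetas + shift) - labels[i]) ** 2
        f_minus = (circuit_output(data[i], thetas - shift) - labels[i]) ** 2
        grad[k] = (f_plus - f_minus) / (2 * eps)
    thetas -= lr * grad

loss = np.mean([(circuit_output(x, thetas) - y) ** 2 for x, y in zip(data, labels)])
print("mean squared loss after training:", loss)
        </preformat>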
        <p>
          Instead of qubit encoding, [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] proposes a more compact representation of data using
amplitude encoding, which means that a bit string is mapped to a superposition of
computational basis states of a register. The implementation in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] is done using
simulations, and can easily be applied to real quantum devices with only minor adaptations.
Overall, this proposed model of a supervised quantum machine learning algorithm
seems promising for implementation on actual (NISQ) quantum devices in the near
future.
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Feynman</surname>
            ,
            <given-names>R.P.</given-names>
          </string-name>
          :
          <article-title>Quantum mechanical computers</article-title>
          .
          <source>Optics News</source>
          <volume>11</volume>
          (
          <issue>2</issue>
          ),
          <fpage>11</fpage>
          (
          <year>1985</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Preskill</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Quantum computing and the entanglement frontier</article-title>
          .
          <source>In: 25th Solvay Conference on Physics</source>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Arute</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arya</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babbush</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bacon</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bardin</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barends</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>Quantum supremacy using a programmable superconducting processor</article-title>
          .
          <source>Nature</source>
          <volume>574</volume>
          ,
          <fpage>505</fpage>
          -
          <lpage>510</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kadowaki</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nishimori</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Quantum annealing in the transverse ising model</article-title>
          .
          <source>Phys. Rev. E</source>
          <volume>58</volume>
          ,
          <fpage>5355</fpage>
          -
          <lpage>5363</lpage>
          (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Resch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karpuzcu</surname>
            ,
            <given-names>U.R.</given-names>
          </string-name>
          :
          <article-title>Quantum Computing: An Overview Across the System Stack</article-title>
          . arXiv:1905.07240v2 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phillipson</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Versluis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Machine learning in the quantum era</article-title>
          .
          <source>Digitale Welt</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <fpage>24</fpage>
          -
          <lpage>29</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Schuld</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petruccione</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Supervised learning with quantum computers</article-title>
          . Springer (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dunjko</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Briegel</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          :
          <article-title>Machine learning &amp; artificial intelligence in the quantum domain: a review of recent progress</article-title>
          .
          <source>Reports on Progress in Physics</source>
          ,
          <volume>81</volume>
          (
          <issue>7</issue>
          ),
          <fpage>074001</fpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Aaronson</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Read the fine print</article-title>
          .
          <source>Nature Physics</source>
          ,
          <volume>11</volume>
          (
          <issue>4</issue>
          ),
          <fpage>291</fpage>
          -
          <lpage>293</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Dam</surname>
            ,
            <given-names>T. van</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>N.M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phillipson</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>J.L. van den</given-names>
          </string-name>
          :
          <article-title>Hybrid Helmholtz Machines: A gate-based quantum circuit implementation</article-title>
          .
          <source>Quantum Information Processing</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Benedetti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Realpe-Gómez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perdomo-Ortiz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Quantum-assisted Helmholtz machines: a quantum-classical deep learning framework for industrial datasets in near-term devices</article-title>
          .
          <source>Quantum Science and Technology</source>
          ,
          <volume>3</volume>
          (
          <issue>3</issue>
          ), (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>MacKay</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          :
          <article-title>Information theory, inference and learning algorithms</article-title>
          . Cambridge university press (
          <year>2003</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>McEliece</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Posner</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodemich</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Venkatesh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>The capacity of the Hopfield associative memory</article-title>
          .
          <source>IEEE Transactions on Information Theory</source>
          <volume>33</volume>
          (
          <issue>4</issue>
          ),
          <fpage>461</fpage>
          -
          <lpage>482</lpage>
          (
          <year>1987</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Rebentrost</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bromley</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weedbrook</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lloyd</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Quantum Hopfield neural network</article-title>
          .
          <source>Physical Review A</source>
          <volume>98</volume>
          (
          <issue>4</issue>
          ),
          <fpage>042308</fpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Ventura</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinez</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Quantum associative memory with exponential capacity</article-title>
          .
          <source>In: IEEE International Joint Conference on Neural Networks Proceedings</source>
          . pp.
          <fpage>509</fpage>
          -
          <lpage>513</lpage>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Meinhardt</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>N.M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phillipson</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Quantum Hopfield neural networks: A new approach and its storage capacity</article-title>
          ,
          <source>TNO Report</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Farhi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neven</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Classification with quantum neural networks on near term processors</article-title>
          . arXiv preprint arXiv:1802.06002 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Meinhardt</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dekker</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phillipson</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Implementation of a Variational Quantum Circuit for Machine Learning with Compact Data Representation</article-title>
          ,
          <source>In: First International Symposium on Applied Artificial Intelligence, München (Germany)</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>