<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Quantum Machine Learning: Benefits and Practical Examples</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Frank</forename><surname>Phillipson</surname></persName>
							<email>frank.phillipson@tno.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">TNO</orgName>
								<address>
									<addrLine>Anna van Buerenplein 1</addrLine>
									<postCode>2595 DA</postCode>
									<settlement>Den Haag</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Quantum Machine Learning: Benefits and Practical Examples</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">046E1920D0D51FAB9A7A652D66435145</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T22:02+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Quantum Machine Learning</term>
					<term>Quantum Computing</term>
					<term>Near Future Quantum Applications</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A quantum computer that is useful in practice is expected to be developed in the next few years. An important application is expected to be machine learning, where benefits are expected in run time, capacity and learning efficiency. In this paper, these benefits are presented, and for each benefit an example application is given. A quantum hybrid Helmholtz machine uses quantum sampling to improve run time, a quantum Hopfield neural network shows an improved capacity, and a variational quantum circuit based neural network is expected to deliver a higher learning efficiency.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Quantum computers make use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data <ref type="bibr" target="#b0">[1]</ref>. Where classical computers require the data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. These computers would theoretically be able to solve certain problems much more quickly than any classical computer using even the best currently known algorithms. Examples are integer factorization using Shor's algorithm or the simulation of quantum many-body systems. This benefit is also called 'quantum supremacy' <ref type="bibr" target="#b1">[2]</ref>, which has only recently been claimed for the first time <ref type="bibr" target="#b2">[3]</ref>. There are two different quantum computing paradigms. The first is gate-based quantum computing, which is closely related to classical digital computers. Making gate-based quantum computers is hard, and state-of-the-art devices therefore typically have only a few qubits. The second paradigm is quantum annealing, based on the work of <ref type="bibr" target="#b3">[4]</ref>. A practically usable quantum computer is expected to be developed in the next few years. Within the next decade, quantum computers are expected to begin outperforming everyday computers, potentially leading to breakthroughs in artificial intelligence, the discovery of new pharmaceuticals and beyond. Currently, various parties are developing quantum chips, which are the basis of the quantum computer, such as Google, IBM, Intel, Rigetti, QuTech, D-Wave and IonQ <ref type="bibr" target="#b4">[5]</ref>. The size of these computers is limited, with the state-of-the-art being around 70 qubits for gate-based quantum computers and 5000 qubits for quantum annealers. 
In the meantime, progress is being made on algorithms that can be executed on those quantum computers and on the software (stack) to enable the execution of quantum algorithms on quantum hardware.</p><p>One of the promising candidates to show a useful quantum advantage on near-term, so-called noisy intermediate-scale quantum (NISQ), devices is believed to be machine learning. Different types of machine learning exist, most of them boiling down to supplying data to a computer, which then learns to produce a required outcome. The more data is given, the closer the outcome will be to the actual solution, or the higher the probability that the 'correct solution' is found. Even though machine learning on classical computers has solved numerous problems and improved approximate solutions of many others, it also has its limitations. Training machine learning models requires many data samples, and models may require a long time to be trained or to produce correct answers.</p><p>In this short paper we sketch some near-future machine learning applications using gate-based quantum computers. In Section 2 we give an introduction to Quantum Machine Learning and its expected benefits. In Section 3 a few example applications are given from our own research.</p></div>
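As a minimal illustration of the superposition mentioned above, the following numpy sketch applies a Hadamard gate to a qubit in the state |0⟩; measuring the resulting state yields 0 or 1 with equal probability. This is an illustrative aid only, not material from the cited works:

```python
import numpy as np

# A qubit state is a unit vector in C^2: |0> = (1, 0), |1> = (0, 1).
ket0 = np.array([1.0, 0.0])

# The Hadamard gate maps a basis state to an equal superposition.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

psi = H @ ket0                 # (1/sqrt(2), 1/sqrt(2))

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(psi) ** 2
print(probs)                   # equal probabilities: [0.5 0.5]
```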
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Quantum Machine Learning</head><p>Machine learning is a potentially interesting application for quantum computing <ref type="bibr" target="#b5">[6]</ref>. Current classical approaches demand huge computational resources, and in many cases training costs a lot of time. In machine learning, the machine learns from experience, using data examples, without a user or programmer giving it explicit instructions; the machine builds its own logic. Looking at classical machine learning, one can distinguish various types: ▪ Supervised learning: here labelled data is used, e.g. for classification problems. This means that the data used for learning contains information about the class it belongs to. ▪ Unsupervised learning: here unlabelled data is used, e.g. for clustering problems, where data points have to be assigned to a certain cluster of similar points, without prior information. ▪ Semi-supervised learning: here partially labelled data is available, and models are investigated that improve classification by combining labelled data with additional unlabelled data. Many of these models use generative, probabilistic methods. ▪ Reinforcement learning: here no labelled data is available, but a method is used to quantify the machine's performance in the form of rewards. The machine tries many different options and learns which actions are best based on the feedback (rewards) it receives.</p><p>If we think about where quantum computing and machine learning meet, we could think of the input and/or the processing part being classical or quantum <ref type="bibr" target="#b6">[7]</ref>, giving four combinations. If both are classical, we have classical machine learning. Classical machine learning can be used to support quantum computing, for example in quantum error correction. Quantum processes can also be used as an inspiration for classical algorithms, such as tensor networks, simulated annealing and optimization. 
If the input is a quantum state and the computing is classical, the machine learning routine is used to translate quantum information into classical information. If both the input and the processing are quantum, we have true quantum machine learning; however, only a few results in this direction have been published so far. In most quantum machine learning research, however, the focus is on the fourth case, where the input contains classical information and the processing is quantum.</p><p>One of the main benefits of quantum computers is the potential improvement in computational speed. Depending on the type of problem and algorithm, quantum algorithms can have a polynomial or super-polynomial (exponential) speed-up compared to classical algorithms. However, other benefits are expected to be more relevant in the near future. Quantum computers could possibly learn from less data, deal with more complex structures, or cope better with noisy data. In short, the three main benefits of quantum machine learning are (interpretation based on <ref type="bibr" target="#b7">[8]</ref>): ▪ Improvements in run-time: obtaining faster results; ▪ Learning capacity improvements: an increase of the capacity of associative or content-addressable memories; ▪ Learning efficiency improvements: less training information or simpler models are needed to produce the same results, or more complex relations can be learned from the same data. For each of these benefits we show some examples in Section 3.</p><p>The improvement in run-time can be realized in various ways. Machine learning consists mainly of optimization tasks that might be done faster by quantum annealers, like the D-Wave machine. Another way of getting a speed-up is the use of quantum sampling in generative models. Sampling is one of the tasks on which a quantum computer is expected to outperform classical computers already in the near future. 
Among the first algorithms expected to outperform their classical counterparts are hybrid quantum-classical algorithms. These hybrid algorithms perform part of the algorithm classically and part on a quantum machine, exploiting specific benefits such as efficient sampling. The last way to realize the speed-up is via specific quantum machine learning algorithms using amplitude amplification and amplitude encoding. Amplitude amplification is a quantum computing technique known to give a quadratic speed-up compared with classical approaches. In amplitude encoding, the amplitudes of qubits are used to store data vectors efficiently, enabling an exponential speed-up. However, this exponential speed-up is not obvious, and the assumptions behind this theoretical speed-up pose huge technological challenges; see also <ref type="bibr" target="#b8">[9]</ref>.</p></div>
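The idea behind amplitude encoding can be sketched in a few lines of numpy: a classical vector is padded to length 2^k, normalized, and stored in the amplitudes of just k qubits, which is the source of the claimed exponential compactness. The helper below is an illustrative sketch of the encoding arithmetic only; preparing such a state efficiently on real hardware is precisely one of the technological challenges discussed in [9]:

```python
import numpy as np

def amplitude_encode(x):
    """Store a classical vector in the amplitudes of a k-qubit state.

    A (padded) vector of length 2**k fits into only k qubits, since a
    k-qubit state has 2**k amplitudes.
    """
    x = np.asarray(x, dtype=float)
    k = int(np.ceil(np.log2(len(x))))        # qubits needed
    padded = np.zeros(2 ** k)
    padded[: len(x)] = x
    state = padded / np.linalg.norm(padded)  # amplitudes must be normalized
    return k, state

n_qubits, state = amplitude_encode([3.0, 4.0])
print(n_qubits)                 # 1 qubit suffices for a length-2 vector
print(state)                    # normalized amplitudes: [0.6 0.8]
print(np.sum(state ** 2))       # ~1.0 -- a valid quantum state
```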
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Examples</head><p>In the previous section, three main benefits of Quantum Machine Learning were given: improvements in run-time, capacity and learning efficiency. For each of the three categories we give an example based on our own research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Improving Runtime - Quantum Hybrid Helmholtz Machine</head><p>As mentioned before, hybrid algorithms perform part of the algorithm classically and part on a quantum machine, exploiting specific benefits such as efficient sampling. Generative modelling is a task where such hybrid algorithms offer a solution to challenges encountered in classical computing. These generative models are used, for example, to learn probability distributions over (high-dimensional) data sets. By increasing the depth of a generative model, the generalization capability grows and more abstract representations of the data can be found, however at the cost of intractable inference and training. Both inference and training rely on variational approximations and Markov chain Monte Carlo sampling, both of which are computationally expensive. As quantum computers allow for efficient sampling, the expensive sampling subroutine can be run on a quantum computer, thus reducing the computational complexity of generative models significantly. This can be used in the implementation of a hybrid Helmholtz machine, a special type of generative model, on a gate-based quantum computer <ref type="bibr" target="#b9">[10]</ref> or on an annealing device <ref type="bibr" target="#b10">[11]</ref>. A Helmholtz machine is an artificial neural network consisting of a bottom-up recognition network and a top-down generative network. The recognition network takes data and produces probability distributions over it, while the generative network generates representations of the data and the hidden variables.</p><p>In <ref type="bibr" target="#b9">[10]</ref>, a Parameterized Shallow Quantum Circuit implementation of a hybrid Helmholtz machine is given. This circuit captures aspects of Bayesian networks and Helmholtz machines and is trained by a gradient-free optimizer under an adaptation of the Wake-Sleep algorithm. 
The implementation of this circuit was done on the Quantum Inspire simulator (www.quantum-inspire.com) and has the potential to be run on a few-qubit quantum device. The proposed hybrid Helmholtz machine was tested on a small problem, the BAS22 data set <ref type="bibr" target="#b11">[12]</ref>, consisting of two-by-two pixel images, of which the valid patterns are those that contain only bars or only stripes. For both the hybrid and the classical Helmholtz machine, four visible nodes and three hidden ones were used. The Powell optimization method used gave a promising set of parameters, for which the hybrid Helmholtz machine outperformed its classical counterpart.</p></div>
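The BAS22 data set is small enough to enumerate directly. The sketch below (an illustration with our own variable names, not code from [10] or [12]) lists all two-by-two binary images and keeps those consisting only of bars or only of stripes; note that under this convention the all-black and all-white images count as both bars and stripes:

```python
import itertools

def is_bars_or_stripes(img):
    """img is a 2x2 binary grid given as two rows."""
    stripes = all(len(set(row)) == 1 for row in img)        # each row constant
    bars = all(len(set(col)) == 1 for col in zip(*img))     # each column constant
    return bars or stripes

# Enumerate all 16 possible 2x2 binary images, keep the valid BAS patterns.
patterns = []
for bits in itertools.product([0, 1], repeat=4):
    img = [bits[:2], bits[2:]]
    if is_bars_or_stripes(img):
        patterns.append(bits)

print(len(patterns))   # 6 of the 16 images are valid bars-and-stripes patterns
```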
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Improving Capacity - Quantum Hopfield Neural Network</head><p>Neural networks are a subclass of machine learning algorithms, consisting of nodes that can be connected in various configurations and interact with each other via weighted edges. As a special case, Hopfield neural networks (HNNs) consist of a single layer of nodes, all connected with one another via symmetric edges and without self-connections. Due to this connectivity, HNNs can be used as associative memories, meaning that they can store a set of patterns and associate noisy inputs with the closest stored pattern. Memory patterns can be imprinted onto the network by training schemes such as Hebbian learning, in which the weights are calculated directly from all memory patterns, requiring only low computational effort. It is possible to store an exponential number of stable attractors in an HNN if the set of attractors is predetermined and fixed. In general, however, fewer patterns can be stored if they are randomly selected, resulting in a very limited storage capacity of HNNs. For Hebbian learning, the storage capacity of an HNN with n nodes is asymptotically n/(4 log n) patterns <ref type="bibr" target="#b12">[13]</ref>. Translating HNNs to counterparts in the quantum domain is expected to offer storage capacities beyond the reach of classical networks <ref type="bibr" target="#b13">[14]</ref>. For example, in <ref type="bibr" target="#b14">[15]</ref> a quantum HNN is proposed that could offer an exponential capacity when qutrits are used.</p><p>In <ref type="bibr" target="#b15">[16]</ref> a quantum feed-forward representation of an HNN is presented, where the unitaries are trained on a training set. The performance is compared to that of a classical HNN using Hebbian learning, using three increasingly strict error rates. 
The 'strict error rate' considers only the patterns the HNN should memorize and equals one if at least one bit of any memory pattern cannot be retrieved. The 'message error rate' is less strict and equals the fraction of probe vectors from which the memory cannot be recovered exactly. The 'bit error rate' also uses probe vectors but considers separately all bits that cannot be retrieved correctly. Using a quantum computer simulator, only small patterns can be tested. These tests show that all the error rates of the quantum model are smaller than those of the classical model, indicating a higher capacity. Fig. <ref type="figure">1</ref>. A quantum neural network consists of a layered structure with parameter-dependent unitary operations. The lines correspond to qubits, with the lowest one being the readout qubit and the other ones being the data qubits.</p></div>
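A classical Hebbian HNN of the kind used as the baseline above can be sketched in a few lines of numpy. This is an illustrative toy, not the implementation of [16]: the weights come directly from the outer products of the ±1 memory patterns, and recall iterates sign updates until a noisy probe settles on a stored attractor:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: weights computed directly from the +/-1 memory patterns."""
    P = np.array(patterns, dtype=float)   # shape (num_patterns, n)
    n = P.shape[1]
    W = P.T @ P / n                       # outer-product (Hebb) rule
    np.fill_diagonal(W, 0.0)              # HNNs have no self-connections
    return W

def recall(W, probe, steps=10):
    """Iterate synchronous sign updates until the state settles on an attractor."""
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                   # break ties deterministically
    return s.astype(int)

# Two 8-node memory patterns; Hebbian capacity scales as ~ n/(4 log n).
memories = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
W = hebbian_weights(memories)

noisy = [1, 1, 1, 1, -1, -1, -1, 1]       # first memory with the last bit flipped
print(recall(W, noisy))                   # recovers the first memory pattern
```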
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Improving Learning - Variational Quantum Circuit for Machine Learning</head><p>A recently very popular approach to finding and implementing new hybrid QML algorithms is the use of so-called variational quantum circuits (VQCs), which consist of a number of quantum gates with parameters that are optimized. These quantum circuits can be used to evaluate some cost function. To optimize the cost function, a variety of classical strategies can be used, which in turn may again employ a quantum circuit.</p><p>In <ref type="bibr" target="#b16">[17]</ref> a VQC is proposed with a sequence of unitary gate operations depending on continuous variables, used for binary classification. In <ref type="bibr" target="#b17">[18]</ref> this framework is presented in more detail, and a more efficient representation of data is integrated, which is important for implementations on real near-term quantum devices. In this neural network, a classical input bit string of length n is translated to the initial state of a quantum register (qubit encoding) with n+1 qubits, where the last qubit is regarded as the readout qubit used to estimate the classification of the sample. A set of parameter-dependent (θ) unitaries acts on all qubits sequentially, as shown in Fig. <ref type="figure">1</ref>. The loss function then has to be defined and minimized, depending on the parameter θ. For this, the parameters are updated using a stochastic gradient descent method.</p><p>Instead of qubit encoding, <ref type="bibr" target="#b17">[18]</ref> proposes a more compact representation of data using amplitude encoding, which means that a bit string is mapped to a superposition of computational basis states of a register. The implementation in <ref type="bibr" target="#b17">[18]</ref> is done using simulations and can easily be applied to real quantum devices with only minor adaptations. 
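The variational training loop can be illustrated with a toy example: a single-qubit "circuit" simulated in numpy, in which a trainable rotation angle θ is optimized by gradient descent on a squared loss over ±1 labels. This is a deliberately simplified sketch of the general VQC idea, not the circuit of [17, 18]:

```python
import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate as a real 2x2 matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def predict(theta, x):
    """Encode x as a rotation, apply the trainable rotation, read out <Z>."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2          # Z expectation, in [-1, 1]

def loss(theta, xs, ys):
    """Squared loss between circuit output and the +/-1 labels."""
    return np.mean([(predict(theta, x) - y) ** 2 for x, y in zip(xs, ys)])

# Toy data: inputs near 0 labelled +1, inputs near pi labelled -1.
xs = np.array([0.1, -0.2, np.pi - 0.1, np.pi + 0.2])
ys = np.array([1.0, 1.0, -1.0, -1.0])

theta, lr, eps = 1.0, 0.1, 1e-6
for _ in range(500):                               # finite-difference gradient descent
    grad = (loss(theta + eps, xs, ys) - loss(theta - eps, xs, ys)) / (2 * eps)
    theta -= lr * grad

print(round(loss(theta, xs, ys), 3))               # loss shrinks toward zero
```

On real hardware the gradient would typically be estimated from circuit evaluations as well (e.g. via parameter-shift rules), but the classical optimization loop has the same shape.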
Overall, this proposed model of a supervised quantum machine learning algorithm seems promising for implementation on actual (NISQ) quantum devices in the near future.</p></div>		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Quantum mechanical computers</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Feynman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics News</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">11</biblScope>
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Quantum computing and the entanglement frontier</title>
		<author>
			<persName><forename type="first">J</forename><surname>Preskill</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">25th Solvay Conference on Physics</title>
				<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Quantum supremacy using a programmable superconducting processor</title>
		<author>
			<persName><forename type="first">F</forename><surname>Arute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Arya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Babbush</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bacon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Bardin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Barends</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">574</biblScope>
			<biblScope unit="page" from="505" to="510" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Quantum annealing in the transverse ising model</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kadowaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nishimori</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Phys. Rev. E</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="5355" to="5363" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Resch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">R</forename><surname>Karpuzcu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1905.07240v2</idno>
		<title level="m">Quantum Computing: An Overview Across the System Stack</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Machine learning in the quantum era</title>
		<author>
			<persName><forename type="first">N</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Phillipson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Versluis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digitale Welt</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="24" to="29" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Supervised learning with quantum computers</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schuld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Petruccione</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Machine learning &amp; artificial intelligence in the quantum domain: a review of recent progress</title>
		<author>
			<persName><forename type="first">V</forename><surname>Dunjko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Briegel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Reports on Progress in Physics</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page">74001</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Read the fine print</title>
		<author>
			<persName><forename type="first">S</forename><surname>Aaronson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature Physics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="291" to="293" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Hybrid Helmholtz Machines: A gate-based quantum circuit implementation</title>
		<author>
			<persName><forename type="first">T</forename><surname>van Dam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M P</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Phillipson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>van den Berg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Quantum Information Processing</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Quantum-assisted Helmholtz machines: a quantum-classical deep learning framework for industrial datasets in near-term devices</title>
		<author>
			<persName><forename type="first">M</forename><surname>Benedetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Realpe-Gómez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Perdomo-Ortiz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Quantum Science and Technology</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Information theory, inference and learning algorithms</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>MacKay</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>Cambridge university press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The capacity of the Hopfield associative memory</title>
		<author>
			<persName><forename type="first">R</forename><surname>McEliece</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Posner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rodemich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Venkatesh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Theory</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="461" to="482" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Quantum Hopfield neural network</title>
		<author>
			<persName><forename type="first">P</forename><surname>Rebentrost</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Bromley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Weedbrook</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lloyd</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physical Review A</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">42308</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Quantum associative memory with exponential capacity</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ventura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Martinez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Joint Conference on Neural Networks Proceedings</title>
				<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="509" to="513" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Meinhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M P</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Phillipson</surname></persName>
		</author>
		<title level="m">Quantum Hopfield neural networks: A new approach and its storage capacity</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">TNO Report</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Classification with quantum neural networks on near term processors</title>
		<author>
			<persName><forename type="first">E</forename><surname>Farhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Neven</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1802.06002</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Implementation of a Variational Quantum Circuit for Machine Learning with Compact Data Representation</title>
		<author>
			<persName><forename type="first">N</forename><surname>Meinhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Dekker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Phillipson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">First International Symposium on Applied Artificial Intelligence</title>
				<meeting><address><addrLine>München (Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
