<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Modeling of Multiport Heteroassociative Memory (MBHM) on the Basis of Equivalence Models Implemented on Vector-Matrix Multipliers</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Saiko</surname></persName>
							<email>vgsaiko@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<settlement>Kyiv</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vladimir</forename><surname>Krasilenko</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Vinnytsia National Agrarian University</orgName>
								<address>
									<addrLine>st. Sonyachna, 3</addrLine>
									<postCode>21008</postCode>
									<settlement>Vinnytsia, Vinnytsia Oblast</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Illia</forename><surname>Chikov</surname></persName>
							<email>ilya95chikov@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Vinnytsia National Agrarian University</orgName>
								<address>
									<addrLine>st. Sonyachna, 3</addrLine>
									<postCode>21008</postCode>
									<settlement>Vinnytsia, Vinnytsia Oblast</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Diana</forename><surname>Nikitovych</surname></persName>
							<email>diananikitovych@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Vinnytsia National Agrarian University</orgName>
								<address>
									<addrLine>st. Sonyachna, 3</addrLine>
									<postCode>21008</postCode>
									<settlement>Vinnytsia, Vinnytsia Oblast</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Information Technology and Implementation (IT&amp;I-2023)</orgName>
								<address>
									<addrLine>November 20-21</addrLine>
									<postCode>2023</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Modeling of Multiport Heteroassociative Memory (MBHM) on the Basis of Equivalence Models Implemented on Vector-Matrix Multipliers</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">716534A693316D7D522FDCA21DA1BDE1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Multiport associative memory</term>
					<term>equivalence model</term>
					<term>vector-matrix multiplier</term>
					<term>heteroassociative memory</term>
					<term>neural network</term>
					<term>associative memory capacity</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work considers issues of associative processing of information for the purpose of hardware and software implementation of the relevant mathematical, simulation and physical models of associative memory. On the basis of a review of known publications, including those in which equivalence models of associative and heteroassociative memory were considered and their advantages were shown, the need for further research on such AM/HAM models and for simulation experiments aimed at finding their effective implementations is substantiated. The results of simulation modeling of two implementations of multi-port heteroassociative memory (MHAM) based on vector-matrix multipliers and vector-matrix equivalentors, which, as accelerators, are adapted to equivalence models, are given. The results of modeling the processes of hetero-associative letter-tuple recognition, performed in Mathcad for two versions of the MHAM implementation, confirm their correct functioning: the correct tuples of all 100 output letters pairwise associated with 100 input letters are formed by the models at the output ports of the MHAM.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The last ten years have been marked by significant, almost radical, theoretical and practical achievements in artificial intelligence research. The neurocomputer boom, which began on a new turn of the development spiral in the mid-1980s and had spread across almost the entire world by the early 1990s, has not subsided even today; it continues to present scientists with ever new problems and challenges. A significant increase in state support and an expansion of the sources and volumes of financing for artificial intelligence and neurocomputing projects abroad led to the rapid formation of an entire research field. The practical results of this priority area were the creation and mass production of neurocomputers, neuroaccelerators (emulators) for PCs, neurochips and neural software, which are now widely used in almost all areas of human activity. Numerous examples of the practical application of neurotechnologies in defense and civil systems testify to the massive progress of neurotechnologies and robotic intelligent systems. At the same time, it became clear that the complexity of creating new advanced intelligent systems had been underestimated, since most promising applications require processing large amounts of information, often in real time. For example, the latest models and architectures of convolutional neural networks and their modifications have more than a hundred layers. This is why the trend of transition from software emulation to combined software-hardware implementation of neural network models has emerged and is already partially realized today. Yet despite the dramatic increase in the number of neurochips and accelerators for neural models, hardware implementations still use either outdated and simplified models or traditional parallel computing systems.
New specific requirements for hardware implementations follow from the characteristic features of new architectural solutions for promising neural networks in particular applications. Such neural networks are specific parallel computing structures with a large number of layers; they are characterized by a large number of connections and an even larger number of parameters during training. All this creates difficulties when processing data or signals in such models: large vector dimensions and large numbers of variable model parameters require memorizing large arrays and samples and storing a significant number of filters, activation maps, etc. This leads to the conclusion that effective implementations should be focused primarily on fast, high-performance processing, especially for tasks of associative processing of large-scale and multispectral images.</p><p>Hence the urgent need for, and relevance of, further research into the processes of natural intelligence, especially associative processing of information, including for the purpose of its hardware and software implementation based on the latest mathematical, simulation and physical models of associative memory (AM) and the most advanced recent biological and neurophysiological principles in neurocybernetics. This need is driven by the expanded use of artificial intelligence in recent decades in modern intelligent decision support systems, remote monitoring systems, and robotic systems for recognizing and identifying images of the most diverse types in the most diverse problem areas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related works and their analysis</head><p>A number of neural models and networks implementing auto-associative and hetero-associative memory (HAM) are known. However, an analysis of a significant number of thematic publications, including the above-mentioned <ref type="bibr">[7, 8, 11, 12, 14-16, 19-21, 24-28]</ref> and more recent works <ref type="bibr" target="#b28">[29]</ref><ref type="bibr" target="#b29">[30]</ref><ref type="bibr" target="#b30">[31]</ref><ref type="bibr" target="#b31">[32]</ref><ref type="bibr" target="#b32">[33]</ref><ref type="bibr" target="#b33">[34]</ref><ref type="bibr" target="#b34">[35]</ref><ref type="bibr" target="#b37">[38]</ref><ref type="bibr" target="#b38">[39]</ref><ref type="bibr" target="#b39">[40]</ref><ref type="bibr" target="#b40">[41]</ref><ref type="bibr" target="#b41">[42]</ref>, shows that the capacity of such AM models, determined by the number of memorized and successfully associated (recognized) images, does not exceed (0.14-0.60)·N, where N is the number of neurons in the AM model <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>.
At the same time, already more than 10-15 years ago works <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b27">28]</ref> appeared in which the so-called equivalence models (EMs) of neural networks (NNs) and of AM (HAM) were proposed and studied by the authors, and their significant advantages over other known models and neural paradigms were confirmed experimentally, especially regarding capacity and more convenient mapping onto parallel hardware processors <ref type="bibr" target="#b37">[38]</ref><ref type="bibr" target="#b38">[39]</ref><ref type="bibr" target="#b39">[40]</ref>.</p><p>The capacity of such AM equivalence models (AM_EMs) is at least 4-5 times higher than N, and partial model experiments suggest that it can be orders of magnitude higher still. In addition, AM_EMs tolerate a significantly greater power of interference that distorts and changes the input images while still forming correct, high-quality associative responses. All this motivates further research on such AM/HAM_EMs aimed at finding their effective implementations. In works <ref type="bibr" target="#b25">[26]</ref><ref type="bibr" target="#b26">[27]</ref><ref type="bibr" target="#b27">[28]</ref><ref type="bibr" target="#b36">37]</ref>, four equivalence models were studied and modeled, and it was concluded that such models can be successfully applied to build not only single-port but also multi-port, more general HAM for processing and recognizing patterns that are strongly correlated and significantly damaged by interference. Some possible hardware implementations of such equivalence models, modified for processing 1-D or 2-D images, were proposed and highlighted in works <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref>.
But most of these proposals concerned only single-port AM implementations based on optical systems with spatial and time-pulse integration <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref>. Such implementations are purely analog and therefore limit the dynamic range of the processed signal values, the accuracy of the computation procedures, and especially the dimensions of the images (vectors or matrices) stored in the AM.</p><p>In addition, in the same works it was determined, based on the models, the processing procedures and their description, that applying the most generalized equivalence models to implement the multiport HAM (MHAM) requires vector-matrix or matrix-tensor procedures with equivalence (non-equivalence) operations, or specialized accelerator devices, so-called vector-matrix equivalentors (VMEs). Since, as was shown by the authors of work <ref type="bibr" target="#b36">[37]</ref>, VMEs can be implemented on the basis of two vector-matrix multipliers (VMMs), high-performance, high-speed vector-matrix specialized computers and matrix and systolic processors can be used to construct the MHAM.</p></div>
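Because a VME reduces to two VMMs, the first-step computation can be sketched in a few lines. The NumPy form below uses a normalized-equivalence formula for unipolar (0/1) signals that is our illustrative assumption, not the authors' exact Mathcad code; the function and variable names are ours.

```python
import numpy as np

def vme(x, W):
    """Vector-matrix equivalentor sketched as two vector-matrix multipliers.

    For a unipolar input vector x (values in [0, 1]) and a matrix W whose rows
    are stored reference vectors, the normalized equivalence of x with every
    row is one multiplication of x with W plus one multiplication of the
    complements (1 - x) with (1 - W), scaled by the vector length N.
    """
    n = x.shape[0]
    return (W @ x + (1.0 - W) @ (1.0 - x)) / n

x = np.array([1.0, 0.0, 1.0, 1.0])
W = np.array([[1.0, 0.0, 1.0, 1.0],   # identical row    -> equivalence 1
              [0.0, 1.0, 0.0, 0.0]])  # complementary row -> equivalence 0
print(vme(x, W))  # prints [1. 0.]
```

The two matrix products correspond directly to the two VMMs of work [37]: one accumulates coincidences of ones, the other coincidences of zeros.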
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Formulation of Tasks</head><p>The conclusions from the above review and analysis of publications justify, as an urgent and important task, the need to develop, simulate and verify equivalence models and implementations of the MHAM that structurally correspond as closely as possible to already existing parallel computers, such as VMMs or VMEs, and that could be implemented in the shortest possible time with minimal additional nodes or components. A secondary task is to evaluate the possible characteristics and indicators of such models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The results of the study of EM MHAM on the basis of VMMs and VMEs</head><p>The theoretical foundations of the design of MHAM based on EMs were developed in works <ref type="bibr">[26-28, 37, 38]</ref>. Therefore, here, in order to verify such models and the quality of functioning of the MHAM proposed on their basis, we consider only the modeling aspects of the above-mentioned objects and analyze the obtained results. Let us note the advantages of this concept and of EMs. Namely, the use of such generalized neurobiological operations (continuous logic) <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref> as "equivalence" and "non-equivalence", "auto-equivalence" and "auto-non-equivalence" for building models of neural networks and associative memory made it possible to exploit the dualism of these generalized complementary operations, to better recognize even contrast-inverted images, and to use different types of nonlinear transformations. The authors of the concept showed that such equivalence models are more general (Hamming and Hopfield networks are their special cases) and make it possible to describe and model both excitatory and inhibitory synaptic weighting coefficients and, moreover, to use for this purpose not only bipolar but also unipolar coding of signals and data. This simplifies the possible options for implementing the models, including by reducing the range of power supply voltages. But in those works attention was paid mainly to the AM models themselves and only to some of their implementations, and the results of machine simulation were not reported <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref> from the point of view of the goal set here. Therefore, in this work we consider AM/HAM equivalence models, namely aspects of their implementations, in order to show their adequacy and advantages.
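The complementary operations named above can be illustrated with one common continuous-logic form. The specific definitions below, eq(x, y) = 1 − |x − y| and its dual, are our assumptions for illustration (on binary inputs they reduce to XNOR and XOR) and are not necessarily the exact operations of [37, 38].

```python
def eq(x: float, y: float) -> float:
    """Continuous-logic equivalence of two signals in [0, 1].

    Illustrative form: 1 - |x - y|. Identical signals give 1,
    complementary (contrast-inverted) signals give 0.
    """
    return 1.0 - abs(x - y)

def neq(x: float, y: float) -> float:
    """Non-equivalence: the dual (complement) of eq, i.e. |x - y| (XOR on bits)."""
    return 1.0 - eq(x, y)

print(eq(1.0, 1.0), neq(1.0, 0.0))  # prints 1.0 1.0
```

The dualism is visible directly: eq(x, y) + neq(x, y) = 1 for any pair of signals, which is what lets the model treat "similarity" and "dissimilarity" as one complementary pair rather than two unrelated measures.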
With the obtained results of computer simulation we also aim to attract developers of the element base to the new architectural solutions of multi-port associative memory proposed by us on the basis of these new models, and to determine the system requirements for the element base of promising implementations.</p><p>For simulation modeling and experiments in Mathcad, we created a training sample of mutually connected (associated with each other) images in the form of vectors that are elements of some set. For better visibility, visualization and faster perception and comparison, in one of the experiments code vectors of ASCII letter symbols were selected as images; mostly letters were used. Each code vector of a letter, whether an input one or one from the training set, is a four-fold repetition of an eight-bit binary Gray code (byte) whose numerical equivalent is a number in the range 0 to 255, i.e., the character code. In the procedures for converting traditional binary codes into binary Gray codes, we used models based on equivalence (non-equivalence) operations vectorized in Mathcad. The four-fold repetition of the bytes of the code vectors was used to increase the dimensionality of the image vectors and to allow checking the immunity to interference during pattern recognition in the MHAM over an extended range of interference power, which is proportional to the number of changed bits in the code vectors.</p><p>The procedure for entering 128 characters (a fragment with some of the entered letters and symbols) is shown in figure <ref type="figure" target="#fig_0">1 (a)</ref>. Each character is coded by a byte in accordance with the accepted coding system. The same fragment shows the procedure for entering 100 English letters or symbols that correspond to 100 input ports, each of which is supplied with the code vector of the corresponding letter or symbol.
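The construction of a code vector described above can be sketched as follows. The binary-to-Gray conversion is the standard XOR (non-equivalence) of adjacent bits, matching the equivalence/non-equivalence operations the paper vectorizes in Mathcad; the function names and the bit ordering are our assumptions.

```python
def byte_to_gray(code: int) -> list[int]:
    """Convert an 8-bit character code to its binary Gray code bits (MSB first).

    Each Gray bit is the non-equivalence (XOR) of adjacent binary bits:
    g = b ^ (b >> 1).
    """
    gray = code ^ (code >> 1)
    return [(gray >> (7 - i)) & 1 for i in range(8)]

def code_vector(ch: str, repeats: int = 4) -> list[int]:
    # 32-bit code vector: the Gray-coded byte of the character repeated
    # four times, as described in the text.
    return byte_to_gray(ord(ch)) * repeats

print(byte_to_gray(65))  # ord("A") = 65 -> [0, 1, 1, 0, 0, 0, 0, 1]
```

The repetition raises the vector dimension from 8 to 32, so flipping a given fraction of bits models proportionally stronger interference power.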
To form pairs of hetero-associated images, we matched each letter from the created 100-letter set with the next letter using a cyclic shift. Using the formulas given at the bottom of figure <ref type="figure" target="#fig_1">2</ref>, we formed a set of 128 associated training vectors in the form of the TY and TYN matrices corresponding to each of the 128 training vectors (the TX and TXN matrices); they are shown transposed in figure <ref type="figure" target="#fig_2">3</ref>. That is, the first letter from the TX set was matched with the second letter, each subsequent letter was matched with the next one, and the last letter was matched with the first. Figure <ref type="figure" target="#fig_2">3</ref> (left) shows the formulas that we used to simulate the procedures for finding the necessary matrices (HN and HNN) in the first step using a set of two arrays of vector-matrix multipliers (VMMs).</p><p>These matrices contain the normalized equivalences and non-equivalences of the compared vectors and are new metrics that complementarily reflect "similarity" and "dissimilarity", essentially "distance". Fragments of windows with the results of modeling the procedures for calculating the output matrices (OUTY and OUTYN) of normalized equivalences and non-equivalences in the second step are also shown there. In addition, figure <ref type="figure" target="#fig_2">3</ref> shows the formulas for the nonlinear transformation of the signals of the hidden-layer neurons with the nonlinearity coefficient γ, which correspond to the component-wise nonlinear transformations of the NHN and NHNN matrices.</p><p>As can be seen from figure <ref type="figure" target="#fig_3">4</ref>, the use of operations and vectorized transformations that reproduce the threshold component-wise processing of sub-vectors and the formation of an array of output feedback vectors at the output of the BGAP, with their subsequent transformation into output letter symbols, made it possible to trace the process and form a tuple of output images.</p><p>The input and generated output letter tuples are shown in figure <ref type="figure" target="#fig_3">4</ref> below.
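The two-step recall just described, equivalences via two VMMs, a nonlinear transform with coefficient γ, then mixing of the associated vectors and thresholding, can be sketched for a single port. The power-law form of the γ-nonlinearity and the 0.5 threshold are our assumptions (the text names only the coefficient), as are all identifiers here.

```python
import numpy as np

def mham_step(x, TX, TY, gamma=7):
    """One recall pass of an equivalence-model HAM for a single port (sketch).

    Step 1: normalized equivalences of input x with every stored vector
    (rows of TX), computed as two vector-matrix multiplications.
    The gamma nonlinearity sharpens the competition between candidates.
    Step 2: the sharpened, renormalized weights mix the associated vectors
    (rows of TY) into the output, which is thresholded to binary form.
    """
    n = TX.shape[1]
    hn = (TX @ x + (1 - TX) @ (1 - x)) / n   # step 1: equivalence metrics
    nhn = hn ** gamma                        # nonlinear transform (assumed power law)
    w = nhn / nhn.sum()                      # normalize the contributions
    y = TY.T @ w                             # step 2: mix associated vectors
    return (y >= 0.5).astype(int)            # threshold to a binary output vector

TX = np.array([[1, 1, 0, 0],   # stored input vectors (one per row)
               [0, 0, 1, 1]])
TY = np.array([[0, 1, 0, 1],   # hetero-associated output vectors
               [1, 0, 1, 0]])
print(mham_step(np.array([1, 1, 0, 0]), TX, TY))  # prints [0 1 0 1]
```

Running all ports at once replaces the vector x with a matrix of input vectors, turning both steps into matrix-matrix products, which is exactly why the model maps well onto VMM arrays.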
Thus, the obtained simulation results confirm the correct functioning of the MHAM, as a tuple of 100 output letters paired with 100 input letters is formed at the output ports of the MHAM. As can be seen, all 100 letters at the output of the MHAM are recognized in accordance with the hetero-associations formed by the training sample, both when the MHAM model based on the VME is used for modeling (the results are shown in figure <ref type="figure" target="#fig_4">5</ref>) and when the MHAM model based on VMMs is used (the results are shown in figure <ref type="figure" target="#fig_5">6</ref>). The formed tuples of letter symbols, also displayed in figure <ref type="figure" target="#fig_4">5</ref>, testify to the successful reproduction of all associated pairs, namely the formation, for each letter of the input tuple, of the corresponding next letter. Additional experiments showed that the modeled implementation options of the MHAM tolerate damage of the code vectors by interference within permissible limits (30-35%) without violating the correct functioning of the MHAM. These additional aspects of the obtained results will be presented in the conference report.  In addition, the performed simulations show that, for correct functioning and for improving the functional capabilities and indicators (especially capacity) of the MHAM, particularly when associating noisy images with strong correlation between them, it is desirable to use equivalence models (EMs), which not only better describe, owing to their duality, the processes of image comparison and the determination of specific normalized metrics, but also map more easily onto parallel processing structures such as matrix-matrix or matrix-tensor multipliers.
But the additional components without which the advantages of such models cannot be achieved are arrays of elements that implement component-wise nonlinear transformations of signals (intensities of image pixels or numerical values of matrix elements). Moreover, the latter are basic operations in the most promising paradigms of convolutional neural networks with adaptive self-learning <ref type="bibr" target="#b37">[38]</ref>. A review of the mathematical operators implemented by neurons leads to the following conclusion. Almost all models <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b16">[17]</ref><ref type="bibr" target="#b17">[18]</ref><ref type="bibr" target="#b18">[19]</ref><ref type="bibr" target="#b19">[20]</ref><ref type="bibr" target="#b20">[21]</ref><ref type="bibr" target="#b21">[22]</ref><ref type="bibr" target="#b22">[23]</ref>, with rare exceptions, use mathematical models of neurons that reduce to two main mathematical component-operators: the first computes a function of two vectors (most often a simple weighted sum), and the second performs a nonlinear transformation of the output value of the first component into the output signal. Many works are devoted to the design of hardware devices implementing neuron activation functions, but they do not consider the design of autoequivalence transformation functions <ref type="bibr" target="#b37">[38]</ref> for EMs <ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref><ref type="bibr" target="#b21">22]</ref>. It is desirable to design nodes that would implement the entire set of the most common arbitrary types of nonlinear transformations <ref type="bibr" target="#b42">[43]</ref>. Due to space limitations, we do not provide further links here.
We partially solved the issue of the simplest approximations of autoequivalence functions (a three-segment approximation with a floating threshold) and modeled the developed cell circuits. The basic cell of this approximation consisted of only 18-20 transistors and operated with a conversion time of 1 to 2.5 μs. The diagram of the cell and the window with the results of its simulation are shown in figure <ref type="figure">7</ref>.</p></div>
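The behavior of a three-segment approximation with a floating threshold can be sketched functionally. The exact transfer characteristic of the 18-20-transistor cell is not given in the text, so the piecewise-linear shape, the threshold parameter theta and the half-width delta below are our illustrative assumptions.

```python
def autoequiv_3seg(e: float, theta: float = 0.5, delta: float = 0.1) -> float:
    """Three-segment approximation of an autoequivalence activation (sketch).

    Clamp to 0 below the floating-threshold window [theta - delta,
    theta + delta], clamp to 1 above it, and interpolate linearly inside it.
    Shifting theta moves the whole window, which is the "floating" part.
    """
    lo, hi = theta - delta, theta + delta
    if e <= lo:
        return 0.0
    if e >= hi:
        return 1.0
    return (e - lo) / (hi - lo)

print(autoequiv_3seg(0.2), autoequiv_3seg(0.5), autoequiv_3seg(0.9))
# prints 0.0 0.5 1.0
```

A steeper window (smaller delta) approaches a hard comparator, while a wider one approximates a smooth activation; in hardware this trade-off is what the transistor count and conversion time of the cell reflect.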
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 7:</head><p>The results of the simulation of the MPHAM based on the MMM. At the same time, the development of general theoretical approaches to the construction of correctors with arbitrary types of nonlinear transformations is an interesting direction for our further research <ref type="bibr" target="#b42">[43]</ref>. The simulation results are also shown in figure <ref type="figure" target="#fig_6">8</ref>. The sets of associated images (above) and the sets of input images damaged by interference, in the form of symbols (letters), together with the corresponding hetero-associated output images formed by the model (below), testify to the correct functioning of the models; see figure <ref type="figure" target="#fig_6">8</ref>. Additional experiments showed that the modeled variants of the MHAM implementation tolerate damage of the code vectors by interference within acceptable limits without violating the correct functioning of the MHAM. These additional aspects of the obtained results will be presented in the conference report and are partially shown in figure <ref type="figure" target="#fig_6">8</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head><p>Within the framework of this work the set goal was achieved: the principles of implementing multi-port heteroassociative memory on the basis of equivalence models and arrays of vector-matrix multipliers, supplemented with simple components for element-wise special nonlinear transformations that simulate activation functions, were described and modeled. At the same time, some aspects important for the implementation of the proposed models, their circuit-level solutions and the further expansion of their areas of use remain unexplored. These include the implementation of physical models and their testing, and the measurement of characteristics and indicators for comparative analysis with other concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>As a result of the development and modeling of MHAMs operating on the basis of equivalence models, the feasibility of MHAM implementations based on such parallel hardware and software accelerators as vector-matrix multipliers and vector-matrix equivalentors (essentially two multipliers) was confirmed, provided that, in addition to the parallel execution of linear-algebraic procedures and operations, these accelerators are endowed with the ability to perform component-wise nonlinear transformations in parallel. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">References</head></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A view of fragments from Mathcad windows that describe the procedures (a) for entering a set of input vectors and forming a training sample from it, formulas (b) for coding and forming a matrix of training vectors, formulas (c) for analog-to-digital conversion and coding of a set of input images.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Represents the matrices corresponding to the input multivector and the first half of the training sample</figDesc><graphic coords="5,97.25,465.25,400.32,224.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Image of the second (initial) half of the training sample in the form of images of matrices (right) and formulas of vector-matrix equivalence and nonlinear transformations of hidden layer neuron signals (left).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Mathcad window with formulas for modeling the activation functions and threshold processing when forming the array of responses of the MHAM in the form of output vectors.</figDesc><graphic coords="6,104.75,168.04,385.25,276.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The results of the simulation of the MHAM based on the VME in the form of the resulting images and the difference (third from the left in the form of a zero matrix) image, confirming the successful association of all 100 vectors.</figDesc><graphic coords="6,99.30,483.90,390.00,231.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The results of the simulation of the MHAM based on the VMMs.</figDesc><graphic coords="7,112.25,73.50,370.85,273.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 8:</head><label>8</label><figDesc>Figure 8: Modeling results</figDesc><graphic coords="8,194.75,447.10,205.28,188.70" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Neural Network Design</title>
		<author><persName><forename type="first">Martin</forename><forename type="middle">T</forename><surname>Hagan</surname></persName></author>
		<author><persName><forename type="first">Howard</forename><forename type="middle">B</forename><surname>Demuth</surname></persName></author>
		<author><persName><forename type="first">Mark</forename><forename type="middle">Hudson</forename><surname>Beale</surname></persName></author>
		<imprint>
			<publisher>PWS Publishing Company</publisher>
			<biblScope unit="chapter">13</biblScope>
			<biblScope unit="page" from="13" to="14" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">An Introduction to Neural Networks</title>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">A</forename><surname>Anderson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<publisher>MIT Press</publisher>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="143" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Neural Networks and Learning Machines (3rd Edition)</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Haykin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Prentice Hall</publisher>
			<biblScope unit="page">906</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Nonlinear neural networks: Principles, mechanisms, and architectures</title>
		<author>
			<persName><forename type="first">S</forename><surname>Grossberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="17" to="61" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Modern Hopfield networks and attention for immune repertoire classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Widrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schäfl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pavlovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ramsauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gruber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Holzleitner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Brandstetter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Sandve</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Greiff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Klambauer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Ranzato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Hadsell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Balcan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="18832" to="18845" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A review of recurrent neural networks: LSTM cells and network architectures</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Si</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1162/neco_a_01199</idno>
		<ptr target="https://doi.org/10.1162/neco_a_01199" />
	</analytic>
	<monogr>
		<title level="j">Neural Comput</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="1235" to="1270" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Neural networks and physical systems with emergent collective computational abilities</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Hopfield</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proc. Natl. Acad. Sci. USA</title>
		<imprint>
			<date type="published" when="1982">1982</date>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page" from="2554" to="2558" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Silicon photonic neural networks</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Tait</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<pubPlace>Princeton</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Princeton University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Photonic multiply-accumulate operations for neural networks</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Nahmias</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE J. Sel. Top. Quantum Electron</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page">7701518</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Carpenter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Grossberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Reynolds</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="565" to="588" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A theoretical investigation into the performance of the Hopfield model</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V B</forename><surname>Aiyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Niranjan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fallside</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="204" to="215" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Efficient timetabling formulations for Hopfield neural networks</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Abramson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Duke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Smart Engineering System Design: Neural Networks, Fuzzy Logic, Evolutionary Programming, Data Mining, and Complex Systems</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Dagli</surname></persName>
		</editor>
		<imprint>
			<publisher>ASME Press</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1027" to="1032" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Modern Hopfield networks for few-and zero-shot reaction prediction</title>
		<author>
			<persName><forename type="first">P</forename><surname>Seidl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Renz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dyubankova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Neves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Verhoeven</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Wegner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Klambauer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2104.03279</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Self-Organization and Associative Memory</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kohonen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1988">1988</date>
			<publisher>Springer</publisher>
			<pubPlace>Berlin</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Constructing an associative memory</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kosko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Byte</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="137" to="144" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Bi-directional associative memories</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kosko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="49" to="60" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A Review of Associative Memory Models for Neural Networks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Johnson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1234" to="1246" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Bidirectional Associative Memory for Pattern Recognition in Deep Learning</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="2100" to="2113" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Capacities of multiconnected memory models</title>
		<author>
			<persName><forename type="first">D</forename><surname>Horn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Usher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Phys. France</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="389" to="395" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">High order correlation model for associative memory</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">C</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">Y</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Maxwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Lee</forename><surname>Giles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AIP Conference Proceedings</title>
				<imprint>
			<date type="published" when="1986">1986</date>
			<biblScope unit="volume">151</biblScope>
			<biblScope unit="page" from="86" to="99" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Storage capacity of kernel associative memories</title>
		<author>
			<persName><forename type="first">B</forename><surname>Caputo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Niemann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Artificial Neural Networks (ICANN)</title>
				<meeting>the International Conference on Artificial Neural Networks (ICANN)<address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="51" to="56" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">On a Model of Associative Memory with Huge Storage Capacity</title>
		<author>
			<persName><forename type="first">Mete</forename><surname>Demircigil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judith</forename><surname>Heusel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthias</forename><surname>Lowe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sven</forename><surname>Upgang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Franck</forename><surname>Vermet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Statistical Physics</title>
		<imprint>
			<biblScope unit="volume">168</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="288" to="299" />
			<date type="published" when="2017-07">July 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Unsupervised learning using pretrained cnn and associative memory bank</title>
		<author>
			<persName><forename type="first">Qun</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Supratik</forename><surname>Mukhopadhyay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Optical associative memory for high-order correlation patterns</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kiselyov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kulakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mikaelian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Shkitin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Opt. Eng</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="764" to="767" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">On information characteristics of Willshaw-like autoassociative memory</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Frolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Rachkovskij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Husek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Network World</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="141" to="157" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Multiport optical associative memory based on matrix-matrix equivalentors</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Magas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of SPIE</title>
				<meeting>SPIE<address><addrLine>Bellingham, WA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="volume">3055</biblScope>
			<biblScope unit="page" from="137" to="146" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">The concept models and implementations of multiport neural net associative memory for 2D patterns</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">I</forename><surname>Nikolskyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Yatskovskaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Yatskovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Optical Pattern Recognition XXII</title>
				<editor>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Casasent</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T.-H</forename><surname>Chao</surname></persName>
		</editor>
		<meeting><address><addrLine>Bellingham, WA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">8055</biblScope>
			<biblScope unit="page">80550T</biblScope>
		</imprint>
	</monogr>
	<note>Proceedings of SPIE</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Design and simulation of a multiport neural network heteroassociative memory for optical pattern recognitions</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Lazarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Grabovlyak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of SPIE</title>
				<meeting>SPIE<address><addrLine>Bellingham, WA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">8398</biblScope>
			<biblScope unit="page">83980N</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">On a model of associative memory with huge storage capacity</title>
		<author>
			<persName><forename type="first">M</forename><surname>Demircigil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Heusel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lowe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Upgang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vermet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Stat. Phys</title>
		<imprint>
			<biblScope unit="volume">168</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="288" to="299" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">A comparative study of sparse associative memories</title>
		<author>
			<persName><forename type="first">V</forename><surname>Gripon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Heusel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lowe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vermet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Statistical Physics</title>
		<imprint>
			<biblScope unit="volume">164</biblScope>
			<biblScope unit="page" from="105" to="129" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Associative memories to accelerate approximate nearest neighbor search</title>
		<author>
			<persName><forename type="first">V</forename><surname>Gripon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lowe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vermet</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1611.05898</idno>
		<imprint>
			<date type="published" when="2016-11">Nov 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Dense associative memory for pattern recognition</title>
		<author>
			<persName><forename type="first">D</forename><surname>Krotov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Hopfield</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Lee</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Sugiyama</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><forename type="middle">V</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1172" to="1180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Hopfield networks is all you need</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ramsauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schäfl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lehner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Seidl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Widrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gruber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Holzleitner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pavlovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Sandve</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Greiff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kreil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kopp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Klambauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Brandstetter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2008.02217</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Hopfield networks is all you need</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ramsauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schäfl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lehner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Seidl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Widrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gruber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Holzleitner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pavlovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Sandve</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Greiff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kreil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kopp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Klambauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Brandstetter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">9th International Conference on Learning Representations (ICLR)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Associative memory using dictionary learning and expander decoding</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mazumdar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Rawat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. AAAI&apos;17</title>
				<meeting>AAAI&apos;17</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="267" to="273" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Hardware implementation of associative memories based on multiple-valued sparse clustered networks</title>
		<author>
			<persName><forename type="first">N</forename><surname>Onizawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jarollahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hanyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Gross</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal on Emerging and Selected Topics in Circuits and Systems</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="13" to="24" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">The structures of Optical Neural Nets Based on New Matrix-Tensor Equivalental Models (MTEMS) and Results of Modeling</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optical Memory and Neural Networks (Information Optics)</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="31" to="38" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Design and simulation of optoelectronic neuron equivalentors as hardware accelerators of self-learning equivalent convolutional neural structures (SLECNS)</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Lazarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Nikitovich</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.2316352</idno>
	</analytic>
	<monogr>
		<title level="m">Neuro-inspired Photonic Computing</title>
				<imprint>
			<date type="published" when="2018-05-21">May 21, 2018</date>
			<biblScope unit="volume">106890</biblScope>
		</imprint>
	</monogr>
	<note>Proc. SPIE 10689</note>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars</title>
		<author>
			<persName><forename type="first">A</forename><surname>Shafiee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM SIGARCH Computer Architecture News</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="14" to="26" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">In-memory PageRank accelerator with a cross-point array of resistive memories</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.1109/TED.2020.2966908</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Electron Devices</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1466" to="1470" />
			<date type="published" when="2020-04">Apr. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Associative Memory and Pattern Recognition in Neuromorphic Computing</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cognitive and Developmental Systems</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="789" to="798" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Associative Memory with Neural Network Architectures for Image Recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="2100" to="2113" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Design and simulation of array cells for image intensity transformation and coding used in mixed image processors and neural networks</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Krasilenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Lazarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Nikitovich</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.2322655</idno>
		<ptr target="https://doi.org/10.1117/12.2322655" />
	</analytic>
	<monogr>
		<title level="m">Optics and Photonics for Information Processing XII</title>
				<imprint>
			<biblScope unit="page">1075119</biblScope>
		</imprint>
	</monogr>
	<note>Proc. SPIE 10751</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
