<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modeling of Multiport Heteroassociative Memory (MHAM) on the Basis of Equivalence Models Implemented on Vector-Matrix Multipliers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Volodymyr Saiko</string-name>
          <email>vgsaiko@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir Krasilenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Illia Chikov</string-name>
          <email>ilya95chikov@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diana Nikitovych</string-name>
          <email>diananikitovych@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Kyiv</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vinnytsia National Agrarian University</institution>
          ,
          <addr-line>st. Sonyachna, 3, Vinnytsia, 21008 Vinnytsia Oblast</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>76</fpage>
      <lpage>85</lpage>
      <abstract>
        <p>The work addresses the associative processing of information for the purpose of hardware and software implementation of the corresponding mathematical, simulation and physical models of associative memory. On the basis of a review of known publications, including those in which equivalence models of associative and heteroassociative memory were considered and their advantages were shown, the need for further research into such AM or HAM models and for simulation experiments aimed at finding their effective implementations is substantiated. Simulation results are given for two implementations of multi-port heteroassociative memory (MHAM) based on vector-matrix multipliers and vector-matrix equivalentors, which, as accelerators, are well matched to equivalence models. The results of modeling hetero-associative recognition of letter tuples, performed in Mathcad for the two MHAM implementations, confirm their correct functioning: at the output ports of the MHAM the models form the correct tuples of all 100 output letters pairwise associated with the 100 input letters.</p>
      </abstract>
      <kwd-group>
        <kwd>Multiport associative memory</kwd>
        <kwd>equivalence model</kwd>
        <kwd>vector-matrix multiplier</kwd>
        <kwd>heteroassociative memory</kwd>
        <kwd>neural network</kwd>
        <kwd>associative memory capacity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The last ten years have been marked by significant, almost radical, theoretical and practical
achievements in artificial intelligence research. The neurocomputer boom that began in the mid-1980s,
on the next turn of the spiral of development, swept almost the entire world by the early 1990s and has
hardly subsided even today; it continues to present scientists with new problems and challenges. A
significant increase in state support and an expansion of the sources and volumes of financing for
artificial intelligence and neurocomputing projects abroad led to the rapid formation of an entire
research field. The practical results of this priority area were the creation and mass production of
neurocomputers, neuroaccelerators (emulators) for PCs, neurochips and neural software, which today
are widely used in almost all areas of human activity. Many examples of the practical application of
neurotechnologies in defense and civil systems testify to the massive progress of neurotechnologies
and robotic intelligent systems. At the same time, it became clear that the complexity of creating new
advanced intelligent systems had been underestimated, since most promising applications require
processing large amounts of information, and in real time. For example, the latest models and
architectures of convolutional neural networks and their modifications have more than a hundred
layers. That is why a trend of transition from software emulation to software-hardware
implementation of neural network models has emerged and is already partially realized today. But
despite the dramatic increase in the number of neurochips and accelerators for neuromodels, hardware
implementations use either outdated and simplified models or traditional parallel computing systems.
New specific requirements for hardware implementations follow from the characteristic features of
the architectural solutions used in promising neural networks for particular applications. Such
neural networks are specific parallel computing structures with a large number of layers; they are
characterized by a large number of connections and an even larger number of parameters during
training. All this creates difficulties when processing data or signals in such models: the large
dimension of the vectors and the number of variable model parameters entail the need to store
large arrays and samples, a significant number of filters, activation maps, etc. The above allows
us to conclude that effective implementations should be focused primarily on fast, high-performance
processing, especially for tasks of associative processing of large-scale and multispectral
images.</p>
      <p>Thus, further research into the processes of natural intelligence, especially the associative
processing of information, remains urgent and relevant, including for the purpose of its hardware and
software implementation on the basis of the latest mathematical, simulation and physical models
of associative memory (AM) and of the most advanced biological and neurophysiological principles
of neurocybernetics. This relevance is driven by the expanded use of artificial intelligence in recent
decades and, based on it, of modern intelligent decision support systems, remote monitoring systems,
and robotic systems for recognizing and identifying images of the most diverse types in the most
diverse problem areas.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works and their analysis</title>
      <p>
        A number of neural models and networks implementing auto-associative and hetero-associative
memory (HAM) are known [
        <xref ref-type="bibr" rid="ref1 ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref2 ref20 ref21 ref22 ref3 ref4 ref5 ref6 ref7 ref8 ref9">1-25</xref>
        ]. But the analysis of a significant number of similar thematic
publications, including the above-mentioned [
        <xref ref-type="bibr" rid="ref11 ref12 ref13 ref16 ref17 ref18 ref21 ref22 ref23 ref24 ref25 ref4 ref5 ref8 ref9">7, 8, 11, 12, 14-16, 19-21, 24-28</xref>
        ], and newer and more
recent works [
        <xref ref-type="bibr" rid="ref26 ref27 ref28 ref29 ref30 ref31 ref32 ref35 ref36 ref37 ref38 ref39">29-35, 38-42</xref>
        ] shows that the capacity of such AM models, which is determined by the
number of memorized and successfully associated (recognized) images, does not exceed
(0.14-0.60)·N, where N is the number of neurons in the AM model [
        <xref ref-type="bibr" rid="ref16 ref18 ref19 ref26 ref27">19, 21, 22, 29, 30</xref>
        ]. At the same time,
even more than 10-15 years ago, works [
        <xref ref-type="bibr" rid="ref23 ref24 ref25">26, 27, 28</xref>
        ] appeared, in which the so-called equivalence
models (EMs) of neural networks (NNs) and AMs (HAMs) were proposed and studied by the
authors; these works experimentally confirmed their significant advantages over other known
models and neural paradigms, especially regarding capacity and more convenient
mapping onto hardware parallel processors [
        <xref ref-type="bibr" rid="ref35 ref36 ref37">38-40</xref>
        ].
      </p>
      <p>
        The capacity of such AM equivalence models (AM_EMs) can be at least 4-5 times higher than
N, and partial model experiments allow us to assert that their capacity is even orders of
magnitude higher. In addition, AM_EMs tolerate a significantly greater power of the interference
that distorts and changes the input images, while still forming qualitatively correct
associative responses. All this motivates further research into such AM/HAM_EMs in terms of finding their
effective implementations. In works [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref34">26-28, 37</xref>
        ], four equivalence models were studied and modeled,
and it was concluded that such models can be successfully applied to build not only
single-port but also multi-port, more general, HAMs for processing and recognizing patterns that are
strongly correlated and significantly damaged by interference. Some possible hardware
implementations of such equivalence models, modified for processing 1-D or 2-D images, were
proposed and highlighted in works [
        <xref ref-type="bibr" rid="ref34 ref35">37, 38</xref>
        ]. But most of these proposals related only to single-port
AM implementations based on the use of optical systems with spatial and time-pulse integration [
        <xref ref-type="bibr" rid="ref34 ref35">37,
38</xref>
        ]. Such implementations are purely analog and therefore do not allow a significant increase in
the dynamic range of the values of the processed signals, in the accuracy of the calculation procedures,
or, especially, in the dimensions of the images (vectors or matrices) stored in the AM.
      </p>
      <p>
        In addition, in the same works, on the basis of the models, processing procedures and their description,
it was determined that applying the most generalized equivalence models to the
implementation of multiport HAM (MHAM) requires vector-matrix or matrix-tensor procedures
with equivalence (non-equivalence) operations, or specialized accelerator devices, the so-called
vector-matrix equivalentors (VMEs). Since, as shown by the authors of [
        <xref ref-type="bibr" rid="ref34">37</xref>
        ], VMEs can be
implemented on the basis of two vector-matrix multipliers (VMMs), high-performance,
high-speed specialized vector-matrix computers and matrix and systolic processors can be used for the
construction of MHAM.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Formulation of Tasks</title>
      <p>The argumentation and conclusions from the given review and analysis of publications make it
possible to justify, as one of the urgent and important tasks, the need to develop, simulate and verify
such equivalence models and implementations of MHAM as would structurally best correspond
to already existing parallel computers, such as VMMs or VMEs, and could be implemented most
simply, with minimal additional nodes or components, in the shortest possible time. A
secondary task is to evaluate the possible characteristics and indicators of such models.</p>
    </sec>
    <sec id="sec-4">
      <title>4. The results of the study of EM MHAM on the basis of VMMs and VMEs</title>
      <p>
        The theoretical foundations of the design of MHAM based on EMs were developed and given in
works [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref34 ref35">26-28, 37, 38</xref>
        ]. Therefore, here, in order to verify such models and the quality of
functioning of the proposed MHAM based on them, we consider only aspects of the modeling of the
above-mentioned objects and analyze the obtained results. Let us note the advantages of this
concept and of EMs. Namely, the use of such generalized neurobiological operations (continuous logic)
[
        <xref ref-type="bibr" rid="ref34 ref35">37, 38</xref>
        ] as "equivalence" and "non-equivalence", "auto-equivalence" and "auto-non-equivalence"
for building models of neural networks and associative memory made it possible to use the dualism
of these generalized complementary operations and better recognize even contrast-inverted images,
use different types of non-linear transformations. The authors of the concept showed that such
equivalence models are more general (Hamming networks, Hopfield networks are their special cases)
and allow to describe and model in them along with excitatory synaptic connections and inhibitory
weighting coefficients, and moreover to use for this purpose not only bipolar coding of signals and
data, as well as unipolar. And this simplifies the possible options for implementing models, including
by reducing the range of power sources. But in these works, attention was mainly paid to the AM
models themselves and only some of their implementations, and the results of machine simulation
were not reflected [
        <xref ref-type="bibr" rid="ref34 ref35">37, 38</xref>
        ] from the point of view of the goal set here. Therefore, in this work,
we consider AM/HAM equivalence models, namely aspects of their implementations, in order to
show their adequacy and advantages. With the obtained results of computer simulation, we also aim
to attract component-base developers to the new architectural solutions of multi-port associative
memory proposed by us on the basis of these new models, and to determine the system requirements
for the component base of promising implementations.
      </p>
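      <p>To make these operations concrete, the following minimal NumPy sketch states the continuous-logic equivalence and non-equivalence of unipolar signals in [0, 1] in the form eq(a, b) = min(a, b) + min(1 - a, 1 - b); this is our reading of the definitions used in [37, 38] and is given for illustration only.</p>
      <preformat>
# A minimal sketch of the continuous-logic operations assumed by the
# equivalence models (unipolar signals in [0, 1]); the definitions follow
# our reading of [37, 38].
import numpy as np

def eq(a, b):
    """Element-wise continuous-logic equivalence: 1 when a equals b."""
    return np.minimum(a, b) + np.minimum(1 - a, 1 - b)

def neq(a, b):
    """Complementary non-equivalence: 1 when a and b differ maximally."""
    return 1 - eq(a, b)

a = np.array([0.0, 1.0, 0.3])
b = np.array([0.0, 0.0, 0.7])
print(eq(a, b))   # [1.  0.  0.6]
print(neq(a, b))  # [0.  1.  0.4]
      </preformat>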
      <p>For simulation modeling and experiments in Mathcad, we created a training sample
of mutually connected (associated with each other) images in the form of vectors, which
are elements of some set. For better visibility, visualization and faster perception and
comparison, in one of the experiments the code vectors of ASCII letter symbols were selected as
images, and mostly letters were used. Each code vector of a letter, whether an input one or one from
the training set, is a four-fold repetition of an eight-bit binary Gray code (byte), the numerical
equivalent of which is a number in the range 0 to 255, i.e. the character code. In the procedures for
converting traditional binary codes into binary Gray codes, we used models based on equivalence
(non-equivalence) operations vectorized in Mathcad. We used the four-fold repetition of the bytes
of the code vectors to increase the dimensionality of the image vectors and to be able to check the
immunity to interference during pattern recognition in the MHAM over an extended
range of interference power, which is proportional to the number of changed bits in the code vectors.</p>
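      <p>A minimal sketch of this encoding follows; the standard binary-reflected Gray code g = b XOR (b shifted right by one) is assumed here, while the paper performs the same conversion in Mathcad through vectorized equivalence (non-equivalence) operations, of which the bitwise XOR is the binary special case. The function name below is ours.</p>
      <preformat>
# Encode one character as a 32-component bit vector: the 8-bit binary
# Gray code of its character code, repeated four times.
import numpy as np

def char_to_code_vector(ch, repeats=4):
    code = ord(ch) % 256               # character code, 0..255
    gray = code ^ (code >> 1)          # binary-reflected Gray code
    bits = [(gray >> (7 - i)) % 2 for i in range(8)]
    return np.tile(bits, repeats)      # four-fold repetition -> 32 bits

v = char_to_code_vector('A')
print(v.shape, v[:8])                  # (32,) and the Gray byte of 'A'
      </preformat>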
      <p>The procedure for entering 128 characters (a fragment with some of the various letters or symbols
entered) is shown in figure 1 (a). Each character is coded by a byte in accordance with the accepted
coding system. The same fragment shows the procedure for entering the 100 English letters or symbols
that correspond to the 100 input ports, each of which is supplied with the code vector of the
corresponding letter or symbol. To form pairs of hetero-associated images, we matched each letter
from the created 100-letter set with the next letter using a cyclic shift. Figure 2 shows a 100×32-
element binary two-gradation image (the INPX matrix) and the contrast-inverted image (the INPXN
matrix) corresponding to the 100 inputs of the MHAM. Figure 2 also shows, in the form of two binary
matrices TX and TXN with dimensions of 32×128 elements, the 128 vectors of the training sample,
into which 128 different symbols and letters from the PC keyboard are entered after their
transformation into 32-component bit vectors.</p>
      <p>Using the formulas given at the bottom of figure 2, we formed a set of 128 associated training
vectors in the form of the TY and TYN matrices, one for each of the 128 training vectors (the TX
and TXN matrices); they are shown transposed in figure 3. That is, the first letter from the TX set was
matched with the second letter, each subsequent letter was matched with the next one, and the last
letter was matched with the first letter. Figure 3 (left) shows the formulas that we used to simulate
the procedures for finding the necessary matrices (HN and HNN) in the first step using a set of two
arrays of vector-matrix multipliers (VMMs).</p>
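      <p>The first step can be sketched as follows; the matrix names and shapes (TX of 32×128, INPX of 100×32, HN and HNN of 100×128) follow figures 2 and 3, while the random training sample and the use of NumPy matrix products in place of the two VMM arrays are illustrative assumptions.</p>
      <preformat>
# First step on two VMM arrays: the normalized equivalence of every input
# vector with every training vector, computed as two vector-matrix products.
import numpy as np

rng = np.random.default_rng(0)
TX = rng.integers(0, 2, size=(32, 128))   # training sample (columns = vectors)
TY = np.roll(TX, -1, axis=1)              # pair each vector with the next one (cyclic shift)
INPX = TX[:, :100].T                      # 100 input ports, one code vector each

N = TX.shape[0]
HN  = (INPX @ TX + (1 - INPX) @ (1 - TX)) / N   # "similarity" metric
HNN = 1 - HN                                    # complementary "dissimilarity"
print(HN.shape)   # (100, 128); HN[i, i] == 1 for undistorted inputs
      </preformat>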
      <p>These matrices contain the normalized equivalences and non-equivalences of the compared vectors
and are new metrics that complementarily reflect "similarity" and "dissimilarity", essentially
"distance". Fragments of windows with the results of modeling the procedures for calculating the
output matrices (OUTY and OUTYN) of normalized equivalences and non-equivalences in the second
step are also shown there. In addition, figure 3 shows the formulas for the nonlinear transformation of
the signals of the hidden-layer neurons with the nonlinearity coefficient γ, which correspond to the
component-wise nonlinear transformations of the NHN and NHNN matrices.</p>
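      <p>A hedged sketch of these two operations is given below: the power-law form of the nonlinearity with coefficient γ and the normalization over the training vectors are our assumptions about one workable choice, not a reproduction of the exact Mathcad formulas of figure 3.</p>
      <preformat>
import numpy as np

def sharpen(H, gamma=21):
    """Component-wise nonlinear transformation of the hidden layer with
    nonlinearity coefficient gamma; a power-law form is assumed here."""
    return H ** gamma

def recall(HN, TY, gamma=21):
    """Second step: the sharpened similarities weight the associated
    training vectors TY (as in the previous sketch), so each output row
    approaches the stored partner vector as gamma grows."""
    NHN = sharpen(HN, gamma)
    W = NHN / NHN.sum(axis=1, keepdims=True)  # normalize over training vectors
    return W @ TY.T                           # (100, 32) analog output signals
      </preformat>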
      <p>As can be seen from figure 4, the use of operations and vectorized transformations that reproduce
the threshold component-wise processing of sub-vectors and the formation of an array of output
feedback vectors at the output of the BGAP, with their subsequent transformation into output
letters-symbols, made it possible to explain the process and to form a tuple of output images.</p>
      <p>The input and the generated output letter tuples are shown at the bottom of figure 4. Thus, the
obtained simulation results confirm the correct functioning of the MHAM, as a tuple of 100 output
letters paired with 100 input letters is formed at the output ports of the MHAM. As can be seen, all
100 letters at the output of the MHAM are recognized in accordance with the hetero-associations
formed by the training sample, both when the MHAM model based on VMEs is used (the results are
shown in figure 5) and when the MHAM model based on VMMs is used (the results are shown in
figure 6). The formed tuples of symbols-letters, which are also displayed in figure 5, testify to the
successful reproduction of all associated pairs, namely the formation, for each letter of the input tuple,
of the corresponding next one. Additional experiments showed that the modeled implementation
options of the MHAM tolerate damage to the code vectors by interference within the permissible
limits (30-35%) without violating the correct functioning of the MHAM. These additional aspects of
the obtained results will be presented in the report.</p>
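      <p>For reproducing this behaviour in miniature, the following self-contained script chains the steps sketched above (Gray-code encoding, normalized equivalence, nonlinear sharpening, thresholding, decoding) on a 26-letter tuple; the helper names and the value γ = 21 are illustrative assumptions, not the exact Mathcad setup.</p>
      <preformat>
import numpy as np

def gray_bits(code):
    """8-bit binary-reflected Gray code of a character code, MSB first."""
    g = code ^ (code >> 1)
    return [(g >> (7 - i)) % 2 for i in range(8)]

def ungray(bits):
    """Invert the Gray code of one 8-bit group back to the character code."""
    b, n = 0, 0
    for g in bits:
        b = b ^ g
        n = n * 2 + b
    return n

letters = [chr(c) for c in range(65, 91)]                          # 'A'..'Z'
TX = np.array([np.tile(gray_bits(ord(c)), 4) for c in letters]).T  # 32 x 26
TY = np.roll(TX, -1, axis=1)                                       # next-letter pairs

HN = (TX.T @ TX + (1 - TX.T) @ (1 - TX)) / 32                # normalized equivalences
W = HN ** 21 / (HN ** 21).sum(axis=1, keepdims=True)         # sharpened weights
OUT = (W @ TY.T).round().astype(int)                         # threshold at 0.5
print(''.join(chr(ungray(list(r[:8]))) for r in OUT))        # 'BCD...ZA'
      </preformat>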
      <p>In addition, the results of the performed simulations show that for the correct functioning and
improvement of the functional capabilities and indicators (capacity!) of the MHAM, especially in
situations of associating noisy images and images with a strong mutual correlation, it is desirable
to use equivalence models (EMs), which not only better describe, taking their duality into account, the
processes of image comparison and the determination of specific normalized metrics, but also map
more easily onto parallel processing structures such as matrix-matrix or matrix-tensor multipliers. The
additional components, without which the advantages of such models cannot be achieved, should be
arrays of elements that implement component-wise nonlinear transformations of signals (intensities of
image pixels or numerical values of matrix elements).</p>
      <p>
        Moreover, the latter are basic operations in the most promising paradigms of convolutional neural
networks with adaptive self-learning [
        <xref ref-type="bibr" rid="ref35">38</xref>
        ]. However, a review of the mathematical operators implemented
by neurons allows us to draw the following conclusion. Almost all models [
        <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref4 ref6">7, 9, 17-23</xref>
        ], with rare
exceptions, use mathematical models of neurons that reduce to two main
mathematical component-operators: the first component calculates a function of two vectors (most
often a simple weighted sum), and the second component corresponds to a nonlinear
transformation of the output value of the first component into the output signal. Many works are
devoted to the design of hardware devices that implement neuron activation functions, but they do not
consider the design of autoequivalence transformation functions [
        <xref ref-type="bibr" rid="ref35">38</xref>
        ] for EM [
        <xref ref-type="bibr" rid="ref10 ref11 ref19 ref9">12-14, 22</xref>
        ]. It is desirable to design
nodes that would implement the entire set of the most common arbitrary types of nonlinear
transformations [
        <xref ref-type="bibr" rid="ref40">43</xref>
        ]. Due to space limitations, we do not provide links here. We partially solved the issue of
the simplest approximations of autoequivalence functions (a three-component approximation with a
floating threshold) and modeled the developed cell schemes. The basic cell of this approximation
consists of only 18-20 transistors and allows operation with a conversion time of 1 to 2.5 μs. The
diagram of the cell and the window with the results of its simulation are shown in figure 7.
      </p>
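      <p>A software counterpart of such a cell can be sketched as a three-segment piecewise-linear function with a floating threshold; the specific form below (zero below θ - δ, a linear ramp across the threshold region, saturation at one above θ + δ) is our assumption and only illustrates the behaviour being approximated.</p>
      <preformat>
import numpy as np

def pwl_autoequiv(e, theta=0.5, delta=0.15):
    """Three-segment piecewise-linear approximation of an autoequivalence
    activation with a floating threshold theta (our reading of the cell in
    figure 7): 0 below theta - delta, a linear ramp across the threshold
    region, and 1 above theta + delta."""
    return np.clip((e - (theta - delta)) / (2 * delta), 0.0, 1.0)

e = np.linspace(0, 1, 11)
print(pwl_autoequiv(e))             # saturates at 0 and 1, linear near theta
print(pwl_autoequiv(e, theta=0.7))  # the threshold "floats" with theta
      </preformat>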
      <p>
        At the same time, the development of general theoretical approaches to the construction of
correctors with any types of nonlinear transformations can be an interesting direction of our further
research [
        <xref ref-type="bibr" rid="ref40">43</xref>
        ]. The simulation results are also shown in figure 8. The sets of associated images (above) and
the sets of input images in the form of symbols (letters) damaged by interference, together with the
corresponding hetero-associated output images formed by the model (below), testify to the correct
functioning of the models; see figure 8. Additional experiments showed that the modeled variants of
the MHAM implementation tolerate damage to the code vectors by interference within acceptable
limits without violating the correct functioning of the MHAM. These additional aspects of the
obtained results are partially shown in figure 8 and will be presented in the report.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>Within the framework of this work, the set goal was achieved: the principles of
implementing multi-port heteroassociative memory on the basis of equivalence models and arrays of
vector-matrix multipliers, which additionally contain some simple components for element-by-element
special nonlinear transformations simulating activation functions, were described and modeled. At
the same time, some aspects important for the implementation of the proposed models, their
circuit-technical solutions and the further expansion of their spheres of use remained unexplored. These
include the implementation of physical models and their testing, and the measurement of
characteristics and indicators for the purpose of comparative analysis with other concepts.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>As a result of the development and modeling of MHAMs operating on the basis of equivalence
models, the possibilities of MHAM implementations based on such hardware and software accelerators
with parallel processing as vector-matrix multipliers and vector-matrix equivalentors (essentially two
multipliers) were confirmed; in addition to the parallel execution of linear-algebraic
procedures-operations, these accelerators would be endowed with the ability to perform
component-wise nonlinear transformations in parallel.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>7. References</title>
      <ref id="ref0a">
        <mixed-citation>[1] M. T. Hagan, H. B. Demuth and M. H. Beale, “Neural Network Design”, PWS Publishing Company, 1996, Chapter 13, pp. 13-1 to 13-37.</mixed-citation>
      </ref>
      <ref id="ref0b">
        <mixed-citation>[2] J. A. Anderson, “An Introduction to Neural Networks”, MIT Press, 1997, Chapters 6-7, pp. 143-208.</mixed-citation>
      </ref>
      <ref id="ref0c">
        <mixed-citation>[3] S. O. Haykin, “Neural Networks and Learning Machines (3rd Edition)”, Prentice Hall, 2009, 906 p.</mixed-citation>
      </ref>
      <ref id="ref1">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          “
          <article-title>Nonlinear neural networks: Principles, mechanisms, and architectures</article-title>
          ,” Neural Networks,
          <year>1988</year>
          , Vol.
          <volume>1</volume>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Widrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schäfl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pavlovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ramsauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Holzleitner</surname>
          </string-name>
          , J. Brandstetter,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Sandve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Greiff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Klambauer</surname>
          </string-name>
          .
          <article-title>Modern Hopfield networks and attention for immune repertoire classification</article-title>
          . In H. Larochelle,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ranzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hadsell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Balcan</surname>
          </string-name>
          , and H. Lin, editors,
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>33</volume>
          , pages
          <fpage>18832</fpage>
          -
          <lpage>18845</lpage>
          . Curran Associates, Inc.,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Si</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          , J.,
          <year>2019</year>
          .
          <article-title>A review of recurrent neural networks: LSTM cells and network architectures</article-title>
          .
          <source>Neural Comput</source>
          .
          <volume>31</volume>
          ,
          <fpage>1235</fpage>
          -
          <lpage>1270</lpage>
          . https://doi.org/10.1162/neco_a_01199.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Hopfield</surname>
            ,
            <given-names>J.J.:</given-names>
          </string-name>
          <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>
          .
          <source>Proc. Natl. Acad. Sci</source>
          .
          <volume>79</volume>
          ,
          <fpage>2554</fpage>
          -
          <lpage>2558</lpage>
          (
          <year>1982</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Tait</surname>
            ,
            <given-names>A. N.</given-names>
          </string-name>
          <article-title>Silicon photonic neural networks</article-title>
          .
          <source>PhD thesis</source>
          , Princeton University, Princeton,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Nahmias</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          et al.
          <article-title>Photonic multiply-accumulate operations for neural networks</article-title>
          .
          <source>IEEE J. Sel. Top. Quantum Electron</source>
          .
          <volume>26</volume>
          ,
          <issue>7701518</issue>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.A.</given-names>
            <surname>Carpenter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.H.</surname>
          </string-name>
          <article-title>Reynolds “ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network</article-title>
          ,
          <source>” Neural Networks</source>
          ,
          <year>1991</year>
          , Vol.
          <volume>4</volume>
          , pp.
          <fpage>565</fpage>
          -
          <lpage>588</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Aiyer</surname>
            ,
            <given-names>S.V.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niranjan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Fallside</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>1990</year>
          )
          <article-title>A theoretical investigation into the performance of the Hopfield model</article-title>
          .
          <source>IEEE Transactions on Neural Networks</source>
          ,
          <volume>1</volume>
          ,
          <fpage>204</fpage>
          -
          <lpage>215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>K.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abramson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Duke</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>1999</year>
          )
          <article-title>Efficient timetabling formulations for Hopfield neural networks</article-title>
          .
          <source>In: C. Dagli et al. (eds.), Smart Engineering System Design: Neural Networks, Fuzzy Logic, Evolutionary Programming, Data Mining, and Complex Systems</source>
          , vol.
          <volume>9</volume>
          . ASME Press, pp.
          <fpage>1027</fpage>
          -
          <lpage>1032</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Seidl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Renz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Dyubankova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Neves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Verhoeven</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. K.</given-names>
            <surname>Wegner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Klambauer</surname>
          </string-name>
          .
          <article-title>Modern Hopfield networks for few- and zero-shot reaction prediction</article-title>
          .
          <source>ArXiv</source>
          , 2104.03279,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [14]
          <string-name>
            <surname>T. Kohonen</surname>
          </string-name>
          , (
          <year>1988</year>
          )
          <article-title>Self-Organization and Associative Memory</article-title>
          . Springer, Berlin.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kosko</surname>
          </string-name>
          “
          <article-title>Constructing an associative memory</article-title>
          ,” Byte,
          <year>1987</year>
          , Vol.
          <volume>12</volume>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kosko</surname>
          </string-name>
          “
          <article-title>Bi-directional associative memories</article-title>
          ,
          <source>” IEEE Transactions on Systems, Man and Cybernetics</source>
          .
          <year>1988</year>
          , Vol.
          <volume>18</volume>
          , No 1, pp.
          <fpage>49</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>“A Review of Associative Memory Models for Neural Networks</article-title>
          .
          <source>” IEEE Transactions on Neural Networks and Learning Systems</source>
          ,
          <volume>34</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1234</fpage>
          -
          <lpage>1246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>“Bidirectional Associative Memory for Pattern Recognition in Deep Learning</article-title>
          .
          <source>” IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>43</volume>
          (
          <issue>9</issue>
          ),
          <fpage>2100</fpage>
          -
          <lpage>2113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Horn</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Usher</surname>
          </string-name>
          .
          <article-title>Capacities of multiconnected memory models</article-title>
          .
          <source>J. Phys. France</source>
          ,
          <volume>49</volume>
          (
          <issue>3</issue>
          ):
          <fpage>389</fpage>
          -
          <lpage>395</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. C.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Maxwell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. Lee</given-names>
            <surname>Giles</surname>
          </string-name>
          .
          <article-title>High order correlation model for associative memory</article-title>
          .
          <source>AIP Conference Proceedings</source>
          ,
          <volume>151</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>86</fpage>
          -
          <lpage>99</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Caputo</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Niemann</surname>
          </string-name>
          .
          <article-title>Storage capacity of kernel associative memories</article-title>
          .
          <source>In Proceedings of the International Conference on Artificial Neural Networks (ICANN)</source>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>56</lpage>
          , Berlin, Heidelberg,
          <year>2002</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Mete</given-names>
            <surname>Demircigil</surname>
          </string-name>
          , Judith Heusel, Matthias Lowe, Sven Upgang, and
          <string-name>
            <given-names>Franck</given-names>
            <surname>Vermet</surname>
          </string-name>
          .
          <article-title>On a Model of Associative Memory with Huge Storage Capacity</article-title>
          .
          <source>Journal of Statistical Physics</source>
          ,
          <volume>168</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>288</fpage>
          -
          <lpage>299</lpage>
          ,
          <year>July 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Qun</given-names>
            <surname>Liu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Supratik</given-names>
            <surname>Mukhopadhyay</surname>
          </string-name>
          .
          <article-title>Unsupervised learning using pretrained cnn and associative memory bank</article-title>
          .
          <source>In 2018 International Joint Conference on Neural Networks (IJCNN)</source>
          , pp.
          <fpage>01</fpage>
          -
          <lpage>08</lpage>
          . IEEE,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Kiselyov</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kulakov</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikaelian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shkitin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          “
          <article-title>Optical associative memory for high-order correlation patterns”,</article-title>
          <source>Opt. Eng.</source>
          , Vol.
          <volume>31</volume>
          , № 4, pp.
          <fpage>764</fpage>
          -
          <lpage>767</lpage>
          (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Frolov</surname>
            <given-names>A.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rachkovskij</surname>
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Husek</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>On information characteristics of Willshaw-like autoassociative memory</article-title>
          .
          <source>Neural Network World</source>
          .
          <year>2002</year>
          . Vol.
          <volume>12</volume>
          , N. 2. P.
          <volume>141</volume>
          -
          <fpage>157</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Magas</surname>
          </string-name>
          , “
          <article-title>Multiport optical associative memory based on matrix-matrix equivalentors</article-title>
          ,”
          <source>in Proceedings of SPIE</source>
          Vol.
          <volume>3055</volume>
          ,
          Bellingham
          , WA,
          <year>1997</year>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. I. Nikolskyy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Yatskovskaya</surname>
          </string-name>
          , and
          <string-name>
            <surname>V. I. Yatskovsky</surname>
          </string-name>
          , “
          <article-title>The concept models and implementations of multiport neural net associative memory for 2D patterns,” in Optical Pattern Recognition XXII</article-title>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Casasent</surname>
          </string-name>
          and T.-H. Chao, Eds.,
          <source>Proceedings of SPIE</source>
          Vol.
          <volume>8055</volume>
          ,
          Bellingham
          , WA,
          <year>2011</year>
          , p.
          <fpage>80550T</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Lazarev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Grabovlyak</surname>
          </string-name>
          , “
          <article-title>Design and simulation of a multiport neural network heteroassociative memory for optical pattern recognitions</article-title>
          ,”
          <source>in Proceedings of SPIE</source>
          Vol.
          <volume>8398</volume>
          ,
          Bellingham
          , WA,
          <year>2012</year>
          , p.
          <fpage>83980N</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Demircigil</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heusel</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lowe</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Upgang</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermet</surname>
            <given-names>F.</given-names>
          </string-name>
          .
          <article-title>On a model of associative memory with huge storage capacity</article-title>
          .
          <source>J. Stat. Phys</source>
          .
          <year>2017</year>
          . Vol.
          <volume>168</volume>
          , N 2. P.
          <volume>288</volume>
          -
          <fpage>299</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Gripon</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heusel</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lowe</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermet</surname>
            <given-names>F.</given-names>
          </string-name>
          <article-title>A comparative study of sparse associative memories</article-title>
          .
          <source>Journal of Statistical Physics</source>
          .
          <year>2016</year>
          . Vol.
          <volume>164</volume>
          . P.
          <volume>105</volume>
          -
          <fpage>129</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Gripon</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lowe</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermet</surname>
            <given-names>F</given-names>
          </string-name>
          .
          <article-title>Associative memories to accelerate approximate nearest neighbor search</article-title>
          .
          <source>ArXiv:1611.05898. 10 Nov</source>
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>D.</given-names>
            <surname>Krotov</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Hopfield</surname>
          </string-name>
          .
          <article-title>Dense associative memory for pattern recognition</article-title>
          . In D. D.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Sugiyama</surname>
            ,
            <given-names>U. V.</given-names>
          </string-name>
          <string-name>
            <surname>Luxburg</surname>
            ,
            <given-names>I. Guyon</given-names>
          </string-name>
          , and R. Garnett, editors,
          <source>Advances in Neural Information Processing Systems</source>
          , pages
          <fpage>1172</fpage>
          -
          <lpage>1180</lpage>
          . Curran Associates, Inc.,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ramsauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schäfl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Seidl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Widrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Holzleitner</surname>
          </string-name>
          , M. Pavlovic,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Sandve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Greiff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kreil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopp</surname>
          </string-name>
          , G. Klambauer,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brandstetter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          .
          <article-title>Hopfield networks is all you need</article-title>
          .
          <source>ArXiv</source>
          , 2008.02217,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ramsauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schäfl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Seidl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Widrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Holzleitner</surname>
          </string-name>
          , M. Pavlovic,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Sandve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Greiff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kreil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopp</surname>
          </string-name>
          , G. Klambauer,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brandstetter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          .
          <article-title>Hopfield networks is all you need</article-title>
          .
          <source>In 9th International Conference on Learning Representations (ICLR)</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Mazumdar</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rawat</surname>
            <given-names>A.S.</given-names>
          </string-name>
          <article-title>Associative memory using dictionary learning and expander decoding</article-title>
          .
          <source>Proc. AAAI'17</source>
          .
          <year>2017</year>
          . P.
          <volume>267</volume>
          -
          <fpage>273</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [36]
          <string-name>
            <surname>Onizawa</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jarollahi</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanyu</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gross</surname>
            <given-names>W.J.</given-names>
          </string-name>
          <article-title>Hardware implementation of associative memories based on multiple-valued sparse clustered networks</article-title>
          .
          <source>IEEE Journal on Emerging and Selected Topics in Circuits and Systems</source>
          .
          <year>2016</year>
          . Vol.
          <volume>6</volume>
          , N.
          <issue>1</issue>
          . P.
          <volume>13</volume>
          -
          <fpage>24</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          , “
          <article-title>The structures of Optical Neural Nets Based on New Matrix - Tensor Equivalental Models (MTEMS) and Results of Modeling,” Optical Memory and Neural Networks (Information Optics)</article-title>
          , vol.
          <volume>19</volume>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Lazarev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. V.</given-names>
            <surname>Nikitovich</surname>
          </string-name>
          , “
          <article-title>Design and simulation of optoelectronic neuron equivalentors as hardware accelerators of self-learning equivalent convolutional neural structures (SLECNS),”</article-title>
          <source>in Proc. SPIE 10689</source>
          ,
          <source>Neuro-inspired Photonic Computing</source>
          , 106890C, May 21,
          <year>2018</year>
          , doi: 10.1117/12.2316352.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [39]
          <string-name>
            <surname>Shafiee</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.
          <article-title>ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars</article-title>
          .
          <source>ACM SIGARCH Computer Architecture N. 44</source>
          ,
          <fpage>14</fpage>
          -
          <lpage>26</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          et al.,
          <article-title>“In-memory PageRank accelerator with a cross-point array of resistive memories,”</article-title>
          <source>IEEE Trans. Electron Devices</source>
          , vol.
          <volume>67</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>1466</fpage>
          -
          <lpage>1470</lpage>
          , Apr.
          <year>2020</year>
          , doi: 10.1109/TED.2020.2966908.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , et al. (
          <year>2022</year>
          ).
          <article-title>"Associative Memory and Pattern Recognition in Neuromorphic Computing."</article-title>
          <source>IEEE Transactions on Cognitive and Developmental Systems</source>
          ,
          <volume>14</volume>
          (
          <issue>3</issue>
          ),
          <fpage>789</fpage>
          -
          <lpage>798</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [42]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>"Associative Memory with Neural Network Architectures for Image Recognition."</article-title>
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>43</volume>
          (
          <issue>9</issue>
          ),
          <fpage>2100</fpage>
          -
          <lpage>2113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Krasilenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Lazarev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. V.</given-names>
            <surname>Nikitovich</surname>
          </string-name>
          , “
          <article-title>Design and simulation of array cells for image intensity transformation and coding used in mixed image processors and neural networks</article-title>
          ,
          <source>” Proc. SPIE 10751, Optics and Photonics for Information Processing XII</source>
          ,
          <volume>1075119</volume>
          https://doi.org/10.1117/12.2322655.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>