<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using the Elman neural network as an identity map in defect detection task</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>ymyr Kh</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>tskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Oles Honchar Dnipro National University</institution>
          ,
          <addr-line>Gagarin Avenue, 72, Dnipro, 49010</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This article presents the results of a study of the Elman neural network used to identify defects (delaminations) in composite materials and to estimate their area. The registration zone is a square matrix consisting of 100 elements. For training, we used images with different defect spot areas, moved along the matrix and additively mixed with Gaussian noise of various intensities. The network structure that is optimal by the criterion of testing error/training time is determined. Testing showed that when the defect spot area changes by more than 8 times and the noise level is 27%, the test error reaches 6%. This error decreases significantly when the range of defect area variation is narrowed and the noise intensity is reduced. A block diagram of the corresponding intelligent system is proposed.</p>
      </abstract>
      <kwd-group>
        <kwd>delamination</kwd>
        <kwd>Elman neural network</kwd>
        <kwd>images</kwd>
        <kwd>distortions</kwd>
        <kwd>noise</kwd>
        <kwd>identity map</kwd>
        <kwd>intelligent system</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        At present, composite materials are widely used in a number of industries due to their
high mechanical and thermodynamic properties at low density [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Most of them are
laminated materials, so the most common defect in them is delamination. To identify
and evaluate parameters of such defects in composites, ultrasound diagnostics of
products is most often used. With two-sided access it is reasonable to use a shadow
testing method, with the radiating and sensing elements located on opposite sides of,
for example, the product wall. Delamination of the material is a barrier in the path of
the ultrasound.
      </p>
      <p>A composite material is heterogeneous in structure. The spatial arrangement of
reinforcing elements, such as fiber bundles, is often not ideal. Furthermore, in the
bulk of the material and on its surface, especially at phase boundaries, there are
accumulations of pores and various kinds of microdefects. Cracking of the composite
matrix and fiber breaks are observed. The concentration of microdefects is especially
high at the boundaries of delamination, i.e. in places where the material begins to
disintegrate. All these structural imperfections distort the defect image fixed by the
sensor element. Therefore, to identify the defect and estimate the true area of the
delamination, it is necessary to process its noisy image.</p>
      <p>The promise of using neural networks for these purposes rests on the
possibility of parallelizing the processing of information and on their ability to learn, i.e. to make
generalizations. This makes it possible to identify images that have not been
encountered in the learning process. In the defect image classification problem
considered here, the processing of information by a neural network is, in a number of key
respects, close to the methods of non-parametric statistical learning.</p>
      <p>
        A recurrent neural network differs from a classical feed-forward network by the
presence of feedback loops from hidden or output neurons. Their presence has a direct
impact on the ability of such networks to learn [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The Elman network contains
recurrent links from hidden neurons to a layer of context units consisting of unit-delay
elements. These elements save the outputs of the hidden neurons for one time step and
then feed them back to the inputs of the neurons. This leads to the non-linear dynamic
behavior of the network and makes it possible to implement a learning process that develops over
time. The integration of a neural network with other information blocks
into a single intelligent decision-making system is promising [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Problem statement</title>
      <p>The article aims to create an intelligent system with a core in the form of an Elman
neural network to identify and estimate the area of distorted defect images moving
within the control zone. In the process of solving this problem, we want to study the
ability of recurrent neural networks to identify dynamic noisy images of defects of
various sizes as the measuring transducer scans the product surface.</p>
    </sec>
    <sec id="sec-3">
      <title>Literature review</title>
      <p>Over the years, neural network image processing methods have been successfully
used in the field of non-destructive testing, as well as in a number of other
engineering fields.</p>
      <p>
        In the article [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the processes of training and testing a back-propagation neural
network to identify distorted defect signals have been analyzed. Using numerical
simulation, including a network clustering mechanism, the authors found that
clustering increases the probability of signal recognition.
      </p>
      <p>
        The article [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presents a deep-learning mechanism for distinguishing
computer-generated images from photographic images. The proposed method incorporates a
convolutional layer capable of automatically learning the correlation between
neighboring pixels. The layer is designed to suppress the image's content and robustly learn the
sensor pattern noise features as well as the statistical properties of images.
      </p>
      <p>
        The authors of [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] analyze new trends in the digitization of complex engineering
drawings, including symbol detection and symbol classification. The article [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
presents a symbol recognition method applying an interactive learning strategy
based on the recurrent training of a Hopfield neural network. This method was
designed to find the most common symbols in a drawing, characterized by
having a prototype pattern. The method recursively learns the features of the samples
to improve the detection and classification accuracy. However, the method can only identify
symbols that are formed by a "prototype pattern", which means that irregular shapes
cannot be addressed within this framework.
      </p>
      <p>
        The Elman network [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ] has been successfully applied in many fields, including
prediction, modeling, pattern recognition, and control. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] an Elman network is trained
to predict the future values of the residual time series. The network is then used to
capture the relationship between the predicted values of the original time series and
the residual time series.
      </p>
      <p>
        The standard back-propagation algorithm used in the Elman neural network is called
Elman's back-propagation algorithm. To increase the convergence speed, a new
learning rate scheme was proposed [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        The multilayer perceptron network (MLP) and the Elman neural network were compared
in four different time series prediction tasks [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The time series include an
electric network load series, which has a low-frequency trend, a series of fluctuations in a far-infrared
laser, a numerically generated series, and a sunspot behavior series. The time
series are assumed to be stationary. MATLAB Neural Network Toolbox training functions
were used for training the MLP and Elman networks. The results show that the efficiency of
the learning algorithm is a more important factor than the network model used. The Elman
network models the electric network load series better than the MLP. This network
predicts the slope of the trend of the testing data more accurately than the MLP
network. In the other prediction tasks, it performs similarly to the MLP network.
      </p>
      <p>
        One of the major problems facing researchers in recurrent networks is the
selection of the number of hidden neurons in the neural network layers [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <sec id="sec-3-1">
        <title>Jinchuan and Xinzhe [14] investigated a formula: N h  (Nin  N p )  L , where</title>
        <p>L is the number of hidden layers, Nin is the number of input neurons and N p is the
number of input images.</p>
        <p>
          Kallan R. [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] proposed to calculate the number of neurons in the hidden layer
according to the formula Nh = 2iK + 1, where K is the number of network
inputs and i = 1, 2, 3, ... .
        </p>
        <p>
          Generalization performance varies over time as the network adapts during training.
The necessary number of hidden neurons in the hidden layer of a
multilayer perceptron was found by Trenn [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. The key points are simplicity,
scalability, and adaptability. The number of hidden neurons is Nh = n + (n0 − 1/2), where n
is the number of inputs and n0 is the number of outputs.
        </p>
        <p>
          Shibata and Ikeda [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] investigated the effect of learning stability and hidden
neurons in neural networks. The simulation results show that the hidden-output
connection weights become small as the number of hidden neurons Nh becomes large. The
formula for the number of hidden nodes is Nh = √(Ni · N0), where Ni is the number of input neurons
and N0 is the number of output neurons. A tradeoff arises: if the number of
hidden neurons becomes too large, the output of the neurons becomes unstable, and if the
number of hidden neurons becomes too small, the output of the hidden neurons becomes
unstable again.</p>
        <p>
          Hunter et al. [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] developed a method for the selection of proper neural network architectures. Three
networks are considered: the MLP, the bridged MLP, and the fully connected cascade network.
The implemented formulas are as follows: Nh = N + 1 for the MLP network and
Nh = 2N + 1 for the bridged MLP network, where N is the number of input neurons.
        </p>
        <p>
          The number of hidden-layer nodes was determined by using the empirical formula
[
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] Nh = 2Ni + 1, where Nh is the maximum number of nodes in the hidden layer
and Ni is the number of inputs.
        </p>
        <p>
          In accordance with [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], the optimal number of hidden nodes is found by a trial-and-error
approach.
        </p>
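        <p>For the dimensions used later in this article (100 network inputs and outputs,
8967 training images, one hidden layer), the surveyed heuristics give rather different
hidden-layer sizes. The following sketch (in Python; the authors themselves worked in MATLAB)
evaluates the formulas as reconstructed above; since the operators in several of the
scanned formulas are partly illegible, the expressions should be treated as indicative only.</p>
        <preformat>
import math

N_in, N_out, N_p, L = 100, 100, 8967, 1       # inputs, outputs, training images, hidden layers

print((N_in + math.sqrt(N_p)) / L)            # [14] Jinchuan and Xinzhe: ~195
print([2 * i * N_in + 1 for i in (1, 2, 3)])  # [15] Kallan: 201, 401, 601
print(N_in + (N_out - 1 / 2))                 # [16] Trenn: 199.5
print(math.sqrt(N_in * N_out))                # [17] Shibata and Ikeda: 100
print(2 * N_in + 1)                           # [19] upper bound: 201
        </preformat>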
    </sec>
    <sec id="sec-4">
      <title>Neural network design method</title>
      <p>For preliminary processing of the information received from the sensor element of the
ultrasonic transducer, we used the Elman network, which is an example of a feedback
network. The feedback has a profound impact on the learning capacity, and the
challenge in the design of a neural network is fixing the number of hidden neurons with minimal
error. The accuracy of training is determined by the following parameters: the neural network
architecture, the number of hidden neurons in the hidden layers, the activation function, the number of
inputs, and the weight-updating procedure.</p>
      <p>In our case, the control zone of the ultrasonic transducer is displayed in the form of
a square matrix of 10 × 10 elements. The Elman network, which performs the image
processing, has 100 inputs and 100 outputs. In the process of training the network,
we used 8967 images. The results of the calculations of the number of neurons in the
hidden layer are shown in Table 1. The output of each hidden neuron is formed as

f [ Σk ωik yk + Σj ωij xj ],   (1)

where yk and xj represent the output of the context state neuron and the input neuron,
respectively; ωik and ωij represent their corresponding weights; f [·] is the sigmoid
transfer function.</p>
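        <p>As an illustration, the following is a minimal sketch of one forward step of
such a network in Python/NumPy. The weight names and the initialization range are
illustrative assumptions; the authors' own MATLAB program is not published.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 100, 200, 100

# Uniformly distributed random initialization, as described in the text;
# thresholds (biases) are omitted for brevity.
W_in  = rng.uniform(-0.5, 0.5, (n_h, n_in))    # input -> hidden weights (omega_ij)
W_ctx = rng.uniform(-0.5, 0.5, (n_h, n_h))     # context -> hidden weights (omega_ik)
W_out = rng.uniform(-0.5, 0.5, (n_out, n_h))   # hidden -> output weights

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def elman_step(x, context):
    """One time step: the hidden layer mixes the input with the saved context."""
    h = sigmoid(W_in @ x + W_ctx @ context)    # equation (1)
    y = sigmoid(W_out @ h)
    return y, h                                # h is stored as the next context

context = np.zeros(n_h)
x = rng.uniform(0.0, 1.0, n_in)                # one flattened 10 x 10 image
y, context = elman_step(x, context)
print(y.shape)                                 # (100,)
        </preformat>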
      <p>Network training was conducted in sequential mode. This mode requires less
internal memory for each synaptic link and is preferable for real-time processes. The
initialization procedure was carried out without preliminary use of a priori
information. Herewith, the initial network parameters were set using a generator of uniformly
distributed random numbers.</p>
      <p>
        The number of examples for training the network should not be too large, as this
can lead to overtraining of the network. In this case, the learning process can end up only
memorizing the learning data. An overtrained network loses the ability to generalize. In
accordance with Widrow's rule of thumb, the size of the training set necessary for a
good generalization should be of the order of N = O(W / ε0), where W is the total
number of free parameters, i.e. synaptic weights and thresholds, and ε0 is the permissible
classification error. Let nin be the number of input nodes of the network, and nh be
the number of neurons in the hidden layer. If the product nin · nh corresponds to the
total number of free parameters W of the network [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], then N must be of the order nin · nh / ε0.
If we take nin = 100, nh = 200 and ε0 = 0.02, then N = 10⁶. We used 8967 clean
and noisy images for training, but each of them is repeated 100 times within the same
epoch. Thus, the actually used volume of the training set is 896700, which is close
enough to N.
      </p>
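      <p>A quick check of these numbers, as a sketch in Python:</p>
        <preformat>
n_in, n_h, eps0 = 100, 200, 0.02
W = n_in * n_h            # total free parameters: 20000, i.e. W ~ 2*10^4
N = W / eps0              # Widrow's rule of thumb: 1000000.0, i.e. ~10^6
actual = 8967 * 100       # 8967 images, each repeated 100 times per epoch
print(W, N, actual)       # 20000 1000000.0 896700
        </preformat>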
      <p>The hidden neurons of an Elman network trained by the back-propagation method
play the role of feature detectors. Therefore, it is advisable to use this network as a
replicator, or identity map. Accordingly, the input and output layers of our
network have the same size, nin = nout = 100 neurons. The network is fully connected.</p>
      <p>A sigmoidal logistic function is used as the transfer function of the neurons of the
hidden layer. One of the important advantages of this function for us is the sufficiently high
speed of calculating its derivative, on which the network training time largely depends.
In the process of network learning, we used the conjugate-gradient method. This
was due to the need to improve on the rather low rate of convergence of the
steepest-descent method and to avoid the computational difficulties caused by operations with the
Hessian matrix in Newton's method. The computational complexity of the
quasi-Newton methods is estimated as O(W²). In contrast, the computational complexity of
the conjugate-gradient method is estimated as O(W). Thus, in our case, when
W ≈ 2·10⁴, the conjugate-gradient method is preferable.</p>
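      <p>The following sketch shows the shape of a Polak-Ribiere conjugate-gradient step:
its memory and per-iteration cost are O(W) in the number of weights, versus O(W²) for
quasi-Newton updates. The quadratic objective is a stand-in for the network's training
error, and the fixed step length replaces a proper line search; this is an
assumption-laden illustration, not the authors' implementation.</p>
        <preformat>
import numpy as np

def cg_minimize(grad, w, steps=200, lr=0.05):
    """grad(w) -> gradient of the loss at w; returns the updated weight vector."""
    g = grad(w)
    d = -g                                   # first direction: steepest descent
    for _ in range(steps):
        w = w + lr * d                       # a line search would replace lr
        g_new = grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere coefficient
        d = -g_new + beta * d                # conjugate direction, O(W) memory
        g = g_new
    return w

# Toy quadratic loss 0.5 * w.T A w as a placeholder for the training error.
A = np.diag(np.linspace(1.0, 10.0, 20))
w = cg_minimize(lambda w: A @ w, np.ones(20))
print(np.linalg.norm(w))                     # far below the initial norm sqrt(20)
        </preformat>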
    </sec>
    <sec id="sec-5">
      <title>Flaw detection process simulation</title>
      <p>The image of the defect (delamination) has the form of a continuous black spot, which
is formed by n matrix cells with a color level L = 5 adjacent to each other. This
spot is surrounded by a layer of gray cells with L = 4, the next layer of cells
corresponds to L = 3, and so on down to L = 1 (white color). The white color characterizes the areas
of the solid defect-free material that are completely transparent for the ultrasonic
signal.</p>
      <p>The defect image (pattern) No.1 is a black square with the size of 2 × 2 cells having
L = 5. The spot area of the defect is S1 = 4. This is the target image for the first class
of images. This image is moved over a matrix of 10 × 10 cells line by line.
The checkpoint of this pattern, like of all subsequent ones, is its lower-left cell. In the
process of moving, the coordinates of this cell for the first row j = 1 change from i = 1 to
i = 9; then the same is repeated for j = 2, and so on up to j = 9. Herewith the black
spot is not distorted at the edges of the matrix. The number of images used for
network training in this case is N1c = 81. Here, the index c means that the displayed
images are clean, i.e. do not contain noise.</p>
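        <p>A short sketch of this scan in Python reproduces the pattern counts quoted in
this section; the rendering convention (L = 1 outside the spot) follows the text:</p>
        <preformat>
import numpy as np

def scan_positions(size, matrix=10):
    """All checkpoint coordinates that keep the spot fully inside the matrix."""
    return [(i, j) for j in range(1, matrix - size + 2)
                   for i in range(1, matrix - size + 2)]

def render(i, j, size, matrix=10):
    """Clean image: L = 5 inside the size x size spot, L = 1 (white) elsewhere."""
    img = np.ones((matrix, matrix))
    img[j - 1:j - 1 + size, i - 1:i - 1 + size] = 5
    return img

print(len(scan_positions(2)))   # 81 = N1c (pattern No.1)
print(len(scan_positions(3)))   # 64 = N4c (pattern No.4)
print(len(scan_positions(4)))   # 49 = N10c (pattern No.10)
print(len(scan_positions(5)))   # 36 = N16c (pattern No.16)
        </preformat>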
      <p>In patterns No.2 and No.3, the left and the right upper cells of the black square (No.1),
respectively, are replaced by gray cells with L = 4. The number of training images does not change:
N2c = N3c = 81.</p>
      <p>Image No.4 is a black square with the size of 3 × 3 cells having L = 5. This is the
target image for the second class of images. During the scanning process, the checkpoint
of this square was moved along the coordinates i = 1...8, j = 1...8. The number of
patterns used for network training in this case is N4c = 64. Patterns No.5, 6 and 7 are
formed similarly to No.2 and 3. For patterns No.8 and 9, two cells on the left and two
on the right in the upper layer of the black square are replaced by gray ones.</p>
      <p>Image No.10 is a black square with the size of 4 × 4 cells having L = 5. This is the
target image for the third class of images. The scanning process is similar to that
described above. The number of training images is N10c = 49. Patterns No.11, 12 and
13 differ from the target pattern No.10 by the alternate replacement of each of the
black corner cells, with the exception of the checkpoint, by gray ones. Patterns No.14
and 15 are formed similarly to patterns No.8 and 9, only for the target image No.10.</p>
      <p>Image No.16 is a black square of 5 × 5 cells with L = 5. This is the target image
for the fourth class of images. The scan coordinates for the checkpoint are i = 1...6,
j = 1...6. The number of training images is N16c = 36. Patterns No.17, 18 and 19 are
formed similarly to patterns No.11, 12 and 13. For images No.20, 21, 22, 23, 24 and
25, a pair of cells of the outer black layer, shifting along the periphery of the spot, is
replaced by gray ones.</p>
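        <p>Taken together, the 25 patterns give 1281 distinct positions over the matrix. As a
consistency check, 1281 × 7 = 8967, the training-set size quoted earlier; the factor of 7
(one clean variant plus, apparently, six noisy ones) is our reading and is not stated
explicitly in the text.</p>
        <preformat>
# Patterns per class (spot size -> count): 3 of 2x2, 6 of 3x3, 6 of 4x4, 10 of 5x5.
counts = {2: 3, 3: 6, 4: 6, 5: 10}
positions = {s: (10 - s + 1) ** 2 for s in counts}        # 81, 64, 49, 36
total = sum(counts[s] * positions[s] for s in counts)
print(total, total * 7)                                   # 1281 8967
        </preformat>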
      <p>The distortion coefficient of the target image is determined by the ratio of the total
area of the cells with the changed color to the area of the corresponding black spot
(defect image).</p>
      <p>The defect images described above were then noised. The absolute noise level P was set
by us as follows:

P = Σ_{i=1}^{M} |bin − bic|,   (2)

where bic are the elements of the clean image matrix, bin are the elements of the noisy
image matrix, and M is the number of matrix cells.</p>
      <p>Noise is additively mixed with a clean image. We used the levels P = 10, 30, 50, 70, 90
(2). Considering that the full image contains cells with a gray color gradation
from L = 1 (white color) to L = 5 (black color), the question arises about the
distribution of the distortion magnitude ΔL = 1, 2, 3, 4 over the number of distorted image
elements. We assume that this distribution is Gaussian with a zero mean value and a
standard deviation equal to 1. Then the probability density is

f(ΔL) = (1 / √(2π)) · e^{−(ΔL)² / 2},   (3)

and the total probability of distortion is

PΣ = f(ΔL = 1) + f(ΔL = 2) + f(ΔL = 3) + f(ΔL = 4).   (4)

The corresponding distribution of ΔL over the number of distorted image elements
q is shown in Table 2 (3), (4).</p>
      <p>The relative value of the noise intensity ε, %, we calculated according to Table 2 as

ε = (k / Lmax) · √( (1/100) Σ_{i=1}^{100} (ΔLi)² ),  Lmax = 5.   (5)

The value of the normalizing factor k was determined based on the following. If in all
100 cells of the matrix the black color with L = 5 (presence of a defect) is changed to
white with L = 1 (defect-free material), then ΔLi = 4. Taking such a change of ΔL in the
entire matrix as a 100% error corresponding to ε = 1, in accordance with (5) we get
k = 5/4 = 1.25. In the article, we used the following restrictions: if the total value of the
image intensity and the noise in a given cell of the matrix exceeded the maximum level
Lmax = 5, then L = 5 was kept; if the summary value was below Lmin = 1, then L = 1 was kept.</p>
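        <p>A sketch of this noise model in Python/NumPy follows. The rounding of the
distorted levels to integer gradations is our assumption; equations (2) and (5) are
implemented as reconstructed above.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(1)
L_MIN, L_MAX, K = 1, 5, 1.25

def add_noise(img, sigma=1.0):
    """Additive zero-mean Gaussian distortion of the color levels (eq. (3))."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(np.rint(noisy), L_MIN, L_MAX)      # restrictions from the text

def absolute_noise(clean, noisy):
    """Equation (2): P as the summed per-cell level change."""
    return np.abs(noisy - clean).sum()

def relative_intensity(clean, noisy):
    """Equation (5): eps = (k / L_max) * sqrt(mean of (Delta L_i)^2)."""
    dL = noisy - clean
    return K / L_MAX * np.sqrt(np.mean(dL ** 2))

# Sanity check of the normalization: turning all 100 cells from black (L = 5)
# to white (L = 1) must give eps = 1, i.e. a 100% error.
black, white = np.full((10, 10), 5.0), np.ones((10, 10))
print(relative_intensity(black, white))               # 1.0
noisy = add_noise(white)
print(absolute_noise(white, noisy))
        </preformat>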
    </sec>
    <sec id="sec-6">
      <title>Defect image capture</title>
      <p>The outputs of the Elman neural network are connected to the inputs of a logic
matrix, connected in turn to a combinational circuit, which has four outputs according to the
number of classes of identifiable images. The functional scheme of the intelligent system
for defect identification is shown in Figure 1.
To estimate the area of a defect, we propose the following methodology: the matrix
(10 × 10 elements) represents the control zone of an ultrasonic transducer operating in
shadow mode and "sounding", for example, the product wall. The full image of this
zone is captured by the sensor unit of the transducer.</p>
      <p>During physical scanning of the product surface by the transducer, a noisy image
of a defect in the form of a fuzzy dark spot may appear in the lateral part of the
matrix. By a local displacement of the transducer, this spot must be moved to the central
part of the matrix. The neural network makes it possible to substantially clear the defect
image of noise during its movement to the center of the matrix and to make
this image more distinct.</p>
      <p>In the center of the matrix, defect image capture zones are positioned, as
shown in Figure 2. These zones correspond to images No.1 (2 × 2 elements), No.4
(3 × 3 elements), No.10 (4 × 4 elements), and No.16 (5 × 5 elements), which are the
targets for the respective classes. The logic matrix has 100 inputs, corresponding to
the number of outputs of the neural network that performs the function of a replicator.
These inputs are numbered in accordance with the pair jk, where j denotes the row
number and k denotes the column number of the matrix, as shown in Figure 2.
The Boolean function FI, implemented at the corresponding output of the logic matrix,
describes images of class I, to which images No.1, 2, and 3 belong:</p>
      <p>FI = y44 · y45 · (y54 + y55).   (6)</p>
      <p>Herewith FI = 1 if the elements y44 and y45 have the level L = 5 (black color), which
corresponds to a logic-1 level, and at least one of the elements y54 or y55 has the
same level (6).</p>
      <p>The Boolean function FII describes images of class II, to which images No.4, 5, 6,
7, 8, and 9 belong:</p>
      <p>FII = y44 · y45 · y54 · y55 · y56 · y65 · (y64 + y66 + y46).   (7)</p>
      <p>Herewith FII = 1 if all the elements involved in the conjunction have the level L = 5 and
at least one of the corner elements of the target image (No.4), i.e. one of the
disjunctive members in brackets, has the same level L = 5 (7).</p>
      <p>The Boolean function FIII describes images of class III, to which images No.10,
11, 12, 13, 14, and 15 belong:</p>
      <p>FIII = y44 · y45 · y46 · y54 · y55 · y56 · y57 · y64 · y65 · y66 · y67 · y75 · y76 · (y47 + y74 + y77).   (8)</p>
      <p>The disjunctive terms of expression (8) correspond to the corner elements of the target
image No.10.</p>
      <p>The Boolean function FIV describes images of class IV, to which images No.16 –
No.25 belong:</p>
      <p>FIV = y44 · y45 · y46 · y54 · y55 · y56 · y57 · y64 · y65 · y66 · y67 · y68 · y75 · y76 · y77 ·
· (y74 + y84 + y85 + y87 + y88 + y78 + y47 + y48 + y58).   (9)</p>
      <p>When forming the functions FI – FIV, we made the assumption that not all of the
corner elements of the target images need to be present in the defect image capture
zones. This is consistent with testing practice.</p>
      <p>The analysis of expressions (6)–(9) shows that the patterns described by them are
sequentially nested one into another, namely FI is included in FII, FII in FIII, and FIII
in FIV. Thus, when an image of a higher class, for example No.16, is fixed, a logic
1 will be present not only at the output FIV of the logic matrix but also at the outputs
FIII, FII, and FI. The combinational circuit shown in Figure 3 eliminates this
effect. The circuit uses inverters and four-input conjunctors and provides separate
fixation of each class of images. Herewith, the appearance of a logic 1 at one of
the outputs DI – DIV of the combinational circuit indicates the capture of a defect
image of the corresponding class.
The proposed system of defect area estimation allows monitoring both in the
manual scanning mode, with indication of the defect magnitude by sound or light
signals, and in an automated testing process. In the latter case, when scanning the
product along parallel paths with a given step, it is easy to register the area automatically.</p>
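        <p>A compact sketch of this logic in Python follows; y[j][k] stands for the logic
level of the network output at row j, column k (1 when the cell is black, L = 5), and
the decoder reproduces the inverter-and-conjunctor scheme of Figure 3. The groupings
follow equations (6)–(9) as reconstructed above:</p>
        <preformat>
def capture(y):
    """Logic-matrix functions (6)-(9) on a 1-based 10x10 grid of 0/1 levels."""
    A = lambda *cells: all(y[j][k] for j, k in cells)   # conjunction
    O = lambda *cells: any(y[j][k] for j, k in cells)   # disjunction
    FI   = A((4,4), (4,5)) and O((5,4), (5,5))                                     # (6)
    FII  = A((4,4), (4,5), (5,4), (5,5), (5,6), (6,5)) and O((6,4), (6,6), (4,6))  # (7)
    FIII = (A((4,4), (4,5), (4,6), (5,4), (5,5), (5,6), (5,7),
             (6,4), (6,5), (6,6), (6,7), (7,5), (7,6))
            and O((4,7), (7,4), (7,7)))                                            # (8)
    FIV  = (A((4,4), (4,5), (4,6), (5,4), (5,5), (5,6), (5,7), (6,4), (6,5),
             (6,6), (6,7), (6,8), (7,5), (7,6), (7,7))
            and O((7,4), (8,4), (8,5), (8,7), (8,8), (7,8), (4,7), (4,8), (5,8)))  # (9)
    return FI, FII, FIII, FIV

def decode(FI, FII, FIII, FIV):
    """Combinational circuit: only the highest captured class fires (DI-DIV)."""
    return (FI and not FII and not FIII and not FIV,
            FII and not FIII and not FIV,
            FIII and not FIV,
            FIV)

# A clean 3x3 defect (target image No.4) placed in the capture zone:
y = [[0] * 11 for _ in range(11)]           # index 0 unused; rows/columns 1..10
for j in range(4, 7):
    for k in range(4, 7):
        y[j][k] = 1
print(decode(*capture(y)))                  # (False, True, False, False)
        </preformat>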
    </sec>
    <sec id="sec-7">
      <title>Experiments and results</title>
      <p>To simulate the operation of the neural network, we developed a program functioning
in the MATLAB environment. Network training was carried out sequentially in three
stages, as sketched after this list.
1. First, a clean image No.1 located in the lower-left corner of the matrix is fed to the
network input. This image then moves sequentially along the rows of the matrix
within i = 1...9, j = 1...9. Here i and j are the horizontal and vertical coordinates
of the matrix, respectively. The specified scan coordinates describe the movement
of the lower-left cell of the image.
2. At the second stage, the clean image No.1 (ε = 0) was again fed to the network input
and moved along the matrix as described in paragraph 1. Then image No.1,
additively mixed with noise corresponding to ε = 10%, was fed to the network
input, and this noisy image was moved along the matrix. We repeated this process
sequentially for all the remaining noise levels, up to ε = 31%. It should be noted that
this scanning process was repeated 100 times for each value of the noise intensity.
3. After completing the second stage of training, in order to verify the absence of an
overtraining effect, we repeated paragraph 1.</p>
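        <p>The schedule can be written out as the following runnable skeleton; train_on is a
hypothetical stub standing in for one sequential-mode weight update of the authors'
unpublished MATLAB program, and the listed noise levels are the values quoted in the
text (Table 2 gives the full set):</p>
        <preformat>
NOISE = [0.10, 0.22, 0.27, 0.31]           # quoted eps values; Table 2 has all of them

def train_on(net, eps, pos):
    net["presentations"] += 1              # stub: count presentations only

def three_stage_schedule(net, positions, repeats=100):
    for pos in positions:                  # stage 1: clean pass
        train_on(net, 0.0, pos)
    for eps in [0.0] + NOISE:              # stage 2: clean, then each noise level
        for pos in positions:
            for _ in range(repeats if eps > 0 else 1):
                train_on(net, eps, pos)
    for pos in positions:                  # stage 3: clean pass (overtraining check)
        train_on(net, 0.0, pos)

net = {"presentations": 0}
three_stage_schedule(net, [(i, j) for i in range(1, 10) for j in range(1, 10)])
print(net["presentations"])                # 81 * (1 + 1 + 4 * 100 + 1) = 32643
        </preformat>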
      <p>We carried out the network training process described above with each pattern from
the training set. Table 3 shows the data on the image classes used for training.</p>
      <p>Figure 4 shows a graph of the dependence of the training time per pattern, α, on
the area of the black spot S, which reflects the area of the defect. Herewith, α
characterizes training on clean images, without noise. Training on
noisy images is carried out after the training on clean images, and here the training
time per pattern, β, is significantly smaller. So, for images of the first type,
β1 = 0.045. For images of the second, third and fourth types, we have β2 = 0.027,
β3 = 0.032, and β4 = 0.021.
Thus, we can state that the network training time at the second stage, assigned to one
noisy image, is ten times less than the corresponding time spent at the first stage when
training the network on a clean image.</p>
      <p>The phenomenon of network overtraining was not observed.</p>
      <p>Network testing was carried out by moving images from the set described above
(No.1 – No.25), with noise levels from Table 2, along the matrix. The testing results
showed the following.</p>
      <p>If testing was carried out after a full cycle of network training on patterns of one
class, using a noisy image with ε = 31% from the same class, then the test error
did not exceed 3%. This result is valid for all four classes.</p>
      <p>When combining images of classes I and II, i.e. with the expansion of the defect
spot area range from S = 3 to S = 9, the testing error on noisy patterns (ε = 31%)
increased to 8%. When combining all the images of classes I-IV in a single training set,
the maximum testing error increased to 12%. In both the first and the second case, the
parameter β increased significantly, its value becoming comparable with α. A
reduction of the noise level during testing significantly decreases this error. So, with a
reduction in ε from 31 to 27%, the test error decreases by 40-50%, and from 31 to 22%,
by 60-70%, and so on.</p>
      <p>The above results were obtained by using the Elman neural network as a replicator
with nin = nout = 100, nh = 200 and the number of hidden layers l = 1. An increase in the
number of neurons in the hidden layer to 400 reduced the testing error by 25% on
average; however, the network training time increased by 30%. A network with two
hidden layers, while reducing the test error by 20%, trained 4 times longer than the
network with one hidden layer. Training a network with three hidden layers took
14 times more time than the single-hidden-layer network we used.</p>
    </sec>
    <sec id="sec-8">
      <title>Conclusion</title>
      <p>We investigated the Elman neural network as an identity map (replicator) for the task
of identifying and estimating the area of a delamination in composite materials in the
process of ultrasonic testing. Twenty-five images were used as training patterns. They
were divided into four classes, according to the size of their defect area. Deterministic
distortions caused by the nature of the testing object were introduced into the images
of each class. Each image was moved step by step on a square matrix consisting of
100 cells, which represents the registration area of the ultrasound transducer. Images
were additively mixed with white Gaussian noise, the intensity of which varied from
10 to 31%. At each scan point, the noise was superimposed 100 times.</p>
      <p>The process of training the network began with presenting it with the clean
(noise-free) patterns. In the process of training the network, it was found that the
training time with clean patterns, assigned to one pattern fixed in a given position
inside the matrix, increases with the pattern area. Network training using
noisy patterns was carried out after training with clean patterns. In this case, the
training time, defined in the same way, turned out to be ten times shorter.</p>
      <p>Network testing has shown the following. If testing was carried out after a full
cycle of network training on patterns of one class, using a test pattern with a noise
level of 31%, then the test error did not exceed 3% for any class. When combining
images of classes I and II, i.e. with the expansion of the defect spot area range from S = 3
to S = 9, the testing error increased to 8%. When combining the images of all four
classes in a single training set, i.e. with the expansion of the defect spot area range from
S = 3 to S = 25, the maximum testing error increased to 12%. A reduction of the
noise level during testing significantly decreases this error. So, with a reduction in ε
from 31 to 27%, the test error decreases by 40-50%, and from 31 to 22%, by 60-70%,
and so on.</p>
      <p>These results were obtained by modeling the Elman neural network with the
number of input and output neurons nin = nout = 100 and with the number of neurons
in one hidden layer nh = 200. This network structure turned out to be optimal by the
criterion of testing error/training time.</p>
      <p>To estimate the delamination area, four nested defect spot capture zones were
implemented in the central part of the matrix. For this purpose, a cascade of a
logic matrix and a combinational circuit with four outputs, according to the number of
classes of identifiable images, is connected to the output of the neural network.</p>
      <p>The estimation of the defect area serves as the basis for determining the residual
strength of a given unit of a product and for making a decision on the possibility of its
further operation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Composite Materials Handbook-17: Polymer Matrix Composites, vol.
          <volume>1</volume>
          -
          <volume>3</volume>
          . SAE International (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Robinson</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>Signal processing by neural networks in the control of eye movement</article-title>
          .
          <source>Computational Neuroscience Symposium</source>
          , Indiana University - Purdue University at Indianapolis, pp.
          <fpage>73</fpage>
          -
          <lpage>78</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Haykin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>Neural Networks: A Comprehensive Foundation</source>
          , second edition. Prentice Hall, New Jersey (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Khandetskyi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Antonyuk</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Signal processing in defect detection using backpropagation neural networks</article-title>
          .
          <source>NDT&amp;E International</source>
          ,
          <volume>35</volume>
          ,
          <fpage>483</fpage>
          -
          <lpage>488</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Chawla</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panwar</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anand</surname>
            ,
            <given-names>G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhatia</surname>
            ,
            <given-names>M.P.S.</given-names>
          </string-name>
          :
          <article-title>Classification of computer generated images from photographic images using convolutional neural networks</article-title>
          .
          <source>International Journal of Computer and Information Engineering</source>
          , vol.
          <volume>12</volume>
          , No.
          <issue>10</issue>
          , pp.
          <fpage>822</fpage>
          -
          <lpage>827</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Moreno-Garcia</surname>
            ,
            <given-names>C.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elyan</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jayne</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>New trends on digitalization of complex engineering drawings</article-title>
          .
          <source>Neural Computing and Applications</source>
          , June, pp.
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ablameyko</surname>
            ,
            <given-names>S.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uchida</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Recognition of engineering drawing entities: review of approaches</article-title>
          .
          <source>International Journal of Image and Graphics</source>
          <volume>07</volume>
          (
          <issue>04</issue>
          ):
          <fpage>709</fpage>
          -
          <lpage>733</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Elman</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          :
          <article-title>Finding structure in time</article-title>
          .
          <source>Cognitive Science</source>
          , vol.
          <volume>14</volume>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>211</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Elman</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bates</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karmiloff-Smith</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parisi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plunkett</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Rethinking Innateness: A Connectionist Perspective on Development</article-title>
          . Cambridge, MA: MIT Press (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ardalani-Farsa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zolfaghari</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks</article-title>
          .
          <source>Neurocomputing</source>
          , vol.
          <volume>73</volume>
          , Iss.
          <issue>13-15</issue>
          , Aug., pp.
          <fpage>2540</fpage>
          -
          <lpage>2553</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeng</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>A modified Elman neural network with a new learning rate scheme</article-title>
          .
          <source>Neurocomputing</source>
          , vol.
          <volume>286</volume>
          , Iss.
          <issue>19</issue>
          , April, pp.
          <fpage>11</fpage>
          -
          <lpage>18</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Koskela</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehtokangas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saarinen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaski</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Time series prediction with multilayer perceptron, FIR and Elman neural networks</article-title>
          . Tampere University of Technology, Electronics Laboratory, FIN-
          <volume>33101</volume>
          , Tampere, Finland (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Sheela</surname>
            ,
            <given-names>K.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deepa</surname>
            ,
            <given-names>S.N.</given-names>
          </string-name>
          :
          <article-title>Review of methods to fix number of hidden neurons in neural networks</article-title>
          .
          <source>Mathematical Problems in Engineering</source>
          , vol.
          <volume>2013</volume>
          , article ID 425740, pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Jinchuan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xinzhe</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Empirical analysis of optimal hidden neurons in neural network modeling for stock prediction</article-title>
          .
          <source>In Proceedings of the Pacific-Asia Workshop on Computational Intelligence and Industrial Application</source>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>828</fpage>
          -
          <lpage>832</lpage>
          , Dec. (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kallan</surname>
          </string-name>
          , R.:
          <source>Main Conceptions of Neural Networks. Williams Publ</source>
          ., New York (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Trenn</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Multilayer perceptrons: approximation order and necessary number of hidden units</article-title>
          .
          <source>IEEE Transactions on Neural Networks</source>
          , vol.
          <volume>19</volume>
          , No 5, pp.
          <fpage>836</fpage>
          -
          <lpage>844</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Shibata</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ikeda</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Effect of number of hidden neurons on learning in large scale layered neural networks</article-title>
          .
          <source>In Proceedings of the ICCAS-SICE International Joint Conference (ICCAS-SICE'09)</source>
          , pp.
          <fpage>5008</fpage>
          -
          <lpage>5013</lpage>
          , Aug. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pukish</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolbusz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilamowski</surname>
            ,
            <given-names>B.M.:</given-names>
          </string-name>
          <article-title>Selection of proper neural network sizes and architectures: a comparative study</article-title>
          .
          <source>IEEE Transactions on Industrial Informatics</source>
          , vol.
          <volume>8</volume>
          , No 2, pp.
          <fpage>228</fpage>
          -
          <lpage>240</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Bowden</surname>
            ,
            <given-names>G.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maier</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dandy</surname>
            ,
            <given-names>G.C.</given-names>
          </string-name>
          :
          <article-title>Input determination for neural network models in water resources applications, part 2</article-title>
          .
          <article-title>Case study: forecasting salinity in a river</article-title>
          .
          <source>J. Hydrol</source>
          <volume>301</volume>
          (
          <issue>1-4</issue>
          ):
          <fpage>93</fpage>
          -
          <lpage>107</lpage>
          (
          <year>2005</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Devi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rani</surname>
            ,
            <given-names>B.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prakash</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Role of hidden neurons in an Elman recurrent neural network in classification of cavitation signals</article-title>
          .
          <source>International Journal of Computer Applications</source>
          , vol.
          <volume>37</volume>
          , no 7, pp.
          <fpage>9</fpage>
          -
          <lpage>13</lpage>
          , (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>