<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.18287/1613</article-id>
      <title-group>
        <article-title>INTELLIGENT PSYCHO-DIAGNOSTIC FORECASTING SYSTEM</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A.N. Danilenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>Samara</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <volume>1638</volume>
      <fpage>527</fpage>
      <lpage>535</lpage>
      <abstract>
        <p>In this article we consider the problem of psychotherapy efficiency. We offer a method for assessing and forecasting the efficiency of therapeutic influence. The principles of creating an intelligent diagnostic forecasting system are described.</p>
      </abstract>
      <kwd-group>
        <kwd>decision support systems</kwd>
        <kwd>neural networks</kwd>
        <kwd>fuzzy logic</kwd>
        <kwd>efficiency of psychotherapy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The main objective of any psychotherapeutic treatment is to help patients introduce necessary changes into their lives. The success, or efficiency, of psychotherapy is assessed by how lasting and, in a broader sense, salutary these changes are for the patient. Nowadays psychotherapeutic measures providing a long-lasting positive effect are considered the most suitable. In other words, the overall performance of a psychotherapist is evaluated by a result which cannot be subjected to reliable forecasting.</p>
      <p>The problem of forecasting psychotherapy efficiency at the initial stage of treatment has not been solved, and the choice of a method adequate to the patient's condition and personal circumstances is carried out by psychotherapists intuitively or on the basis of their professional experience. Therefore, this aspect of psychotherapeutic treatment has a subjective character, while the majority of psychotherapeutic models do not even consider measuring the dynamics of the patient's condition and assessing treatment efficiency.</p>
    </sec>
    <sec id="sec-1-1">
      <title>Method of predictive assessment of psychotherapeutic influence efficiency</title>
      <p>We believe that predictive assessment of the efficiency of psychotherapeutic influence can be based on a series of measurements of indicators of the patient's mental and emotional condition. If we exclude methods that diagnose a functional state by recording psycho-physiological indicators, the most widespread forecasting instruments that remain are questionnaires aimed at self-assessment of the examinee's mental state.</p>
      <p>One of the diagnostic methods meeting the requirement of serial measurement of indicators is the color test by M. Lüscher. We have chosen this method because it is not bound to lexical or graphic questions and terms, which cause the so-called "effect of learning" in repeated tests and seriously distort the results of testing. Long-term application of Lüscher's test (the essence of which is detecting the examinee's choice of preferred and non-preferred colors out of the eight colors shown) demonstrates that color preferences depend on a set of basic personal characteristics as well as on the actual state caused by a particular situation.</p>
      <p>The offered method of assessment and forecast of the examinee's condition is based on calculating the described criteria. Since the testing procedure is brief and simple and its content is of an unconscious character, it can be repeated several times at short time intervals without loss of result reliability. This provides the opportunity of forecasting the dynamics of the patient's condition by means of trend-building methods.</p>
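      <p>As a minimal sketch of such a trend-building step (the paper does not specify which trend method is used), a least-squares linear trend over a series of repeated test scores can be extrapolated a few steps ahead; the score values below are hypothetical:</p>

```python
# Hedged sketch: least-squares linear trend over repeated test scores
# (hypothetical values; the paper does not publish its data).
def linear_trend_forecast(series, steps):
    """Fit y = a*t + b by least squares and extrapolate `steps` ahead."""
    n = len(series)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(series) / n
    denom = sum((t - t_mean) ** 2 for t in ts)
    a = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, series)) / denom
    b = y_mean - a * t_mean
    # forecast for time points n, n+1, ..., n+steps-1
    return [a * (n + k) + b for k in range(steps)]

scores = [4.0, 4.5, 5.1, 5.4, 6.0]   # one emotional-state indicator per session
out = linear_trend_forecast(scores, 3)
print(out)
```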
    </sec>
    <sec id="sec-1-2">
      <title>Intelligent system for forecasting emotional condition dynamics</title>
      <p>In order to forecast the change in the examinee's emotional state we apply the Wang-Mendel fuzzy neural production network. Let us examine the forecasting task in more detail in the example below.</p>
      <p>Throughout the psychotherapeutic course the patient takes tests at a certain frequency corresponding to the frequency of the psychotherapeutic sessions (usually not more often than once a week). As a result, a data file with indicators of this patient's emotional condition is collected. Each test yields the values of seven indicators of the emotional state. These indicators form a complex assessment of the emotional state. To preserve as much information about the examinee as possible, we have decided to consider these indicators independently, without a regression generalization. Accordingly, each indicator is forecast by its own neural network. The forecast of one of the indicators is described further in this paper.</p>
      <p>For the actual forecast to be successful it is necessary to know at least as many recent test results as there are inputs of the neural network. The values of the latest tests are used for the forecast. When a forecast extends several steps ahead, the predicted values are used in place of the missing measurements; the process of forecasting thus becomes iterative.</p>
      <p>To implement the forecasting function we have applied the Wang-Mendel fuzzy neural production network. It implements a fuzzy production model based on rules of the following type:</p>
      <p>IF x_1 is A_{i1} AND ... AND x_j is A_{ij} AND ... AND x_m is A_{im}, THEN y is B_i, i = 1, ..., n.</p>
      <p>Moreover, the fuzzy inference algorithm carried out by this fuzzy neural production network is based on the following provisions:
─ The membership functions of all fuzzy sets are represented by Gaussian functions;
─ Fuzzy implication: the product operation;
─ T-norm: the product operation;
─ Accumulation of the activated rule conclusions is not carried out;
─ The defuzzification method: the center average.</p>
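      <p>The iterative scheme, in which each predicted value slides into the input window in place of a missing future measurement, can be sketched as follows; predict_one here is a hypothetical stand-in for the trained network:</p>

```python
# Sketch of the iterative (recursive) forecasting scheme: a window of the
# latest m test results feeds the predictor, and each predicted value
# replaces a missing future measurement.
def iterative_forecast(history, predict_one, m, steps):
    window = list(history[-m:])      # the m most recent test results
    out = []
    for _ in range(steps):
        y = predict_one(window)      # one-step-ahead prediction
        out.append(y)
        window = window[1:] + [y]    # slide: the predicted value fills the gap
    return out

# toy stand-in predictor: the average of the current window
demo = iterative_forecast([3.0, 4.0, 5.0], lambda w: sum(w) / len(w), m=3, steps=2)
print(demo)
```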
      <p>Thus, the fuzzy production model and the fuzzy inference mechanism of this fuzzy neural production network can be described by the following expression:
\mu_{B'_i}(y) = \mu_{B_i}(y) \prod_{j=1}^{m} \mu_{A_{ij}}(x'_j), \quad i = 1, \ldots, n \quad (1)
Since accumulation of rule conclusions is not carried out and the defuzzification method is the center average, the defuzzified value of the output variable is determined by the formula:
y' = \frac{\sum_{i=1}^{n} \bar{y}_i \prod_{j=1}^{m} \mu_{A_{ij}}(x'_j)}{\sum_{i=1}^{n} \prod_{j=1}^{m} \mu_{A_{ij}}(x'_j)}, \quad \bar{y}_i = \arg\max_y \, \mu_{B_i}(y) \quad (2)</p>
      <p>In our case the membership functions of all fuzzy sets are represented by Gaussian functions. Therefore, expression (2) takes the following form:
y' = \frac{\sum_{i=1}^{n} \left[ \arg\max_y \exp\left( -\left( \frac{y - c_i}{d_i} \right)^2 \right) \right] \prod_{j=1}^{m} \exp\left( -\left( \frac{x'_j - a_{ij}}{b_{ij}} \right)^2 \right)}{\sum_{i=1}^{n} \prod_{j=1}^{m} \exp\left( -\left( \frac{x'_j - a_{ij}}{b_{ij}} \right)^2 \right)} \quad (3)
where c_i and d_i are, respectively, the centers and the widths of the Gaussian functions representing the membership functions of the fuzzy sets B_i of the rule conclusions, and a_{ij} and b_{ij} are, respectively, the centers and the widths of the Gaussian functions representing the membership functions of the fuzzy sets A_{ij} of the rule premises. Since the argument maximizing each Gaussian in the numerator is its center c_i, we finally obtain:
y' = \frac{\sum_{i=1}^{n} c_i \prod_{j=1}^{m} \exp\left( -\left( \frac{x'_j - a_{ij}}{b_{ij}} \right)^2 \right)}{\sum_{i=1}^{n} \prod_{j=1}^{m} \exp\left( -\left( \frac{x'_j - a_{ij}}{b_{ij}} \right)^2 \right)} \quad (4)</p>
      <p>Expression (4) fully describes the fuzzy inference procedure following from the above provisions. Figure 1 represents the multi-layered structure of the fuzzy network whose layer elements realize the corresponding components of this expression.</p>
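      <p>A minimal sketch of the center-average inference of expression (4), assuming Gaussian premise memberships with centers a[i][j] and widths b[i][j] and conclusion centers c[i]; all parameter values below are illustrative, not taken from the paper:</p>

```python
import math

# Center-average fuzzy inference: Gaussian premise memberships,
# product t-norm, no accumulation of rule conclusions.
def wang_mendel_output(x, a, b, c):
    num, den = 0.0, 0.0
    for ai, bi, ci in zip(a, b, c):
        w = 1.0
        for xj, aij, bij in zip(x, ai, bi):
            w *= math.exp(-((xj - aij) / bij) ** 2)   # mu_Aij(x'_j)
        num += ci * w        # rule conclusion center weighted by firing strength
        den += w
    return num / den

# two rules, two inputs (illustrative parameters)
a = [[0.0, 0.0], [1.0, 1.0]]
b = [[1.0, 1.0], [1.0, 1.0]]
c = [0.0, 1.0]
print(wang_mendel_output([0.0, 0.0], a, b, c))
```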
      <p>The structure of this fuzzy network consists of four layers.</p>
      <p>Layer 1 carries out the fuzzification of the input variables x'_j (j = 1, ..., m). Elements of this layer calculate the values of the membership functions \mu_{A_{ij}}(x'_j) given by Gaussian functions with parameters a_{ij} and b_{ij}.</p>
      <p>Layer 2, in which the number of elements is equal to the number of rules in the base, carries out the aggregation of the validity degrees of the premises of the corresponding rules.</p>
      <p>In layer 3 the first element activates the rule conclusions c_i according to the degrees of the rule premises aggregated in the previous layer. The second element of the layer carries out auxiliary calculations for the subsequent defuzzification of the result.</p>
      <p>Layer 4, which consists of one element, carries out the defuzzification of the output variable.</p>
      <p>The parametric layers of the network are the first and the third, where the adjustable parameters are a_{ij}, b_{ij} and c_i.</p>
      <p>The Wang-Mendel fuzzy neural production network has a multi-layered feedforward structure whose output value can be changed by correcting the parameters of the layer elements, which allows using the error backpropagation algorithm for training the network.</p>
      <p>Let the training selection be (x_1^{(k)}, x_2^{(k)}, ..., x_m^{(k)}, y^{(k)}), k = 1, ..., K, where x_1^{(k)}, x_2^{(k)}, ..., x_m^{(k)} are the values of the input variables x_1, x_2, ..., x_m, and y^{(k)} is the reference value of the output variable y in example k.</p>
      <p>We will now consider the procedure for correcting the values of c_i, a_{ij} and b_{ij}.</p>
      <p>Stage 1. For each example (x_1^{(k)}, x_2^{(k)}, ..., x_m^{(k)}, y^{(k)}), k = 1, ..., K, of the training selection the value of the output variable y'^{(k)} is calculated.</p>
      <p>Stage 2. The error function is calculated for each example of the training selection:
E^{(k)} = \frac{1}{2} \left( y'^{(k)} - y^{(k)} \right)^2, \quad k = 1, \ldots, K \quad (5)</p>
      <p>Stage 3. The values c_i, a_{ij} and b_{ij} are corrected for every i and every example k of the training selection, proceeding from the following relations:
c_i(t+1) = c_i(t) - \eta \, \frac{\left( y'^{(k)} - y^{(k)} \right) \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)}{\sum_{i=1}^{n} \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)} \quad (6)
a_{ij}(t+1) = a_{ij}(t) - \eta \, \frac{2 \left( x'^{(k)}_j - a_{ij} \right) \left( y'^{(k)} - y^{(k)} \right) \left( c_i - y'^{(k)} \right) \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)}{b_{ij}^2 \sum_{i=1}^{n} \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)} \quad (7)
b_{ij}(t+1) = b_{ij}(t) - \eta \, \frac{2 \left( x'^{(k)}_j - a_{ij} \right)^2 \left( y'^{(k)} - y^{(k)} \right) \left( c_i - y'^{(k)} \right) \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)}{b_{ij}^3 \sum_{i=1}^{n} \prod_{j=1}^{m} \exp\left( -\left( \frac{x'^{(k)}_j - a_{ij}}{b_{ij}} \right)^2 \right)} \quad (8)
where t is the number of the training iteration and \eta \in (0, 1) is the training speed coefficient.</p>
      <p>Stage 4. The values of the output variable y'^{(k)} are recalculated for each example of the selection, and the average total error over all examples of the training selection is calculated and compared to an established threshold \varepsilon:
E = \frac{1}{K} \sum_{k=1}^{K} \left( y'^{(k)} - y^{(k)} \right)^2 \le \varepsilon \quad (9)</p>
      <p>If this condition is satisfied, the network is considered to be correctly trained. Otherwise, a transition to stage 3 is carried out and the adjustment is repeated.</p>
      <p>The functionality of the automated system for forecasting the dynamics of a person's emotional condition includes the function of forecasting the indicators of the person's emotional state. To implement this function the Wang-Mendel fuzzy neural production network is used.</p>
      <p>Effective functioning of a neural network requires an empirical selection of the parameters defining the structure of the network and the parameters of its training algorithm. The forecast window (i.e., the number of inputs of the neural network) is defined by the system development task. The parameter of the neural network available for control is the training coefficient.</p>
      <p>To check the working capacity of the network and to select the necessary parameters, the function below is used:
f(x) = 0.1 \sin(0.02x) + 0.2 \sin(0.03x) + 0.6 \sin(0.9x) + 1.11 \sin(0.19x) + 2.3 \sin(0.37x) \quad (10)
Its graph and the results of the forecast made by the neural network are represented in figure 1. The function highlighted in blue is the one on which training was based; the red color corresponds to the values predicted by the neural network after training. The green portion of the function's domain was also used for training; the rest of the domain was used as a test selection for determining the quality of the forecast on unknown data. Training was carried out until the average total error (formula 9) reached 0.01.</p>
      <p>For the selection of the training coefficient value, the dependence of the number of iterations required to reach the training error threshold of 0.01 (the training speed) and of the error on the test selection upon the value of the training coefficient has been studied. The results are given in table 1.</p>
      <table-wrap id="table-1">
        <label>Table 1</label>
        <table>
          <thead>
            <tr><th>Training coefficient</th><th>Number of iterations</th><th>Error</th></tr>
          </thead>
          <tbody>
            <tr><td>0.1</td><td>319</td><td>0.0126977</td></tr>
            <tr><td>0.2</td><td>154</td><td>0.0115617</td></tr>
            <tr><td>0.3</td><td>113</td><td>0.0122632</td></tr>
            <tr><td>0.4</td><td>87</td><td>0.0126165</td></tr>
            <tr><td>0.5</td><td>73</td><td>0.0120938</td></tr>
            <tr><td>0.6</td><td>63</td><td>0.0117784</td></tr>
            <tr><td>0.7</td><td>61</td><td>0.0116801</td></tr>
            <tr><td>0.8</td><td>60</td><td>0.0134587</td></tr>
            <tr><td>0.9</td><td>56</td><td>0.0122873</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The graph, on which the value of the test error has been multiplied by 10,000 for descriptive reasons, is given in figure 2.</p>
      <p>It is evident from the graph that the number of iterations considerably decreases as the training coefficient increases, while the error slightly grows at large coefficient values. Moreover, if the training coefficient is too large, the training algorithm may become unstable. Therefore, it is expedient to choose a training coefficient equal to 0.7.</p>
      <p>To study the quality of the forecast, the dependence of the error on the test selection upon the number of forecast steps has been investigated. The results are presented in table 2.</p>
      <p>For a control test, tests taken by several people were held in order to collect data for training the neural network. The structure of the control test included taking the test, viewing the test results and analyzing the graph of the change of the emotional state indicators together with the forecast.</p>
      <p>The testee is offered to choose the most preferred color by clicking on the desired selection. After the choice is made, the chosen color is removed from the choice panel. After all the colors have been selected, a message about the successful completion of the test is shown to the testee.</p>
      <p>In figure 4 the graph displays the results of the choices one after another: a standard color, the code of the standard color, the corresponding tags, and the stress and alarm tags. In a separate field the psychologist can leave comments on this test in the form of saved text.</p>
      <p>In order to view the graph of the dynamics of the emotional indicators, it is necessary to press the 'Graph' button; a corresponding window then pops up (figure 5). The left part of the graph represents the log of changes of the chosen indicator; in the right part, green color indicates the forecast of the change of this indicator three steps ahead.</p>
      <p>For the control test, basic training of the neural network on the tests of three testees was performed. The total number of tests was 19: 16 of them entered the training selection and 3 were used in the test mode.</p>
    </sec>
    <sec id="sec-2">
      <title>Conclusion</title>
      <p>The proposed method shows the real possibility of designing an intelligent system to assess and predict the efficiency of psychotherapeutic influence at early stages of the interaction between a psychotherapist and a patient. The method, based on classic psychodiagnostic procedures, is easy for both patients and therapists to use.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Wallerstein</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robbins</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sargent</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luborsky</surname>
            <given-names>L</given-names>
          </string-name>
          .
          <article-title>The Psychotherapy Research Project of the Menninger Foundation: Rationale, Method and Sample Use</article-title>
          .
          <source>First Report. Bull. of Menninger Clin.</source>
          ,
          <year>1956</year>
          ;
          <volume>20</volume>
          :
          <fpage>221</fpage>
          -
          <lpage>278</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Nemchin</surname>
            <given-names>TA</given-names>
          </string-name>
          .
          <article-title>Conditions of psychological tension</article-title>
          . L.,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Hofler</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gloster</surname>
            <given-names>AT</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoyer</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Causal effects in psychotherapy: Counterfactuals counteract overgeneralization</article-title>
          .
          <source>Psychotherapy Research</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kazdin</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Methodology, design, and evaluation in psychotherapy research. Handbook of psychotherapy and behavioral change</article-title>
          . 4th ed.,
          <year>1994</year>
          :
          <fpage>19</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Spence</surname>
            <given-names>D.</given-names>
          </string-name>
          <article-title>Traditional case studies and prescriptions for improving them. Psychodynamic treatment research. A handbook for clinical practice</article-title>
          . Wiley,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Luborsky</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crits-Christoph</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mintz</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auerbach</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Who Will Benefit From Psychotherapy</article-title>
          . N.Y.: Basic Books,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Spirtes</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glymour</surname>
            <given-names>CN</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scheines</surname>
            <given-names>R.</given-names>
          </string-name>
          .
          <source>Causation, Prediction, and Search</source>
          . Springer-Verlag, New York,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Zinchenko</surname>
            <given-names>EA</given-names>
          </string-name>
          .
          <article-title>Method of expert visual definition of workers' emotional conditions in production</article-title>
          ,
          <year>1983</year>
          :
          <fpage>59</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Karmanov</surname>
            <given-names>AA</given-names>
          </string-name>
          .
          <article-title>Methodology of diagnostics of key parameters of a mental state in Lüscher's test</article-title>
          . URL: http://koob.ru.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Prokhorov</surname>
            <given-names>AO</given-names>
          </string-name>
          .
          <article-title>Mental states and their manifestations in educational process</article-title>
          .
          <source>Kazan</source>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Sobchik</surname>
            <given-names>LN</given-names>
          </string-name>
          .
          <article-title>Method of color choices: a modification of Lüscher's eight-color test. A practical guide</article-title>
          .
          <source>SPb.: Speech</source>
          ,
          <year>2010</year>
          ; 128 p.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Manski</surname>
            <given-names>C</given-names>
          </string-name>
          .
          <article-title>Identification for Prediction and Decision</article-title>
          . Harvard University Press, Cambridge, Massachusetts,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Bareinboim</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brito</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearl</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Local characterizations of causal bayesian networks</article-title>
          .
          <source>Lecture Notes in Artificial Intelligence</source>
          ,
          <year>2012</year>
          ;
          <volume>7205</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Pearl</surname>
            <given-names>J.</given-names>
          </string-name>
          <article-title>The deductive approach to causal inference</article-title>
          .
          <source>Journal of Causal Inference</source>
          ,
          <year>2014</year>
          ;
          <volume>2</volume>
          (
          <issue>2</issue>
          ):
          <fpage>115</fpage>
          -
          <lpage>130</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Osovsky</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Neural networks for information processing</article-title>
          .
          <source>Finance and Statistics</source>
          ,
          <year>2002</year>
          ; 344 p.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Borisov</surname>
            <given-names>VV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kruglov</surname>
            <given-names>VV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fedulov</surname>
            <given-names>AS</given-names>
          </string-name>
          .
          <article-title>Fuzzy models and networks</article-title>
          .
          <source>The hot line - Telecom</source>
          ,
          <year>2007</year>
          ; 284 p.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Didelez</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kreiner</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keiding</surname>
            <given-names>N.</given-names>
          </string-name>
          <article-title>Graphical models for inference under outcome-dependent sampling</article-title>
          .
          <source>Statistical Science</source>
          ,
          <year>2010</year>
          ;
          <volume>25</volume>
          (
          <issue>3</issue>
          ):
          <fpage>368</fpage>
          -
          <lpage>387</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Bareinboim</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearl</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>A general algorithm for deciding transportability of experimental results</article-title>
          .
          <source>Journal of Causal Inference</source>
          ,
          <year>2013</year>
          ;
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <fpage>107</fpage>
          -
          <lpage>134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Bareinboim</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearl</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Controlling selection bias in causal inference</article-title>
          .
          <source>In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS)</source>
          ,
          <source>JMLR, April 21-23</source>
          ,
          <year>2012</year>
          :
          <fpage>100</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>