<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring Psychological Data by Integrating Explanatory and Predictive Approaches through Artificial Neural Networks: A Brief Overview of Current Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Monica Casella</string-name>
          <email>monica.casella@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pasquale Dolce</string-name>
          <email>pasquale.dolce@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raffaella Esposito</string-name>
          <email>raffaella.esposito22@studenti.unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Luongo</string-name>
          <email>maria.luongo@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Marocco</string-name>
          <email>davide.marocco@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicola Milano</string-name>
          <email>nicola.milano@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michela Ponticorvo</string-name>
          <email>michela.ponticorvo@unina.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta Simeoli</string-name>
          <email>roberta.simeoli@unina.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Public Health, University of Naples Federico II</institution>
          ,
          <addr-line>Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>NAC “Orazio Miglino” Lab, Department of Humanistic Studies, University of Naples Federico II</institution>
          ,
          <addr-line>Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Neapolisanit S.R.L. Rehabilitation Center</institution>
          ,
          <addr-line>Ottaviano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Traditional psychometric data analysis techniques belong to an explanatory approach and often require several assumptions on data. In contrast, Machine Learning (ML) techniques belong to a predictive approach and necessitate mild assumptions on input data. These predictive techniques aim to identify data patterns and generate accurate predictions of output values based on input values of new observations. In recent years, several works proposed the integration of explanatory and predictive approaches. This paper provides an overview of the works carried out at the Natural and Artificial Cognition “Orazio Miglino” Lab, discussing various applications of Artificial Neural Networks in psychology. The discussed studies highlight the promising outcomes of integrating machine learning techniques into traditional psychometric data analysis. Specifically, due to their flexible assumptions on input data, their ability to handle different types of input data, and their ability to model complex and nonlinear relationships between variables, the integration of ML techniques could complement and enhance psychological data analysis.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Psychometric data analysis has traditionally
relied on explanatory modelling techniques,
which aim to test hypothesized relationships
between variables and study intercorrelations
using methods such as correlation or latent
variable analysis [1]. However, these techniques
assume that the relationships among variables are
linear and can be limited when this assumption is
violated. Several studies have indicated that
ignoring nonlinear relationships among variables
can lead to biased results and misleading
interpretations [2, 3].</p>
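      <p>As a toy numerical illustration (simulated data, not drawn from the cited studies), a variable can be perfectly determined by another and still show a near-zero linear correlation, which is exactly the situation in which purely linear techniques mislead:</p>

```python
import numpy as np

# Simulated example: y is a deterministic (but nonlinear) function of x,
# yet the Pearson correlation between them is close to zero.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x ** 2  # perfectly predictable from x, but not linearly

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")  # near zero despite perfect dependence
```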
      <p>In contrast, Machine Learning (ML)
techniques belong to a predictive approach, where
researchers aim to identify patterns in data and
generate accurate predictions of output values
based on input values of new observations [4].
Unlike traditional methods, ML techniques
require milder assumptions on input data and can
handle nonlinear relationships among variables.</p>
      <p>Although explanatory modelling remains
the dominant approach, several studies have
proposed an integration of Machine Learning
techniques, such as Artificial Neural Networks, to
complement traditional psychometric methods
and improve research efficiency, facilitate model
performance evaluation, and increase
interpretability [5, 6]. In particular, Artificial
Neural Networks have been applied in various
areas of psychology, including the validation and
development of psychometric tests, the
development of short forms of existing
psychometric tests, and the identification of
disorders such as autism. The following section
provides an overview of these applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Applications</title>
      <p><bold>2.1. Artificial Neural Networks for Validating Psychometric Tests</bold></p>
      <p>The study by Dolce et al. [7] presents a novel
procedure that integrates explanatory and
predictive modelling to develop new
psychometric questionnaires based on
psychological and neuroscientific theories. The
procedure involves a series of steps that select the
most predictive items while preserving the
factorial structure of the scale. This approach
combines both the explanatory power of the
theory and the predictive power of modern
computational techniques, such as exploratory
data analysis and artificial neural networks
(ANNs).</p>
      <p>By integrating these techniques, the study aims
to derive theoretical insights on the characteristics
of the items selected and their conformity with the
theoretical framework of reference. Additionally,
the procedure selects those items that are most
relevant in terms of prediction by considering
their relationship with the actual
psychopathological diagnosis. This approach
helps to construct a diagnostic tool that conforms
both to the theory and to the individual
characteristics of the population under study.</p>
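      <p>The logic of such a procedure can be sketched as follows. This is a minimal illustration on simulated data, not the authors' actual implementation: a small feedforward network is trained to predict a binary diagnosis from item responses, and items are then ranked by how much randomly permuting each one degrades the network's accuracy:</p>

```python
import numpy as np

# Minimal illustration (simulated data, hypothetical setup): predict a
# binary diagnosis from Likert-style item responses with a small MLP,
# then rank items by a simple permutation-importance criterion.
rng = np.random.default_rng(0)
n, n_items = 400, 10
X = rng.integers(0, 5, (n, n_items)).astype(float)   # item responses
y = (X[:, :3].sum(axis=1) > 6).astype(float)         # only items 0-2 matter

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer, full-batch gradient descent on the cross-entropy loss
W1 = rng.normal(0, 0.1, (n_items, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, 8); b2 = 0.0
lr = 0.2
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)
    g = (p - y) / n                       # d(loss)/d(logits)
    gH = np.outer(g, W2) * (1 - H ** 2)   # backprop through tanh
    W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)

def accuracy(Xm):
    return float(((sigmoid(np.tanh(Xm @ W1 + b1) @ W2 + b2) > 0.5) == y).mean())

base = accuracy(X)
drops = []
for j in range(n_items):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy item j's information
    drops.append(base - accuracy(Xp))
ranked = np.argsort(drops)[::-1]
print("baseline accuracy:", base, "- most predictive items:", ranked[:3])
```

      <p>On this simulated scale, the items constructed to carry diagnostic signal produce the largest accuracy drops when permuted, which is the criterion a selection procedure of this kind can exploit.</p>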
      <p>The proposed procedure involves constructing an ANN capable of predicting the diagnosis of a group of subjects based on their responses to a questionnaire. The most predictive items are then automatically selected while preserving the factorial structure of the scale. Results of the study indicate that the machine learning procedure selected a set of items that drastically improved the prediction accuracy of the model. Specifically, the selected set of 167 items resulted in a prediction accuracy of 88.5%, compared to the original set of 260 items, which resulted in a prediction accuracy of 74.4%. Moreover, the procedure reduced the redundancy of the items and eliminated those with less consistency. Overall, the procedure presents a promising approach for developing psychometric questionnaires that both conform to theory and accurately predict psychopathological diagnoses.</p>
      <p><bold>2.2. Artificial Neural Networks for Dimensionality Reduction and Short-Form Development of Psychometric Tests</bold></p>
      <p>Determining the number of dimensions in a dataset requires researchers to make several important decisions: in particular, the choice of the extraction method and of the number of dimensions to retain are considered among the most critical in the development of psychometric tests [8].</p>
      <p>One of the most widely used statistical techniques for dimensionality reduction in test construction is principal component analysis (PCA) [9]. However, the assumptions underlying PCA, in particular the linearity of the relationships between variables, are not always verifiable in the psychological domain. Therefore, PCA may not always be the most appropriate analysis method. An interesting alternative to PCA is the autoencoder.</p>
      <p>Autoencoders were first introduced by LeCun in 1987 [10] and further developed by Baldi and Hornik in 1989 [11]. They are a particular type of artificial neural network with the same number of outputs as inputs and a smaller number of hidden units, trained to learn the identity function, so that the output is as similar as possible to the input. Perfect reconstruction of the input vectors is not possible, since the central layer is smaller than the others; for this reason, it is in this layer that the information most relevant for reconstructing the input is encoded.</p>
      <p>In its simplest form, the network's weights are trained to minimise the squared reconstruction error, and because of this learning strategy it can be demonstrated that a linear autoencoder with n hidden units converges to the n-dimensional PCA subspace [12]. However, there are important differences between these methods: PCA is a linear transformation and assumes the normality of the observed data, whereas autoencoders can deal with nonlinear relations [13] and make no assumptions about the data distribution. On the other hand, the principal components are orthogonal and sorted in order of decreasing variability, properties that an autoencoder's hidden units do not share. The autoencoder is thus a more flexible and powerful method, because it can learn more complex relations present in the data, but, unlike PCA, it is less interpretable.</p>
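      <p>This correspondence can be checked numerically. The sketch below (simulated data; a plain NumPy illustration, not code from the cited works) trains a linear autoencoder by gradient descent and compares its reconstructions with those of PCA restricted to the same number of dimensions:</p>

```python
import numpy as np

# Simulated check: a linear autoencoder trained by gradient descent ends up
# reconstructing data through the same k-dimensional subspace as PCA, even
# though its weights are neither orthogonal nor ordered by explained variance.
rng = np.random.default_rng(1)
n, p, k = 500, 6, 2
latent = rng.normal(size=(n, k))                  # two dominant directions
X = latent @ rng.normal(size=(k, p)) + 0.05 * rng.normal(size=(n, p))
X -= X.mean(axis=0)                               # centre, as PCA assumes

# PCA reconstruction onto the top-k principal subspace
_, _, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:k].T @ Vt[:k]

# linear autoencoder: encoder We (p x k), decoder Wd (k x p)
We = rng.normal(0, 0.1, (p, k))
Wd = rng.normal(0, 0.1, (k, p))
lr = 0.02
for _ in range(8000):
    Z = X @ We                   # bottleneck code
    R = Z @ Wd - X               # reconstruction residual
    We -= lr * (X.T @ (R @ Wd.T)) / n
    Wd -= lr * (Z.T @ R) / n

X_ae = X @ We @ Wd
gap = np.linalg.norm(X_ae - X_pca) / np.linalg.norm(X)
print(f"relative AE-vs-PCA reconstruction gap: {gap:.4f}")
```

      <p>The trained encoder/decoder pair is an arbitrary basis of the principal subspace rather than the principal components themselves, which is the interpretability trade-off noted above.</p>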
      <p>A study by Pham et al. [14] proposes a PCA-autoencoder for image recognition. This PCA-autoencoder has an independent latent space and ordered internal nodes, making PCA's main characteristics explicit. When used with only linear activations and a single hidden layer, the PCA-autoencoder proposed in that study is fully equivalent to PCA [15].</p>
      <p>In a recent study by Casella et al. [16], autoencoders have been applied to the development of short forms of psychological tests. Despite the potential advantages of using shorter tests, their development presents several limitations: firstly, it is a relatively laborious process, and secondly, researchers usually consider only a small part of the possible short forms of a test. In this context, machine learning techniques [17, 18] and, in this case, neural networks can help automate and optimize this process.</p>
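      <p>One way such an automated search could look is sketched below (simulated data; an illustration of the general idea rather than the exact procedure of the cited study). A linear autoencoder stands in for the trained network, and every candidate short form is scored by how well the full test can be reconstructed from its items alone:</p>

```python
import numpy as np
from itertools import combinations

# Hypothetical illustration: an 8-item test with two latent factors, each
# measured by two strong and two weak items. Every 4-item candidate short
# form is scored by how well the FULL test is reconstructed from it.
rng = np.random.default_rng(2)
n, items, k = 600, 8, 2
F = rng.normal(size=(n, k))                    # two latent dimensions
load = np.zeros((k, items))
load[0, :4] = [0.9, 0.8, 0.4, 0.3]             # factor 1: strong and weak items
load[1, 4:] = [0.9, 0.8, 0.4, 0.3]             # factor 2: strong and weak items
X = F @ load + 0.3 * rng.normal(size=(n, items))
X -= X.mean(axis=0)

# a linear autoencoder trained to optimality equals the top-k PCA projection,
# so we use that closed form directly as the "trained" encoder/decoder
_, _, Vt = np.linalg.svd(X, full_matrices=False)
encode, decode = Vt[:k].T, Vt[:k]

def short_form_error(subset):
    """Mean squared error of reconstructing the full test when only the
    items in `subset` are observed (the rest are set to their mean, 0)."""
    Xm = np.zeros_like(X)
    Xm[:, list(subset)] = X[:, list(subset)]
    return float(np.mean((Xm @ encode @ decode - X) ** 2))

best = min(combinations(range(items), 4), key=short_form_error)
print("best 4-item short form:", best)  # expected: the strong items of each factor
```

      <p>Because the bottleneck has as many units as the test has dimensions, the winning subset keeps items from both factors, mirroring the dimensionality-preservation point made below.</p>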
      <p>In particular, the results of this work show that an autoencoder trained on an existing long form of a psychometric test can be used to select, among the many possible alternatives, the short form that best reconstructs the original test responses. Furthermore, keeping the number of neurons in the internal layer equal to the number of dimensions of the original test helps in choosing short forms that preserve the original test's dimensionality.</p>
      <p><bold>2.3. Artificial Neural Networks to Identify Children with Autism Spectrum Disorder</bold></p>
      <p>Another recent study by Simeoli et al. [19]
suggests that motor abnormalities can provide a
computational marker for autism, which could
lead to more objective and efficient assessment
and diagnosis processes. The researchers used a
software tool on a smart tablet device to capture
detailed information about children's motor
patterns, comparing the movement trajectories of
autistic children and typically developing children
during a cognitive task.</p>
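      <p>By way of illustration, kinematic descriptors of the kind such a tool might compute can be derived from raw touch trajectories; the features below are hypothetical examples, not the actual variables used in the cited study:</p>

```python
import numpy as np

# Hypothetical kinematic features extracted from a touch trajectory
# (feature names and definitions are illustrative assumptions).
def trajectory_features(xy, dt=0.01):
    """xy: (T, 2) array of touch positions sampled every dt seconds."""
    steps = np.diff(xy, axis=0)
    speed = np.linalg.norm(steps, axis=1) / dt
    accel = np.diff(speed) / dt
    return {
        "path_length": float(np.linalg.norm(steps, axis=1).sum()),
        "mean_speed": float(speed.mean()),
        "speed_cv": float(speed.std() / speed.mean()),   # smoothness proxy
        "mean_abs_accel": float(np.abs(accel).mean()),
    }

# a straight, constant-speed swipe vs. a jittery one of equal duration
t = np.linspace(0, 1, 101)[:, None]
smooth = np.hstack([t, t])                        # straight diagonal
rng = np.random.default_rng(3)
jittery = smooth + 0.02 * rng.normal(size=smooth.shape)

f_smooth = trajectory_features(smooth)
f_jittery = trajectory_features(jittery)
print(f_smooth["speed_cv"], f_jittery["speed_cv"])
```

      <p>Feature vectors of this kind, one per movement, are what a downstream classifier can consume to separate the two groups.</p>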
      <p>Machine Learning analysis of the motor
patterns identified autism with 93% accuracy,
indicating that the disorder can be
computationally identified.</p>
      <p>The researchers used an Artificial Neural Network (ANN) to recognize autism motor signatures by processing complex and nonlinear relationships between variables. In particular, a feedforward multilayer perceptron was used to classify individuals with autism spectrum disorder and individuals with typical development. This research highlights the potential of technology in enhancing assessment and diagnosis processes for autism, as well as providing a new starting point for rehabilitation treatments.</p>
      <p><bold>3. Conclusion and Future Research</bold></p>
      <p>This paper discusses the use of machine
learning techniques, specifically artificial neural
networks (ANNs), in psychometric data analysis.
While traditional psychometric methods rely on
explanatory modelling, machine learning
techniques offer a complementary approach
where researchers aim to identify patterns in data
and generate accurate predictions of output values
based on input values of new observations.</p>
      <p>Future research in this area will explore more powerful autoencoders, specifically Variational Autoencoders (VAE) [20], which encourage the latent space to follow a predefined distribution (e.g., a normal distribution). A VAE maps similar inputs close together in the latent space, allowing for greater interpretability of the latent space and a more direct comparison with latent variable analysis techniques such as Factor Analysis [21, 22]. Thanks to these characteristics, this type of neural network combines the predictive power typical of ML techniques with the interpretability of the explanatory approach, unifying the two approaches in a single method.</p>
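      <p>The ingredient that distinguishes a VAE from a plain autoencoder is its training objective: reconstruction error plus a Kullback-Leibler term that pulls each latent code toward the prior. A minimal numerical sketch, assuming the usual Gaussian encoder and standard normal prior:</p>

```python
import numpy as np

# Minimal sketch of the VAE objective (assumed Gaussian encoder and prior):
# reconstruction error plus a closed-form KL term on the latent codes.
def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, 1) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def vae_loss(x, x_hat, mu, logvar):
    recon = np.sum((x - x_hat) ** 2, axis=-1)  # Gaussian log-lik up to constants
    return float(np.mean(recon + gaussian_kl(mu, logvar)))

# the KL term is zero exactly when the encoder already outputs the prior ...
mu = np.zeros((4, 2)); logvar = np.zeros((4, 2))
print(gaussian_kl(mu, logvar))          # all zeros
# ... and grows as latent codes drift away from it
print(gaussian_kl(mu + 2.0, logvar))    # 0.5 * 2^2 per dim, 2 dims = 4.0 each
```

      <p>It is this regularization of the latent space toward a known distribution that makes the learned dimensions more comparable to the latent variables of Factor Analysis.</p>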
      <p>In particular, future research will investigate in more depth the similarities and differences between VAE and other well-known dimensionality reduction techniques such as Factor Analysis.</p>
      <p>Furthermore, VAE will be applied to analyze movements and to classify individuals with autism spectrum disorder and individuals with typical development.</p>
      <p>One of the key advantages of Machine
Learning techniques is their ability to generate
accurate predictions based on new data inputs.
This is particularly important in psychology,
where accurate prediction of outcomes is critical
for making informed decisions and guiding
effective interventions. By using Machine
Learning algorithms to identify patterns in data,
researchers can develop more accurate models of
human behaviour and cognition, which can help
guide the development of new therapies and
interventions.</p>
      <p>Another advantage of Machine Learning techniques is their ability to handle large datasets and complex relationships among variables. This is particularly useful in fields like psychology, where datasets are often complex and difficult to analyze using traditional statistical techniques. Identifying patterns in such data can give researchers new insights into the underlying mechanisms of human behaviour and cognition, which can help guide the development of new theories and models.</p>
      <p>As discussed in this contribution, Artificial Neural Networks, in particular, have shown great promise in a variety of psychological applications, from the development and validation of psychometric tests to the identification of neurodevelopmental disorders such as autism. By using these powerful algorithms, researchers can gain new insights into the underlying mechanisms of these disorders, leading to more effective therapies and interventions.</p>
      <p>In summary, the integration of Machine Learning techniques into traditional psychometric data analysis could provide new ways to analyze and interpret data, enabling the integration of explanatory and predictive modelling.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1"><mixed-citation>[1] L. Breiman. 2001. Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author). Statistical Science 16, 3 (August 2001), 199–231. DOI:https://doi.org/10.1214/ss/1009213726</mixed-citation></ref>
      <ref id="ref2"><mixed-citation>[2] D. J. Bauer. 2005. The Role of Nonlinear Factor-to-Indicator Relationships in Tests of Measurement Equivalence. Psychological Methods 10 (2005), 305–316. DOI:https://doi.org/10.1037/1082-989X.10.3.305</mixed-citation></ref>
      <ref id="ref3"><mixed-citation>[3] W. C. M. Belzak and D. J. Bauer. 2019. Interaction effects may actually be nonlinear effects in disguise: A review of the problem and potential solutions. Addictive Behaviors 94 (July 2019), 99–108. DOI:https://doi.org/10.1016/j.addbeh.2018.09.018</mixed-citation></ref>
      <ref id="ref4"><mixed-citation>[4] G. Shmueli. 2010. To Explain or to Predict? Statistical Science 25, 3 (August 2010), 289–310. DOI:https://doi.org/10.1214/10-STS330</mixed-citation></ref>
      <ref id="ref5"><mixed-citation>[5] T. Yarkoni and J. Westfall. 2017. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspectives on Psychological Science 12, 6 (November 2017), 1100–1122. DOI:https://doi.org/10.1177/1745691617693393</mixed-citation></ref>
      <ref id="ref6"><mixed-citation>[6] J. M. Hofman, D. J. Watts, S. Athey, F. Garip, et al. 2021. Integrating explanation and prediction in computational social science. Nature 595, 7866 (July 2021), 181–188. DOI:https://doi.org/10.1038/s41586-021-03659-0</mixed-citation></ref>
      <ref id="ref7"><mixed-citation>[7] P. Dolce, D. Marocco, M. Nelson Maldonato, and R. Sperandeo. 2020. Toward a Machine Learning Predictive-Oriented Approach to Complement Explanatory Modeling. Frontiers in Psychology 11 (2020). Retrieved March 27, 2023 from https://www.frontiersin.org/articles/10.3389/fpsyg.2020.00446</mixed-citation></ref>
      <ref id="ref8"><mixed-citation>[8] M. Casella, P. Dolce, M. Ponticorvo, and D. Marocco. 2021. Autoencoders as an [...] psychometric models. In PSYCHOBIT.</mixed-citation></ref>
      <ref id="ref9"><mixed-citation>[9] H. Hotelling. 1933. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 24 (1933), 417–441. DOI:https://doi.org/10.1037/h0071325</mixed-citation></ref>
      <ref id="ref10"><mixed-citation>[10] Y. LeCun. 1987. Connexionist learning models. Ph.D. thesis, Université Pierre et Marie Curie (Paris).</mixed-citation></ref>
      <ref id="ref11"><mixed-citation>[11] P. Baldi and K. Hornik. 1989. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks 2, 1 (1989), 53–58. DOI:https://doi.org/10.1016/0893-6080(89)90014-2</mixed-citation></ref>
      <ref id="ref12"><mixed-citation>[12] C. M. Bishop and N. M. Nasrabadi. 2006. Pattern Recognition and Machine Learning. Springer.</mixed-citation></ref>
      <ref id="ref13"><mixed-citation>[13] M. A. Kramer. 1991. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal 37, 2 (1991), 233–243. DOI:https://doi.org/10.1002/aic.690370209</mixed-citation></ref>
      <ref id="ref14"><mixed-citation>[14] C. Pham, S. Ladjal, and A. Newson. 2022. PCA-AE: Principal Component Analysis Autoencoder for Organising the Latent Space of Generative Networks. Journal of Mathematical Imaging and Vision 64, 5 (June 2022), 569–585. DOI:https://doi.org/10.1007/s10851-022-01077-z</mixed-citation></ref>
      <ref id="ref15"><mixed-citation>[15] M. Casella, P. Dolce, M. Ponticorvo, and D. Marocco. 2022. From Principal Component Analysis to Autoencoders: a comparison on simulated data from psychometric models. In 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), 377–381. DOI:https://doi.org/10.1109/MetroXRAINE54828.2022.9967686</mixed-citation></ref>
      <ref id="ref16"><mixed-citation>[16] M. Casella, P. Dolce, M. Ponticorvo, N. Milano, and D. Marocco. In press. Artificial Neural Networks for Short-form Development of Psychometric Tests: A Study on Synthetic Populations Using Autoencoders. Educational and Psychological Measurement.</mixed-citation></ref>
      <ref id="ref17"><mixed-citation>[17] O. Gonzalez. 2021. Psychometric and Machine Learning Approaches to Reduce the Length of Scales. Multivariate Behavioral Research 56, 6 (November 2021), 903–919. DOI:https://doi.org/10.1080/00273171.2020.1781585</mixed-citation></ref>
      <ref id="ref18"><mixed-citation>[18] T. Yarkoni. 2010. The abbreviation of personality, or how to measure 200 personality scales with 200 items. Journal of Research in Personality 44, 2 (April 2010), 180–198. DOI:https://doi.org/10.1016/j.jrp.2010.01.002</mixed-citation></ref>
      <ref id="ref19"><mixed-citation>[19] R. Simeoli, N. Milano, A. Rega, and D. Marocco. 2021. Using Technology to Identify Children With Autism Through Motor Abnormalities. Frontiers in Psychology 12 (2021). Retrieved March 27, 2023 from https://www.frontiersin.org/articles/10.3389/fpsyg.2021.635696</mixed-citation></ref>
      <ref id="ref20"><mixed-citation>[20] D. P. Kingma and M. Welling. 2019. An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning 12, 4 (November 2019), 307–392. DOI:https://doi.org/10.1561/2200000056</mixed-citation></ref>
      <ref id="ref21"><mixed-citation>[21] Y. Huang and J. Zhang. 2022. Exploring Factor Structures Using Variational Autoencoder in Personality Research. Frontiers in Psychology 13 (2022). Retrieved March 27, 2023 from https://www.frontiersin.org/articles/10.3389/fpsyg.2022.863926</mixed-citation></ref>
      <ref id="ref22"><mixed-citation>[22] C. J. Urban and D. J. Bauer. 2021. A Deep Learning Algorithm for High-Dimensional Exploratory Item Factor Analysis. Psychometrika 86, 1 (March 2021), 1–29. DOI:https://doi.org/10.1007/s11336-021-</mixed-citation></ref>
    </ref-list>
  </back>
</article>