<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Journal of Research in Nursing 12 (2007)
461-469. URL: http://dx.doi.org/10.1177/1744987107079616. doi:10.1177/1744987107079616.
[25] A. Esposito</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1007/978-3-031-41264-6_</article-id>
      <title-group>
        <article-title>Emotion Recognition to Power Adaptability for More Effective Speech Therapies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Miriana Calvano</string-name>
          <email>miriana.calvano@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Curci</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Esposito</string-name>
          <email>andrea.esposito@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rosa Lanzilotti</string-name>
          <email>rosa.lanzilotti@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Piccinno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alfonso Pio Pretorino</string-name>
          <email>a.pretorino@studenti.uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Bari Aldo Moro</institution>
          ,
          <addr-line>Via E. Orabona 4, 70125, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of Pisa</institution>
          ,
          <addr-line>Largo B. Pontecorvo 3, 56127, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>58</volume>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Speech therapy is a medical area focused on diagnosing and treating speech impairments, which affect an individual's ability to communicate effectively and develop linguistic skills. While these difficulties can arise at any stage of life, they are most commonly observed in childhood. In this context, technology plays a crucial role in supporting therapists while also enhancing patient engagement, reducing boredom, and minimizing frustration during treatment. To address this challenge, the article explores the integration of AI-driven emotion recognition techniques in a web platform called e-SpeechT that supports the actors involved in speech therapy (i.e., therapist, caregiver, and patient) when creating, managing, and performing it. This research work proposes new functionalities that can be implemented to improve the effectiveness of the treatment while making the system more adaptable to patients' needs, skills, and emotional states, fostering a seamless human-AI symbiosis. The main objective of e-SpeechT is to ensure a more sustainable usage and development of resources while providing easier access to the treatment.</p>
      </abstract>
      <kwd-group>
        <kwd>Speech disorders</kwd>
        <kwd>Emotion recognition</kwd>
        <kwd>Symbiosis</kwd>
        <kwd>User Engagement</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The use of Artificial Intelligence (AI) is increasingly spreading in every aspect of society. AI can be
integrated into many fields, modifying and enhancing the interaction process, especially in critical
fields like medicine, where it can be employed to aid in diagnosing illnesses and to monitor and assess
a therapy’s progress [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Integrating technology in therapies allows therapists to continuously manage
the treatments while allowing patients to perform exercises at home [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In addition, it avoids wasting
resources on traveling to attend physical appointments at the physician’s office, while benefiting
from the advantages and guidance of technology and reducing the environmental impact [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. This
research work focuses on speech therapy, which is a field of medicine that aims at treating impairments
concerning linguistic abilities (e.g., speech, language, cognitive-communication) [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
e-Health has been changing how professionals and patients carry out their activities, enabling remote
treatments, monitoring progress more easily, and minimizing stress levels [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Integrating AI in this
context can revolutionize e-Health, elevating it to new heights. Regarding speech therapy, AI-based
functionalities can be used to automatically correct exercises or to speed up diagnoses [9]. However, it
is essential that AI systems comply with the legal and ethical requirements that delineate the standards to
follow to safeguard human rights while promoting sustainability when creating AI systems [10]. More
specifically, the AI Act represents the main reference point for building human-centric and compliant
systems that fulfill these objectives.
      </p>
      <p>This work revolves around a web application, called e-SpeechT, which aims to support speech therapies
by reducing the cognitive demand of tasks and improving their efficacy for professionals, patients (children
from 4 to 8 years old), and caregivers [11]. Although the system was developed in previous research
works, we propose a prototype of a new AI-based component in its architecture to overcome the
challenge of catching children’s attention while they carry out exercises for their treatment.
The higher objective of e-SpeechT is to contribute to a more sustainable development of healthcare,
providing continuous and more rapid access to treatments even to individuals who cannot physically
reach hospitals and therapists’ offices.</p>
      <p>The step forward that this research proposes concerns employing techniques to detect children’s
emotions in order to adapt to their emotional state and change the system’s behavior [12]. It is an AI-based feature
that fosters a symbiotic relationship between children and technology, supporting both parties without
replacing professionals’ expertise. We consider the main characteristics of symbiosis to create a
prototype of a new version of e-SpeechT, as described in Section 3 [13, 14]. The objective is to make
e-SpeechT fall into the category of Symbiotic AI (SAI) systems, a specialization of Human-Centered AI
(HCAI) [14] that highlights a bidirectional relationship between the two parties, where the strengths of
one compensate for the limitations of the other.</p>
      <p>The article is structured as follows: Section 2 describes e-SpeechT’s functionalities and explores
methods to assess emotions; Section 3 presents an overview of the new prototype, describing its
motivations and current state; Section 4 draws conclusions and outlines the future work of this study.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <title>2.1. e-SpeechT</title>
        <p>
          This section provides an overview of e-SpeechT, with a focus on its existing AI-driven
capabilities. First, the platform’s structure and architecture are described, detailing the roles of professionals,
patients, and caregivers in the therapeutic process. The second part then explores different methods for
assessing emotions that are being considered for the integration of the additional AI-based component.
e-SpeechT is a web application designed to support the treatment of speech disorders by assisting
therapists in managing therapies, enabling patients to complete assigned exercises, and allowing
caregivers to monitor treatment progress. The system has distinct areas, each tailored to the specific
needs of the involved actor (i.e., therapist, caregiver, and patient), as reported below.
<bold>Therapist</bold> They can create diagnoses, manage therapies, and monitor and assess their patients. The
system allows them to create exercises based on three default categories defined a priori with the aid of
a group of professionals working in this field (i.e., Naming images, Minimum pair recognition, and
Repetition of words). These exercises can be packed together in a series or administered as standalone
tasks. The system features a functionality that allows therapists to automatically correct exercises, tailoring
feedback based on the severity of the child’s impairment through Machine Learning (ML) techniques.
The feature relies on an error tolerance threshold, set by the therapist according to the severity of the
patient’s disorder (i.e., slight, moderate, and severe) [
          <xref ref-type="bibr" rid="ref4">11, 4</xref>
          ].
        </p>
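        <p>As a purely illustrative sketch of how such a threshold-based correction could work, the following Python fragment scores the child’s recognized answer against the expected word; the use of Levenshtein distance as the error measure, the threshold values per severity level, and all names are assumptions made for this example, not the actual e-SpeechT implementation.</p>
        <preformat>
# Illustrative sketch of threshold-based automatic correction (assumed logic,
# not the actual e-SpeechT code).

def edit_distance(a, b):
    """Levenshtein distance between the expected and the recognized word."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

# Hypothetical error-tolerance thresholds, one per severity level of the disorder.
TOLERANCE = {"slight": 0.15, "moderate": 0.30, "severe": 0.45}

def correct_exercise(expected, recognized, severity):
    """Accept the child's answer if its error rate stays within the tolerance
    chosen by the therapist for the severity of the diagnosed disorder."""
    error_rate = edit_distance(expected.lower(), recognized.lower()) / max(len(expected), 1)
    return TOLERANCE[severity] >= error_rate

# Example: a moderate disorder tolerates one wrong letter in a five-letter word.
print(correct_exercise("house", "hause", "moderate"))   # True
        </preformat>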
        <p><bold>Patients</bold> They are the subjects of speech therapy and, in this case, they are children aged from 4 to
8. They are given exercises to complete in order to improve their condition and address their speech
impairments [11]. Since e-SpeechT deals with children, it employs gamification elements to make them
feel more at ease and comfortable while carrying out therapies; this can increase engagement levels and
distract them from the seriousness of the activity, leading to more positive outcomes [15].
<bold>Caregivers</bold> Since patients are not self-sufficient and must be guided through this process, caregivers
play a crucial role in supporting children in performing the activities assigned by the therapist. They
act as intermediaries between the two parties, guiding the children while monitoring
their progress. They can also intervene in the User Interface (UI) of the patient’s side of the system,
customizing its appearance to make it more welcoming and adjusted to the child’s preferences [11].</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Techniques to assess children’s emotions</title>
        <p>Evaluating children’s motivation and monitoring their emotions when interacting with technology
can be a challenging activity. A rapid review of the literature was carried out to identify the main
techniques that can be employed in the context of this research: self-reports, behavioral observations,
physiological measurements, and the use of technological tools designed for emotion tracking [16]. An
overview of such techniques is provided below with respect to the extent to which they can support
the process of recognizing children’s emotions.</p>
        <p>• Self-Report Measures: these instruments are employed to assess children’s motivation and
emotional states when performing activities that require a cognitive effort. These tools often involve
questionnaires or surveys where children are asked to reflect on their feelings and engagement
levels. A crucial aspect to consider is the child’s age and cognitive development stage, to ensure
that they are not overwhelmed by the task [17].
• Behavioral Observations: direct observation of children’s behavior can provide insights into their
motivational and emotional states while performing their activities. The objects of the observation
are task persistence, facial expressions, and body language; it can be conducted by recording the
child while performing the activity, either by individuals or through automated AI-based systems [18].
To observe and track patients’ behaviors over time, longitudinal studies can be conducted during
which therapists and/or caregivers can collect observations by filling in a diary.
• Physiological Measurements: physiological data, including heart rate, skin conductance, and facial
muscle activity, can offer objective indicators of emotional arousal and motivation. Wearable
devices are increasingly used in educational settings to collect this data, providing real-time
insights into users’ emotional experiences [19].</p>
        <p>Implementing these methods requires careful consideration of ethical standards, especially concerning
children’s privacy and parents’ consent. In the context of this research, combining multiple assessment
approaches can provide a comprehensive understanding of children’s motivational and emotional
dynamics when carrying out activities, enabling the development of more engaging and effective educational tools.</p>
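        <p>As a purely illustrative example of how such complementary signals could be combined, the sketch below fuses a self-report score, an observed-behavior score, and a physiological arousal score into a single engagement estimate through a weighted average; the weights, score ranges, and names are assumptions made for the example, not values proposed by this study.</p>
        <preformat>
# Illustrative fusion of the three assessment channels discussed above.
# Weights and normalization choices are assumptions for the sketch only.
from dataclasses import dataclass

@dataclass
class AssessmentSnapshot:
    self_report: float     # Smileyometer-style answer mapped to [0, 1]
    observation: float     # coded behavioral observation mapped to [0, 1]
    physiological: float   # normalized arousal from a wearable, in [0, 1]

def engagement_score(s, weights=(0.4, 0.4, 0.2)):
    """Weighted combination of the three channels; higher means more engaged."""
    w_report, w_obs, w_phys = weights
    total = w_report + w_obs + w_phys
    return (w_report * s.self_report
            + w_obs * s.observation
            + w_phys * s.physiological) / total

# Example: a cheerful self-report, neutral observed behavior, low arousal.
print(engagement_score(AssessmentSnapshot(0.8, 0.5, 0.3)))
        </preformat>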
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposal of the Emotion Recognition Feature</title>
      <p>Following the Human-Centered Design (HCD) approach [20], an interview was carried out with two speech
therapists. They both had been involved in previous user studies concerning e-SpeechT and, when
providing their thoughts about their experience with the system, they raised the need for real-time
adaptation of the therapy to the children’s emotional state in order to keep them engaged. These
interviews acted as a springboard for the next development steps of e-SpeechT, suggesting that the
feature proposed by the therapists could significantly improve its medical effectiveness. The proposal
consists of a new component, powered by an AI model, that can fulfill the purpose of recognizing the
children’s emotions. This section illustrates the prototypes that were created, focusing on the User
Interfaces (UIs) that belong to this functionality, which fosters the system’s adaptability while enabling therapists to
monitor the level of attention during the treatment, helping them refine their interventions.</p>
      <sec id="sec-3-1">
        <title>3.1. AI-Act Driven Design</title>
        <p>The characteristics that we consider for the creation of new components are: Transparency, Fairness,
Automation Level, and Protection [13, 21, 22]. More specifically, Transparency ensures AI operations
are understandable through explainability and interpretability, while Fairness promotes unbiased and
equitable AI behaviors. The Automation Level principle aims at balancing human control with
automation to keep humans always on- or in-the-loop. At the same time, Protection safeguards users’ privacy,
safety, and security. SAI reinforces Trustworthiness and Sustainability as characteristics of HCAI, since
the first fosters reliable and ethical interactions, and the second minimizes environmental impact and
promotes long-term societal benefits [13, 23].</p>
        <p>In the case of e-SpeechT, the goal is to recognize emotions to adapt to children’s behavior, eliminating
unnecessary sessions and avoiding burnout for both parties. The intended final result
is a system that embodies the four principles, safeguarding humans in all of their dimensions.
It is important to underline that the principles are not only embodied in the user interfaces, but in the
whole interaction process.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Emotion Monitoring and Analysis</title>
        <p>During the creation of the therapy, medical professionals can enable the emotion recognition functionality
for each exercise of the session they are creating. If it is enabled, the system will record the session,
which can be performed either at home or in person, using the device’s camera and/or biometric sensors
to gather data to provide to the AI model. At the end of the therapy, a single-question questionnaire
appears to directly ask the child how they feel. The answer is provided through a Smileyometer [24, 9]
in order to let participants reply directly regardless of their age. Upon session completion, therapists
can replay the session with an overlay displaying the detected emotional states at various points,
as shown in Figure 1. In this proposal, a graphical representation of the emotions detected during
the session is provided: the y-axis represents different emotional states (e.g., happiness, frustration,
concentration); the x-axis represents time in minutes. At the same time, user studies are necessary to
identify a more suitable technique to illustrate the detected emotions, since providing access to this
information can have a strong impact on users’ perceptions [25].</p>
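        <p>A minimal sketch of how the per-session emotion timeline behind such an overlay could be represented is given below; the EmotionSample structure, the label set, and the JSON layout are illustrative assumptions, not the actual e-SpeechT data model.</p>
        <preformat>
# Illustrative per-session emotion timeline (assumed structure): one labeled
# sample per analyzed time window, serialized for the therapist-side overlay.
import json
from dataclasses import dataclass, asdict

EMOTIONS = ("happiness", "frustration", "concentration", "boredom")

@dataclass
class EmotionSample:
    minute: float        # position on the x-axis of the overlay
    emotion: str         # one of EMOTIONS, plotted on the y-axis
    confidence: float    # model confidence, used to size or shade the marker

def session_overlay(samples):
    """Serialize the timeline so the therapist UI can draw it over the recording."""
    return json.dumps({"samples": [asdict(s) for s in samples]}, indent=2)

timeline = [
    EmotionSample(1.0, "concentration", 0.82),
    EmotionSample(3.5, "frustration", 0.67),
    EmotionSample(5.0, "happiness", 0.74),
]
print(session_overlay(timeline))
        </preformat>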
        <p>The button “Why this emotion?” provides an explanation to therapists about why specific emotions
were detected, enabling them to correct the system’s classification, if needed, through the “Correct”
button, as described in the following section.</p>
        <p>It is important to underline that, in order to protect users from potential harm and prevent
discriminatory behaviors, access to the data will be restricted to individuals involved in the project, and possible
biases will be reduced as much as possible [26].</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Explainability and Human Intervention</title>
        <p>Providing explanations concerning the detected emotions is crucial for the Transparency principle. These
can include visual explanations, such as heatmaps or tracking matrices, in order to highlight specific
frames where certain emotions were identified.</p>
        <p>In accordance with the Automation Level principle, therapists can review detected emotions and
validate predictions, as shown in Figure 2. We intend to implement an Interactive Machine Learning
mechanism that re-trains the AI model based on the expert’s validation in order to improve its accuracy
over time [27].</p>
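        <p>The following sketch outlines, under assumed names and a generic classifier interface, how a therapist correction issued through the “Correct” button could be queued and periodically used to re-fit the model; it is only an illustration of the interactive machine learning idea, not the planned implementation.</p>
        <preformat>
# Illustrative interactive-ML loop (assumed API): therapist corrections are
# collected and periodically used to re-train the emotion classifier.

class CorrectionStore:
    """Buffers (features, therapist_label) pairs produced via the 'Correct' button."""
    def __init__(self, retrain_every=50):
        self.buffer = []
        self.retrain_every = retrain_every

    def add(self, features, predicted, corrected):
        # Only disagreements carry new information for the model.
        if predicted != corrected:
            self.buffer.append((features, corrected))

    def maybe_retrain(self, model):
        """Re-fit the model once enough validated corrections have accumulated."""
        if len(self.buffer) >= self.retrain_every:
            X = [f for f, _ in self.buffer]
            y = [label for _, label in self.buffer]
            model.fit(X, y)   # any estimator exposing fit(X, y) would work here
            self.buffer.clear()
            return True
        return False
        </preformat>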
        <p>During in-person therapy sessions, therapists can gather insights about a child’s emotions and later
use this information to provide additional valuable feedback to the system. This fosters a collaborative
approach where human expertise and AI capabilities complement each other, ultimately enhancing the
efectiveness of therapy.</p>
        <p>We highlight that the patient’s caregiver will always be asked for permission before this
functionality is activated.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Treatment Personalization</title>
        <p>By integrating emotion detection and analyzing the emotional states of children during exercise
sessions, e-SpeechT can provide deeper insights into a child’s engagement and emotional responses,
allowing for more tailored therapeutic interventions. In addition, the adaptability of the system plays
an important role when performing the exercises: if signs of fatigue, distraction, or frequent mistakes
are detected, it can dynamically adjust the difficulty, offering simpler tasks or interactive games to help
maintain engagement and motivation. By monitoring and analyzing children’s emotional states, the
system empowers therapists to adjust interventions based on real-time insights, optimizing therapeutic
outcomes. The long-term goal is to support sustainable scientific progress in the context of speech
therapy that integrates seamlessly into existing healthcare infrastructures to support long-term, ethical,
and cost-effective speech therapy solutions.</p>
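        <p>A minimal sketch of the kind of adaptation rule described above is shown below; the signals, thresholds, and difficulty levels are assumptions chosen for the example rather than parameters of e-SpeechT.</p>
        <preformat>
# Illustrative difficulty-adaptation rule (assumed thresholds and signals).

LEVELS = ["game", "simple", "standard", "advanced"]   # ordered from easiest

def next_difficulty(current, fatigue, distraction, error_rate):
    """Step difficulty down when fatigue, distraction, or errors are high,
    and back up when the child is performing well and stays engaged."""
    idx = LEVELS.index(current)
    struggling = fatigue > 0.7 or distraction > 0.7 or error_rate > 0.5
    thriving = 0.3 > max(fatigue, distraction) and 0.1 > error_rate
    if struggling:
        idx = max(idx - 1, 0)                  # simpler task or interactive game
    elif thriving:
        idx = min(idx + 1, len(LEVELS) - 1)    # gently raise the challenge
    return LEVELS[idx]

# Example: a distracted, error-prone session drops from "standard" to "simple".
print(next_difficulty("standard", fatigue=0.2, distraction=0.8, error_rate=0.4))
        </preformat>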
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>The integration of AI-driven solutions in e-Health can bring strong advantages to the effectiveness
of therapies and treatments, improving individuals’ lives. At the same time, such systems must be
carefully developed, considering human needs, preferences, and cognitive models. The proposal
presented in this study concerns e-SpeechT, a web application to support speech therapy. The goal is
to continuously monitor and analyze the emotional state of children during therapy sessions to let the
system adapt to their behavior and skills. The creation of more personalized therapy plans based on
real-time feedback can improve the long-term impact of the system, increasing the efficiency of treatment
while ensuring the responsible use of resources. Thus, the possibility of providing remote support to
professionals and patients can be enhanced by the adaptation of e-SpeechT to the ever-evolving
needs of children, safeguarding the environment surrounding them.</p>
      <p>Future work concerns the actual development of the AI model and the implementation of the UIs.
We intend to build a multi-modal model that processes logs as tabular data and snapshots of the
child’s facial expressions while they carry out the exercises. By analyzing both structured evaluations
and therapist observations, the system can assess a child’s attention and emotional state, contributing
to a more tailored therapeutic approach. To assess the validity and efficacy of the proposed solutions,
user studies will be conducted.</p>
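      <p>As a sketch of the multimodal direction outlined above, the following code fuses a small tabular feature vector derived from exercise logs with an embedding of a facial snapshot; the architecture, layer sizes, and the use of PyTorch are illustrative assumptions and not the model that will actually be built.</p>
      <preformat>
# Illustrative multimodal fusion (assumed architecture, PyTorch chosen for the
# sketch): exercise-log features and a face snapshot feed a shared classifier.
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    def __init__(self, n_log_features=12, n_emotions=4):
        super().__init__()
        # Small CNN encoder for the child's facial snapshot.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 32), nn.ReLU(),
        )
        # MLP encoder for tabular features extracted from exercise logs.
        self.log_encoder = nn.Sequential(
            nn.Linear(n_log_features, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 + 32, n_emotions)

    def forward(self, snapshot, log_features):
        fused = torch.cat(
            [self.image_encoder(snapshot), self.log_encoder(log_features)], dim=1
        )
        return self.classifier(fused)

# Example forward pass on dummy data: one 64x64 RGB snapshot and one log row.
model = MultimodalEmotionNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 12))
      </preformat>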
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>The research of Miriana Calvano, Antonio Curci, Rosa Lanzilotti, and Antonio Piccinno is supported by
the co-funding of the European Union - Next Generation EU: NRRP Initiative, Mission 4, Component 2,
Investment 1.3 – Partnerships extended to universities, research centers, companies, and research D.D.
MUR n. 341 del 15.03.2022 – Next Generation EU (PE0000013 – “Future Artificial Intelligence Research
– FAIR” - CUP: H97G22000210007).</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. L.</given-names>
            <surname>Grigorenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Compton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Fuchs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Wagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Willcutt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Fletcher</surname>
          </string-name>
          , Understanding, educating, and
          <article-title>supporting children with specific learning disabilities: 50 years of science and practice</article-title>
          ,
          <source>American Psychologist</source>
          <volume>75</volume>
          (
          <year>2020</year>
          )
          <fpage>37</fpage>
          -
          <lpage>51</lpage>
          . doi:
          <volume>10</volume>
          .1037/amp0000452.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Ramalingam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Karunamurthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T. Amalraj</given-names>
            <surname>Victoire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pavithra</surname>
          </string-name>
          ,
          <source>Impact of Artificial Intelligence on Healthcare: A Review of Current Applications and Future Possibilities</source>
          , Quing:
          <source>International Journal of Innovative Research in Science and Engineering</source>
          <volume>2</volume>
          (
          <year>2023</year>
          )
          <fpage>37</fpage>
          -
          <lpage>49</lpage>
          . URL: https://quingpublications.com/journals/ijirse/2023/2/2/se230522005.pdf.
          <source>doi:10.54368/qijirse. 2.2</source>
          .0005.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Barletta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Calvano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Curci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          ,
          <article-title>A Protocol to Assess Usability and Feasibility of e-SpeechT, a Web-based System Supporting Speech Therapies:</article-title>
          ,
          <source>in: Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies, SCITEPRESS - Science and Technology Publications</source>
          , Lisbon, Portugal,
          <year>2023</year>
          , pp.
          <fpage>546</fpage>
          -
          <lpage>553</lpage>
          . URL: https://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0011893300003414. doi:
          <volume>10</volume>
          . 5220/0011893300003414.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Calvano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Curci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pagano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          ,
          <article-title>Speech Therapy Supported by AI and Smart Assistants</article-title>
          , in: A.
          <string-name>
            <surname>Jedlitschka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Janes</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Lenarduzzi</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <string-name>
            <surname>Li</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 24th International Conference on Product-Focused Software Process Improvement</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>97</fpage>
          -
          <lpage>104</lpage>
          . URL: https://doi.org/10.1007/978-3-
          <fpage>031</fpage>
          -49269-3_
          <fpage>10</fpage>
          . doi:
          <volume>10</volume>
          . 1007/978-3-
          <fpage>031</fpage>
          -49269-3_
          <fpage>10</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R. DePompei</given-names>
            ,
            <surname>Speech-Language</surname>
          </string-name>
          <string-name>
            <surname>Therapy</surname>
          </string-name>
          , Springer New York, New York, NY,
          <year>2011</year>
          , pp.
          <fpage>2343</fpage>
          -
          <lpage>2344</lpage>
          . URL: https://doi.org/10.1007/978-0-
          <fpage>387</fpage>
          -79948-3_
          <fpage>925</fpage>
          . doi:
          <volume>10</volume>
          .1007/978-0-
          <fpage>387</fpage>
          -79948-3_
          <fpage>925</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Shriberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kwiatkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Mabie</surname>
          </string-name>
          ,
          <article-title>Estimates of the prevalence of motor speech disorders in children with idiopathic speech delay</article-title>
          ,
          <source>Clinical Linguistics &amp; Phonetics</source>
          <volume>33</volume>
          (
          <year>2019</year>
          )
          <fpage>679</fpage>
          -
          <lpage>706</lpage>
          . URL: https://www.tandfonline.com/doi/full/10.1080/02699206.
          <year>2019</year>
          .
          <volume>1595731</volume>
          . doi:
          <volume>10</volume>
          .1080/02699206.
          <year>2019</year>
          .
          <volume>1595731</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>McKean</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Bloch,</surname>
          </string-name>
          <article-title>The application of technology in speech and language therapy</article-title>
          ,
          <source>International Journal of Language &amp; Communication Disorders</source>
          <volume>54</volume>
          (
          <year>2019</year>
          )
          <fpage>157</fpage>
          -
          <lpage>158</lpage>
          . URL: https://onlinelibrary. wiley.com/doi/10.1111/
          <fpage>1460</fpage>
          -
          <lpage>6984</lpage>
          .12464. doi:
          <volume>10</volume>
          .1111/
          <fpage>1460</fpage>
          -
          <lpage>6984</lpage>
          .
          <fpage>12464</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Bashshur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. W.</given-names>
            <surname>Shannon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Alverson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Antoniotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. G.</given-names>
            <surname>Barsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bashshur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Brown</surname>
          </string-name>
          , M. J.
          <string-name>
            <surname>Coye</surname>
            ,
            <given-names>C. R.</given-names>
          </string-name>
          <string-name>
            <surname>Doarn</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ferguson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Grigsby</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          <string-name>
            <surname>Krupinski</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          <string-name>
            <surname>Kvedar</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>