<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Dental Anomaly Recognition on Radiographs using LLMs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dragos A. Gavrus</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ioana G. Ciuciu</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Babes-Bolyai University</institution>
          ,
          <addr-line>Cluj-Napoca</addr-line>
          ,
          <country country="RO">Romania</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Manual analysis of dental radiographs is time-consuming and prone to error, requiring significant expertise. This project proposes an AI-based system that leverages deep learning and large language models (LLMs) to automate the detection and classification of dental anomalies. The system aids clinicians by identifying anatomical structures and pathological findings such as caries, implants, bone loss, apical lesions, and restorative treatments. It improves diagnostic consistency and speeds up clinical decision-making. Furthermore, a Retrieval-Augmented Generation (RAG) chatbot powered by a large language model enables personalized, accessible explanations to assist patients in understanding their diagnoses.</p>
      </abstract>
      <kwd-group>
        <kwd>Dental anomalies</kwd>
        <kwd>radiograph analysis</kwd>
        <kwd>retrieval-augmented generation</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Motivation</title>
      <p>Dental diagnostics rely heavily on radiographic imaging to identify anomalies such as caries, implants,
bone loss, apical lesions, or root canal failures. However, manual interpretation of dental radiographs
is time-consuming, error-prone, and requires substantial clinical expertise. These challenges often
lead to inconsistent diagnoses and delayed treatment decisions, particularly in high-throughput or
resource-limited clinical environments.</p>
      <p>
        To mitigate these issues, we introduce Orthovision, an AI-powered web platform for automated
dental anomaly recognition. The system integrates computer vision techniques, specifically
convolutional neural networks (CNNs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], for X-ray image segmentation and object detection, with large
language models (LLMs) for generating human-readable diagnostic explanations. By combining visual
deep learning with retrieval-augmented natural language processing, Orthovision aims to enhance
diagnostic precision, reduce clinical burden, and improve communication with patients.
      </p>
      <p>
        While AI tools have been increasingly adopted in medical imaging, existing dental AI solutions [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]
often lack personalization, integration of patient history, and interactive capabilities. Most systems
focus solely on image analysis, without offering interpretability or follow-up explanations tailored to
the patient’s context.
      </p>
      <p>
        Orthovision is motivated by the need to move beyond static diagnostic systems [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] toward intelligent
platforms that assist both practitioners and patients. By embedding a Retrieval-Augmented Generation
(RAG)-based chatbot, our system enables personalized communication based on past radiographs and
documented anomalies. This enhances understanding for patients and provides decision support for
clinicians, particularly in practices where time or expertise may be limited.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem Statement</title>
      <p>
        Although recent advances in machine learning have significantly improved medical image analysis,
existing frameworks for dental radiograph interpretation still face several key limitations. Many
systems are restricted to single-task models that detect only a narrow set of conditions [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], lacking the
flexibility to perform robust multi-class anomaly detection across variable image qualities. Additionally,
few solutions offer precise tooth-level localization [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ], which is essential for generating structured
diagnostic reports that link anomalies to specific anatomical regions.
      </p>
      <p>Equally important, most tools lack meaningful patient-specific explanations, ignoring historical
data and offering little accessible, tailored feedback. They also typically lack end-to-end deployment,
focusing on isolated inference models without the infrastructure for secure user management, image
processing, reporting, and patient interaction in a unified system.</p>
      <p>
        This work addresses these limitations by proposing a full-stack system that combines YOLOv11-based
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] object detection for both anomaly and tooth localization with a DeepSeek-V3-powered
Retrieval-Augmented Generation (RAG) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] chatbot for patient education. The integration of deep learning and
large language models enables both diagnostic precision and interpretability, offering a practical and
explainable solution for modern dental care.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Approach</title>
      <p>The proposed system, Orthovision, is a web application designed to automate the detection and explanation
of dental anomalies from X-ray images. It integrates computer vision models for image analysis and
large language models for generating context-aware, human-readable diagnostics.</p>
      <p>The app uses a React/TypeScript frontend and a Flask backend to support X-ray uploads, anomaly
detection, chatbot interaction, and data storage with Supabase and Qdrant.</p>
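      <p>The upload-to-inference path can be sketched as a single backend route. The snippet below is a minimal, hypothetical illustration assuming Flask; the route name, form-field name, and response shape are our assumptions, not Orthovision's actual API:</p>

```python
# Minimal sketch of an X-ray upload endpoint, assuming Flask.
# The route, field name, and payload are illustrative only.
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/xray", methods=["POST"])
def upload_xray():
    """Accept an uploaded X-ray image and return a stub payload."""
    file = request.files.get("image")
    if file is None:
        return jsonify({"error": "no image supplied"}), 400
    data = file.read()
    # In the real system the bytes would be handed to the YOLOv11
    # models; here we only echo back the size so the sketch stays
    # self-contained.
    return jsonify({"filename": file.filename, "bytes": len(data)}), 200
```

      <p>A frontend such as the React client would then post the image as multipart form data and render the returned detections.</p>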
      <sec id="sec-3-1">
        <title>3.1. Image Analysis Pipeline</title>
        <p>The image analysis pipeline in Orthovision is composed of two dedicated YOLOv11 models. The first
model is trained to detect and localize each of the 32 human teeth within panoramic dental radiographs.
This enables tooth-specific indexing and supports the generation of structured diagnostic reports. The
second model focuses on detecting a wide range of dental anomalies, including caries, implants, apical
lesions, bone loss, and other conditions. It operates by drawing class-specific bounding boxes around
identified regions of interest.</p>
        <p>To associate anomalies with individual teeth, bounding boxes from the anomaly and tooth detection
models are aligned using the IoU metric. K-Means clustering organizes teeth into upper and lower
arches for clearer visualization. Although a U-Net segmentation model was explored, the final pipeline
uses only YOLOv11-based detectors for better speed and accuracy.</p>
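        <p>As a sketch of the alignment step, each anomaly box can be linked to the tooth box it overlaps most. The helper below is illustrative; the minimum-overlap threshold of 0.1 is an assumption, not a value reported for Orthovision:</p>

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def link_anomalies_to_teeth(anomaly_boxes, tooth_boxes, min_iou=0.1):
    """Map each anomaly index to the index of its best-overlapping tooth.

    Anomalies that overlap no tooth above min_iou are left unassigned.
    """
    links = {}
    for i, ab in enumerate(anomaly_boxes):
        scores = [iou(ab, tb) for tb in tooth_boxes]
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] >= min_iou:
            links[i] = best
    return links
```

        <p>Taking the argmax over IoU scores gives a one-to-one anomaly-to-tooth assignment that downstream report generation can consume directly.</p>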
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Report Generation</title>
        <p>The backend synthesizes anomaly and tooth detection results into a detailed diagnostic report. Each
identified anomaly is linked to the affected tooth number, enabling dentists and patients to trace
conditions visually and textually. A k-means clustering algorithm is used to assign teeth to the correct
dental arch. An example of a generated report is shown in Figure 1.</p>
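        <p>The arch assignment reduces to one-dimensional k-means (k = 2) over the vertical centres of the tooth boxes: since image coordinates grow downward, the cluster with the smaller mean y is the upper arch. The Lloyd's-iteration sketch below illustrates the idea and is not Orthovision's exact implementation:</p>

```python
def arch_split(y_centers, iters=20):
    """Assign each tooth's bounding-box centre-y to the upper or lower
    arch via 1-D k-means with k = 2 (smaller mean y = upper arch)."""
    c = [min(y_centers), max(y_centers)]          # initial centroids
    for _ in range(iters):
        groups = ([], [])
        for y in y_centers:
            # bool indexes the tuple: False -> cluster 0, True -> cluster 1
            groups[abs(y - c[0]) > abs(y - c[1])].append(y)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return ["upper" if abs(y - c[0]) <= abs(y - c[1]) else "lower"
            for y in y_centers]
```
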
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Chatbot Integration</title>
        <p>
          A Retrieval-Augmented Generation (RAG) chatbot, powered by the DeepSeek-V3 model hosted on
Hugging Face, is used to provide explanations based on prior radiographs and dental literature. The
chatbot performs a k-nearest neighbors (k-NN) search in Qdrant over embedded clinical documents
related to 14 dental anomaly types [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref9">9–20</xref>
          ]. Retrieved contexts are combined with patient-specific reports
to generate personalized responses to user queries.
        </p>
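        <p>Conceptually, the retrieved chunks and the patient's own report are concatenated into a single grounded prompt for the model. The template below is a hypothetical illustration of this assembly step, not Orthovision's actual prompt:</p>

```python
def build_rag_prompt(query, context_chunks, patient_report):
    """Combine retrieved literature snippets with the patient's own
    report into one grounded prompt for the language model."""
    context = "\n\n".join(f"[{i + 1}] {c}"
                          for i, c in enumerate(context_chunks))
    return (
        "You are a dental assistant. Answer using only the context below.\n\n"
        f"Clinical context:\n{context}\n\n"
        f"Patient report:\n{patient_report}\n\n"
        f"Question: {query}\nAnswer:"
    )
```

        <p>The resulting string is what gets sent to the hosted DeepSeek-V3 endpoint; grounding the answer in both the literature chunks and the report is what enables patient-specific replies.</p>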
        <p>For document embedding, the system uses the all-MiniLM-L6-v2 sentence transformer, which
provides a good trade-off between speed and semantic accuracy for dense vector retrieval in the Qdrant
database.</p>
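        <p>The retrieval itself is a k-nearest-neighbour search by cosine similarity over the embedding vectors. In production this runs inside Qdrant over MiniLM embeddings; the pure-Python sketch below only illustrates the ranking logic on toy vectors:</p>

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, doc_vecs, k=3):
    """Indices of the k documents most similar to the query vector."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```
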
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <p>This section presents the evaluation of the core components deployed in the system: the YOLOv11-based
models for anomaly and tooth detection, and the Retrieval-Augmented Generation (RAG) chatbot built
on DeepSeek-V3. Results are discussed in terms of detection accuracy, clinical relevance, and usability
in practice.</p>
      <sec id="sec-4-1">
        <title>4.1. Dental Anomaly Detection with YOLOv11</title>
        <p>The YOLOv11 anomaly detection model was trained using a proprietary dataset of 1,000 annotated
panoramic dental radiographs covering 14 clinically relevant conditions. The model achieved a mean
average precision (mAP) of 0.369 at an IoU threshold of 0.5, and 0.167 over the mAP@0.5:0.95 range. It
performed particularly well on classes such as surgical root debridement, endodontic treatment, and
dental implants, where visual characteristics were distinct and annotations consistent. These results
suggest the model is robust for commonly encountered pathologies in clinical practice.</p>
        <p>Performance was notably lower for underrepresented and hard-to-distinguish conditions like apical
scars and bone loss, highlighting the need for balanced datasets and improved feature engineering or
multi-modal integration. Still, the model’s real-time detection of multiple anomalies with reasonable
accuracy supports automating routine diagnostics, as illustrated in Figure 2.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Tooth Identification with YOLOv11</title>
        <p>To enable anomaly-to-tooth linking, a second YOLOv11 model was trained to detect and index all 32
teeth individually. The model achieved outstanding performance, with an mAP@0.5 of 0.975 and an
mAP@0.5:0.95 of 0.701. Both precision and recall exceeded 93%, and predictions generalized well across
test images with varying orientations and densities.</p>
        <p>This precise tooth-level localization allowed the system to associate each anomaly with a specific
tooth and generate indexed, structured diagnostic reports. The resulting overlays and summaries (see
Figure 3) provide clinicians with a clear overview of affected regions, while also making it easier to
explain findings to patients. The integration of this module significantly enhances the interpretability
and traceability of the system’s outputs.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. RAG Chatbot Performance</title>
        <p>
          The chatbot is implemented using a Retrieval-Augmented Generation (RAG) architecture that
combines dense semantic retrieval with natural language generation. A curated set of 12 clinical
documents—covering implants [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref9">9–20</xref>
          ], caries, bone loss, and other anomaly types—was embedded using a
transformer-based sentence encoder and stored in a Qdrant vector database. When a user submits a
query, the system retrieves the top relevant document segments and integrates them into a prompt for
DeepSeek-V3, a high-performance open-source LLM hosted via Hugging Face.
        </p>
        <p>The chatbot was evaluated on a set of 20 queries simulating real-world patient questions about
diagnoses and anomalies. The system produced context-aware, clinically relevant responses in 85% of
cases. Retrieved context chunks were topically appropriate in 89% of interactions, indicating high-quality
semantic matching. The LLM was especially effective in generating coherent, empathetic explanations
for common anomalies such as implants, root fillings, and caries. These responses often incorporated
historical context from the user’s past reports, enhancing personalization and continuity of care.</p>
        <p>Despite strong performance, some limitations were observed. The chatbot occasionally produced
verbose responses or exhibited cautious overqualification in rare cases. Additionally, the system
currently depends on remote inference APIs, which introduces latency and limits offline availability.
Future improvements may include integrating on-device LLMs or caching frequent queries to improve
responsiveness.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Discussion</title>
        <p>Taken together, the results indicate that the system is capable of performing high-precision dental
anomaly detection, reliable tooth localization, and interactive diagnostic explanation via natural
language. The combination of YOLOv11 for vision tasks and DeepSeek-V3 for language generation supports
both clinical decision-making and patient engagement. While rare anomalies remain challenging, and
scalability could be further optimized, the current platform demonstrates strong real-world potential
for augmenting radiograph-based dental workflows. Expanding the training datasets, broadening the
anomaly categories, and adding multilingual support are promising avenues for future development.</p>
        <p>Integrating logic-based techniques would enhance the system’s reliability and precision by using
explicit rules and ontological reasoning to represent the structured nature of the dental domain, rather
than relying on the current heuristic methods like Intersection over Union (IoU) and k-means clustering.
This improvement would lead to more accurate structured reports and allow the chatbot to provide
more concise, precise, and contextually relevant explanations, addressing issues of verbosity and
over-qualification.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper presented Orthovision, a full-stack AI-driven web platform for automated dental anomaly
recognition and patient-oriented explanation. By integrating YOLOv11 models for tooth and anomaly
detection with a Retrieval-Augmented Generation chatbot powered by DeepSeek-V3, the system
enables both accurate diagnostics and personalized natural language interactions. Experimental results
confirmed the effectiveness of the object detection models across most anomaly and tooth classes, while
the chatbot demonstrated strong performance in delivering clinically grounded, user-friendly responses.</p>
      <p>The proposed approach bridges the gap between AI-assisted imaging and patient communication,
offering a scalable and explainable solution for modern dental care. Future work includes expanding the
training data, improving detection of rare anomalies, and enhancing chatbot responsiveness through
local inference, multilingual support and ontology-based reasoning.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We gratefully acknowledge the support and resources that contributed to the development of this work.
We thank Prof. Mihaela Hedeșiu for providing access to proprietary dental radiograph data through
a research collaboration with UMF University, which played a critical role in training and evaluating
the models.</p>
      <p>Special thanks to Lect. Dr. Laura Diosan for her guidance and valuable input on the machine
learning components of this work.</p>
      <p>Declaration on Generative AI: During the preparation of this work, the authors used ChatGPT to
improve the clarity and quality of expression. After using this tool, the authors reviewed and
edited the content as needed and take full responsibility for the content of the paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bhatt</surname>
          </string-name>
          , et al.,
          <article-title>Review of deep learning: concepts, cnn architectures, challenges, applications, future directions</article-title>
          ,
          <source>Journal of Big Data</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Diagnocat, Diagnocat, https://www.diagnocat.com/, 2020. Last accessed: 05.09.2025.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Denti.AI, Denti.ai detect, https://www.denti.ai/, 2024. Last accessed: 05.09.2025.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Overjet, Overjet, https://www.overjet.com/, 2018. Last accessed: 05.09.2025.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] CephX, Cephx, https://www.cephx.com/, 2024. Last accessed: 05.09.2025.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] CranioCatch, Craniocatch, https://www.craniocatch.com/, 2024. Last accessed: 05.09.2025.</mixed-citation>
      </ref>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <article-title>Yolov11 explained: Next-level object detection with enhanced speed and accuracy</article-title>
          , https://medium.com/@nikhil-rao-20/yolov11-explained-next-level-object-detection-with-enhanced-speed-and-accuracy-2dbe2d376f71,
          <year>2024</year>
          . Accessed:
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piktus</surname>
          </string-name>
          , et al.,
          <article-title>Retrieval-augmented generation for knowledge-intensive nlp tasks</article-title>
          ,
          <source>in: EMNLP</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Veiga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Figueiredo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pina</surname>
          </string-name>
          ,
          <article-title>Dental caries: A review</article-title>
          ,
          <source>Journal of Dental and Oral Health</source>
          (
          <year>2016</year>
          ). URL: https://ciencia.ucp.pt/files/37440728/dental_caries_a_review.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Heboyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. L.</given-names>
            <surname>Avetisyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Vardanyan</surname>
          </string-name>
          , et al.,
          <article-title>Tooth root resorption: A review</article-title>
          ,
          <source>Science Progress</source>
          <volume>105</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          . URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10358711/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Peeran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Alghamdi</surname>
          </string-name>
          , et al.,
          <article-title>Furcation involvement in periodontal disease: A narrative review</article-title>
          ,
          <source>Cureus</source>
          <volume>16</volume>
          (
          <year>2024</year>
          )
          e55924. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11004587/. doi:10.7759/cureus.55924.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Argáez</surname>
          </string-name>
          ,
          <article-title>Endodontic Therapy Interventions for Root Canal Failure: A Review of Clinical Effectiveness and Guidelines</article-title>
          , CADTH Rapid Response Reports, Canadian Agency for Drugs and Technologies in Health (CADTH),
          <year>2017</year>
          . URL: https://www.ncbi.nlm.nih.gov/books/NBK470661/.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Blake</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Barros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Fong</surname>
          </string-name>
          , E. Dhamija, Apical Periodontitis,
          <source>StatPearls Publishing</source>
          ,
          <year>2023</year>
          . URL: https://www.ncbi.nlm.nih.gov/books/NBK589656/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.-W.</given-names>
            <surname>Lee</surname>
          </string-name>
          , et al.,
          <article-title>Clinicopathological study of periapical scars</article-title>
          ,
          <source>Journal of Dental Sciences</source>
          <volume>16</volume>
          (
          <year>2021</year>
          )
          <fpage>1140</fpage>
          -
          <lpage>1145</lpage>
          . doi:10.1016/j.jds.2021.05.008.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Malagnino</surname>
          </string-name>
          , et al.,
          <article-title>The fate of overfilling in root canal treatments with long-term follow-up: A case series</article-title>
          ,
          <source>Restorative Dentistry &amp; Endodontics</source>
          <volume>46</volume>
          (
          <year>2021</year>
          )
          e27. URL: https://pubmed.ncbi.nlm.nih.gov/33908464/. doi:10.5395/rde.2021.46.e27.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Santosh</surname>
          </string-name>
          ,
          <article-title>Impacted mandibular third molars: Review of literature and a proposed classification</article-title>
          ,
          <source>Annals of Medical and Health Sciences Research</source>
          <volume>5</volume>
          (
          <year>2015</year>
          )
          <fpage>229</fpage>
          -
          <lpage>234</lpage>
          . URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4512113/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gupta</surname>
          </string-name>
          , et al.,
          <article-title>Dental Implants</article-title>
          ,
          <source>StatPearls Publishing</source>
          ,
          <year>2025</year>
          . URL: https://www.ncbi.nlm.nih.gov/books/NBK556031/.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.-C.</given-names>
            <surname>Chang</surname>
          </string-name>
          , et al.,
          <article-title>Recent clinical treatment and basic research on the alveolar bone</article-title>
          ,
          <source>Biomedicines</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          843. URL: https://www.mdpi.com/2227-9059/11/3/843. doi:10.3390/biomedicines11030843.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , Oral Surgery, Extraction of Roots, StatPearls Publishing,
          <source>Treasure Island (FL)</source>
          ,
          <year>2023</year>
          . Available from: https://www.ncbi.nlm.nih.gov/books/NBK589696/.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>P.</given-names>
            <surname>Desai</surname>
          </string-name>
          , E. Solomon, Orthodontics, Malocclusion, StatPearls Publishing,
          <source>Treasure Island (FL)</source>
          ,
          <year>2023</year>
          . Available from: https://www.ncbi.nlm.nih.gov/books/NBK592395/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>