<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleh Basystiuk</string-name>
          <email>oleh.a.basystiuk@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliia Melnykova</string-name>
          <email>nataliia.i.melnykova@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Artificial Intelligence, Institute of Computer Science and Information Technologies</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The integration of multimodal data has emerged as a transformative strategy in advancing smart healthcare, allowing for a holistic comprehension of patient health and tailored treatment strategies. This exploration delves into the journey from raw data to insightful wisdom, emphasizing the fusion of various data modalities, such as CT scans and retinal photographs, to drive smart healthcare innovations. Within this review, we comprehensively examine the fusion of diverse medical data modalities, aiming to unlock a deeper understanding of patient health. Our focus spans various fusion methodologies, from feature selection to rule-based systems, machine learning, deep learning, and natural language processing. Furthermore, we explore the challenges inherent in fusing multimodal data in healthcare settings. The central focus is determining the most efficient and accurate approach, which is crucial for future research on Ukrainian-language audio-to-text conversion systems. The goal is to ascertain the most effective strategy to serve as the foundation for further advancements in this domain.</p>
        <p>Keywords: Speech-to-Text, speech recognition, video recognition, audio recognition, multimodal data, machine learning, deep neural network, hybrid approaches</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>In the rapidly evolving landscape of smart healthcare, where innovation and data-centric
methodologies are reshaping the industry, the convergence of multimodal data stands out as a pivotal
force. This paper delves deep into the realm of multimodal medical data fusion, offering a thorough
investigation into how diverse data streams converge to generate meaningful insights. It navigates
through the intricate process—from initial data collection to translating it into actionable intelligence—
depicted through the detailed four-level pyramid showcased in Figure 1.</p>
      <p>
        The Data-Information-Knowledge-Wisdom (DIKW) model serves as a conceptual roadmap
illustrating how data evolves into profound wisdom [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It describes a transformative journey in which raw
data becomes meaningful information, then knowledge, and eventually
wisdom, empowering informed decision-making and adept problem-solving [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This model
recognizes that raw data alone cannot drive insights and actions; it underscores the necessity to
process, structure, and contextualize data to extract valuable information. This synthesized
information converges with existing knowledge, fostering an understanding that begets further knowledge.
This accrued knowledge becomes a practical tool for making informed decisions and navigating
intricate challenges, ultimately culminating in the attainment of wisdom.
      </p>
      <p>2023 Copyright for this paper by its authors. CEUR Workshop Proceedings (ceur-ws.org).</p>
      <p>The key contributions of this paper are:
• Utilizing and adapting the established DIKW conceptual model to delineate the evolution
from data to information to knowledge to wisdom, specifically in the domain of multimodal
fusion for intelligent healthcare.
• Reviewing current techniques for representing multimodal data in applications built on
machine learning methodologies, based on a Ukrainian-language dataset.
• Proposing a comprehensive workflow for handling multimodal medical data, such as video,
audio, and textual sources, using the Sequence-to-Sequence model flow.
• Analyzing the challenges of, and proposing solutions for, the algorithmic time complexity
associated with multimodal data handling, focusing on video, audio, and textual medical
data, while aligning with the proposed approach, to steer future research paths.</p>
      <p>The remainder of this paper is structured as follows: Section 2 reviews
existing techniques for representing multimodal data in applications based on machine learning
approaches. Section 3 proposes a multimodal handling flow encompassing video, audio, and textual
medical data, using the Sequence-to-Sequence model flow. Section 4 compares the time complexity of
multimodal handling algorithms on video, audio, and textual medical data.
Sections 5 and 6 present the discussion and the comparison and evaluation of the results.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related works</title>
      <p>
        This research project will use a qualitative research approach. Data will be collected through a
review of relevant literature, case studies, and interviews with experts in the field. The data will be
analyzed using thematic analysis to identify key themes related to the use of multimodal and artificial
intelligence approaches in the distance education process, namely to recognize video streams of
completed course assignments [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Current AI applications in medicine have typically focused on specific tasks using singular data
sources, like a CT scan or retinal photograph. However, clinicians rely on a multitude of data types and
modalities to make diagnoses, evaluate prognosis, and determine treatment plans [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Moreover,
existing AI assessments capture only a snapshot in time, failing to perceive health as an ongoing
continuum.
      </p>
      <p>
        The potential of AI models extends far beyond these limitations: they could leverage
diverse data sources, including those beyond most clinicians' scope [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Multimodal AI models
that integrate video, imaging, text, and audio clinical data hold considerable promise. They stand to bridge this gap
by enabling personalized medicine, real-time pandemic surveillance, digital clinical trials, and virtual
health coaching on an unprecedented scale [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        This review examines the vast potential of multimodal datasets in healthcare, emphasizing the
transformative possibilities they offer. By incorporating audio and video data into this multimodal
landscape, leveraging machine learning techniques, medical research could attain a new level of depth
and accuracy [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For instance, integrating video data from patient interactions or audio data from
diagnostic interviews could enrich these models, leading to more holistic and precise healthcare
solutions.
      </p>
      <p>For video content processing, methods can be categorized into sequential comparison,
clustering-based global comparison, and event/object-based approaches. The most
valuable comparison techniques include sequence matching, classification, frame decoding, and
feature detection for evaluation. The best results come from methods based on artificial intelligence and
machine learning, where deep learning methods are more effective than conventional ones.</p>
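      <p>As an illustrative sketch only (not the paper's implementation), sequential comparison can be as simple as thresholding the mean absolute difference between consecutive frames; the function name and threshold below are assumptions for illustration:</p>

```python
import numpy as np

def shot_boundaries(frames, threshold=0.25):
    """Flag likely shot boundaries via the mean absolute difference
    between consecutive grayscale frames (pixel values in [0, 1])."""
    diffs = [np.abs(frames[i] - frames[i - 1]).mean()
             for i in range(1, len(frames))]
    return [i for i, d in enumerate(diffs, start=1) if d > threshold]

# Synthetic clip: 5 dark frames, then an abrupt cut to 5 bright frames.
rng = np.random.default_rng(0)
clip = np.concatenate([
    rng.uniform(0.0, 0.1, size=(5, 8, 8)),
    rng.uniform(0.9, 1.0, size=(5, 8, 8)),
])
print(shot_boundaries(clip))  # → [5]: the cut is detected at frame index 5
```

      <p>Event/object-based and clustering-based methods replace this per-frame difference with learned features, which is where the deep learning approaches discussed above take over.</p>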
      <p>
        However, this integration presents significant challenges. Nonetheless, with innovative strategies
and advancements in machine learning, overcoming these hurdles becomes increasingly feasible. The
potential benefits of multimodal data utilization in medical research are vast, heralding a paradigm shift
toward more comprehensive, personalized, and effective healthcare solutions [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The research project is expected to provide a system for evaluating and predicting student success
based on feedback. It will identify the different approaches used, their effectiveness, and the challenges
and opportunities associated with their use. Implementing the proposed approaches is expected to
significantly improve information systems for evaluating the quality of learning outcomes, which will
be used to analyze video content and the textual content of answers. The project will create an approach
for providing relevant feedback, from which suggestions for improving the courses and evaluating their
overall effectiveness can be derived, and which in turn can improve student success during distance
learning and the quality of their assimilation of subject material.</p>
      <p>Our roadmap includes devising an expert-level system tailored specifically for hybrid language
translation within the medical sphere. By incorporating multimodal data, including audio and video,
alongside machine learning techniques, we aim to pioneer a transformative approach that could redefine
diagnostic accuracy, treatment protocols, and overall healthcare practices. This pursuit represents an
innovative frontier poised to revolutionize medical research and application.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Methods</title>
      <p>
        One standout deep learning model, known as Seq2Seq (sequence-to-sequence), has demonstrated
remarkable proficiency in tasks like machine translation and text summarization. These models operate
on the principle that the decoder's attention layers can access only the positions preceding the current
word in the output sequence, while the encoder's attention layers can access the entire input phrase, fostering
connections that enable an RNN (Recurrent Neural Network) to retain and reproduce an entire
sequence [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
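      <p>This encoder/decoder asymmetry, full access on the encoder side versus access restricted to earlier positions on the decoder side, can be sketched with a causal softmax mask; the helper below is an illustrative assumption, not code from this work:</p>

```python
import numpy as np

def attention_weights(scores, causal=False):
    """Softmax attention weights; a causal mask (decoder side) blocks
    each position from attending to later positions."""
    s = scores.astype(float).copy()
    if causal:
        n = s.shape[0]
        s[np.triu_indices(n, k=1)] = -np.inf  # hide future positions
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))                     # uniform raw scores, 4 tokens
enc = attention_weights(scores)               # encoder: full access
dec = attention_weights(scores, causal=True)  # decoder: only past + self
print(enc[0])  # [0.25 0.25 0.25 0.25]
print(dec[1])  # [0.5 0.5 0.  0. ]
```

      <p>Each row still sums to one; the mask only redistributes attention over the positions a decoder step is allowed to see.</p>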
      <p>Initially, the sequence flows through the encoder, comprising RNNs, which produces a final
embedding at its conclusion. Subsequently, the decoder uses this embedding to predict the output
sequence, drawing on prior hidden states to forecast the succeeding elements of the sequence, as
illustrated by the Sequence-to-Sequence model flow shown in Figure 2.</p>
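      <p>A minimal NumPy sketch of this encoder-to-decoder flow is shown below; the toy dimensions, parameter names, and greedy decoding loop are illustrative assumptions, not the trained model used in this work:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
H, V = 16, 10  # assumed toy hidden size and vocabulary size

# Random (untrained) parameters for Elman-style encoder and decoder RNNs.
Wxe, Whe = rng.normal(0, 0.1, (V, H)), rng.normal(0, 0.1, (H, H))
Wxd, Whd = rng.normal(0, 0.1, (V, H)), rng.normal(0, 0.1, (H, H))
Wout = rng.normal(0, 0.1, (H, V))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def encode(tokens):
    """Run the encoder RNN; the final hidden state is the embedding."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(one_hot(t) @ Wxe + h @ Whe)
    return h

def decode(h, start=0, steps=4):
    """Greedy decoding: each step feeds the previous prediction back in."""
    out, prev = [], start
    for _ in range(steps):
        h = np.tanh(one_hot(prev) @ Wxd + h @ Whd)
        prev = int(np.argmax(h @ Wout))
        out.append(prev)
    return out

emb = encode([1, 2, 3])   # the input sequence collapses to one embedding
out = decode(emb)         # the decoder unrolls it into an output sequence
```

      <p>In practice, as noted below, TensorFlow, Keras, or PyTorch would replace these hand-rolled loops with trained, batched layers.</p>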
      <p>
        To optimize accuracy and time efficiency for our specific context, an in-depth investigation
is essential, exploring the methods delineated earlier. In our prior research, the Sequence-to-Sequence
approach based on Recurrent Neural Networks emerged as the most effective method, particularly when
examining the libraries used to construct machine learning methodologies. Notably, TensorFlow,
Keras, and PyTorch stand out as the most widely employed machine learning libraries in RNN-based
language translation methods. These libraries play a crucial role in enhancing the effectiveness of sequence
prediction and language translation tasks within our research domain [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>4. Results</title>
      <p>The research project is expected to provide a system for interconnecting different data sources
from a multimodal data approach, such as audio, video, and text, into our structured system, with the
ability to reinforce and aggregate these data. Advancements in speech-to-text technology are rapidly progressing,
opening avenues for scaling its application beyond its current horizons. This expansion is crucial not
just for accuracy and reliability but also for the future credibility and coherence of research endeavors
across diverse fields. The groundwork laid by this research serves as a cornerstone for subsequent
investigations, setting the stage for future platforms to tackle multifaceted challenges.</p>
      <p>The landscape of health data is diverse, posing multifaceted challenges in gathering, linking, and
annotating these multidimensional datasets. These medical datasets vary across several dimensions—
sample size, depth of phenotyping, follow-up intervals, participant interactions, heterogeneity,
standardization, and data linkage. While advancements in science and technology facilitate data
collection and phenotyping, striking a balance among these dataset features remains a challenge. For
instance, while larger sample sizes are ideal for training AI models, achieving deep phenotyping and
sustained longitudinal follow-up escalates costs significantly, making it financially impractical without
automated data collection methods.</p>
      <p>In the realm of medicine, current AI applications tend to focus on specific tasks using singular data
sources like CT scans or retinal photographs. This contrasts starkly with clinicians who rely on a diverse
array of data sources and modalities to diagnose, forecast outcomes, and devise treatment plans.
Moreover, existing AI assessments typically offer singular snapshots, capturing a moment in time rather
than perceiving health as an ongoing continuum.</p>
      <p>Biomedical data often grapples with a common issue: a notable prevalence of missing information.
Although excluding patients lacking data prior to training is feasible in certain scenarios, doing so might
introduce selection bias when external factors contribute to these gaps. Consequently, employing
statistical methods like multiple imputation becomes a preferable approach to tackle these voids.
Imputation stands as a crucial preprocessing stage in numerous biomedical disciplines, spanning from
genomics to clinical data analysis.</p>
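      <p>As a rough sketch of the multiple-imputation idea (draw several plausible fills, then pool the resulting estimates), the function below is an illustrative assumption using a simple normal model fitted to the observed values, not a full implementation of Rubin's rules:</p>

```python
import numpy as np

def multiple_imputation_mean(x, m=20, seed=0):
    """Fill NaNs m times by drawing from a normal distribution fitted
    to the observed values, then pool the per-imputation means."""
    rng = np.random.default_rng(seed)
    obs = x[~np.isnan(x)]
    mu, sd = obs.mean(), obs.std(ddof=1)
    means = []
    for _ in range(m):
        filled = x.copy()
        mask = np.isnan(filled)
        filled[mask] = rng.normal(mu, sd, size=mask.sum())
        means.append(filled.mean())
    return float(np.mean(means))  # pooled point estimate

x = np.array([1.0, 2.0, np.nan, 4.0, 5.0, np.nan])
est = multiple_imputation_mean(x)  # close to the observed mean of 3.0
```

      <p>Unlike simply dropping incomplete patients, this keeps every record in the analysis while propagating uncertainty about the missing values across the m completed datasets.</p>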
      <p>However, in theory, AI models possess the potential to harness all available data sources, including
those beyond the reach of most clinicians, like genomic medicine. The evolution of multimodal AI
models that amalgamate data across various modalities—spanning biosensors, genomics, epigenomics,
proteomics, microbiomes, metabolomics, imaging, textual, clinical, social determinants, and
environmental data—holds the promise of narrowing this gap. These advanced models pave the way
for diverse applications, including handling multimodal medical data, such as video, audio, and textual
sources, using the Sequence-to-Sequence model flow showcased in Table 1.</p>
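      <p>One common way to amalgamate such modalities, shown here purely as an assumed illustration, is late fusion: encode each modality separately, then normalize and concatenate the embeddings into a single joint vector:</p>

```python
import numpy as np

def late_fusion(embeddings, weights=None):
    """Late-fusion sketch: L2-normalize each per-modality embedding and
    concatenate them (optionally weighted) into one joint vector."""
    if weights is None:
        weights = [1.0] * len(embeddings)
    parts = []
    for e, w in zip(embeddings, weights):
        e = np.asarray(e, dtype=float)
        parts.append(w * e / (np.linalg.norm(e) + 1e-12))
    return np.concatenate(parts)

# Stand-ins for the outputs of separate video, audio, and text encoders.
video = np.array([3.0, 4.0])
audio = np.array([1.0, 0.0, 0.0])
text  = np.array([0.0, 2.0])
joint = late_fusion([video, audio, text])
print(joint.shape)  # (7,)
```

      <p>The joint vector can then feed a downstream classifier or the Sequence-to-Sequence pipeline; normalization keeps no single modality from dominating purely by scale.</p>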
    </sec>
    <sec id="sec-6">
      <title>5. Discussion</title>
      <p>
        The research project is expected to provide a system for interconnecting different data sources
from a multimodal data approach, such as audio, video, and text, into our structured system, with the
ability to reinforce and aggregate these data. Advancements in speech-to-text technology are rapidly progressing,
opening avenues for scaling its application beyond its current horizons. This expansion is crucial not
just for accuracy and reliability but also for the future credibility and coherence of research endeavors
across diverse fields. The groundwork laid by this research serves as a cornerstone for subsequent
investigations, setting the stage for future platforms to tackle multifaceted challenges [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>In our preliminary steps, we delineate specific fields and tasks, anticipating the demands future
platforms will confront. Additionally, we delve into analyzing neural network training techniques that
extend beyond machine translation applications. Our findings from Section 3 highlight the potential of
a hybrid approach employing recurrent networks within a sequence-to-sequence model. This approach
holds promise for yielding optimal outcomes, coupling high time efficiency with commendable
accuracy rates.</p>
      <p>
        The utilization of recurrent neural networks (RNNs) has garnered significant traction, particularly
in audio-to-text translation. Foreseeing advancements in this domain, we anticipate substantial
enhancements in the near future. Harnessing multimodal data—integrating not only audio but also video
data—alongside machine learning techniques stands as a promising frontier in medical research. This
synergy could revolutionize diagnostic accuracy, treatment methodologies, and overall healthcare
practices, paving the way for groundbreaking advancements in the field [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion</title>
      <p>In conclusion, our exploration into multimodal Ukrainian medical data showcased the immense
potential of integrating audio and video data within machine learning-based systems, particularly
employing RNNs (Recurrent Neural Networks). This convergence offers a transformative pathway for
reinforcing medical research endeavors, elevating diagnostic accuracy, treatment methodologies, and
healthcare practices.</p>
      <p>By harnessing the power of RNNs alongside multimodal data, we've unveiled new horizons for
enhanced understanding and analysis within medical research. The fusion of audio and video data
within this framework promises a more comprehensive view of patient health, potentially
revolutionizing diagnostic precision and personalized treatment plans, increasing the accuracy of
medical data handling, and cutting the time spent on paperwork after examinations such as retinal
photography or computed tomography (CT) scans.</p>
      <p>Moving forward, leveraging these machine learning approaches in multimodal Ukrainian medical
data sets the stage for groundbreaking advancements, paving the way for innovative applications and
more effective healthcare solutions. The fusion of audio, video, and textual data within RNN
frameworks not only strengthens the foundations of medical research but also offers a promising avenue
for the future of healthcare practices.</p>
      <p>Funding Statement: This research is funded by the EURIZON Fellowship Program: “Remote
Research Grants for Ukrainian Researchers”, grant № 138.</p>
    </sec>
    <sec id="sec-8">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Smit</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.
          <article-title>CheXbert: Combining automatic labelers and expert annotations for accurate radiology report labeling using BERT</article-title>
          .
          <source>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing</source>
          , pp.
          <fpage>1500</fpage>
          -
          <lpage>1519</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Willemink</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          et al.
          <article-title>Preparing medical imaging data for machine learning</article-title>
          .
          <source>Radiology 295</source>
          ,
          <fpage>4</fpage>
          -
          <lpage>15</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Havryliuk</surname>
          </string-name>
          , et. al.,
          <article-title>"Interactive Information System for Automated Identification of Operator Personnel by Schulte Tables Based on Individual Time Series"</article-title>
          ,
          <source>Advances in Artificial Systems for Logistics Engineering III</source>
          <volume>180</volume>
          ,
          <year>372</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Cirillo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          et al.
          <article-title>Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare</article-title>
          .
          <source>NPJ Digit. Med</source>
          .
          <volume>3</volume>
          ,
          <issue>81</issue>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Damask</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.
          <article-title>Patients with high genome-wide polygenic risk scores for coronary artery disease may receive greater clinical benefit from alirocumab treatment in the ODYSSEY OUTCOMES trial</article-title>
          .
          <source>Circulation 141</source>
          , pp.
          <fpage>624</fpage>
          -
          <lpage>636</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Vyas</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eisenstein</surname>
            ,
            <given-names>L. G.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>D. S.</given-names>
          </string-name>
          <article-title>Hidden in plain sight: reconsidering the use of race correction in clinical algorithms</article-title>
          .
          <source>N. Engl. J. Med</source>
          .
          <volume>383</volume>
          ,
          <fpage>874</fpage>
          -
          <lpage>882</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>O.</given-names>
            <surname>Basystiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Melnykova</surname>
          </string-name>
          ,
          <article-title>Multimodal Approaches for Natural Language Processing in Medical Data</article-title>
          ,
          <source>IDDM 2022 Informatics &amp; Data-Driven Medicine</source>
          , pp.
          <fpage>246</fpage>
          -
          <lpage>252</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Rybchak</surname>
          </string-name>
          , et. al.,
          <article-title>Analysis of computer vision and image analysis technics</article-title>
          ,
          <source>ECONTECHMOD: an international quarterly journal on economics of technology and modelling processes</source>
          , Lublin, Poland,
          <year>2017</year>
          , pp.
          <fpage>79</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Havryliuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Dumyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vovk</surname>
          </string-name>
          .
          <article-title>Extraction of Structural Elements of the Text Using Pragmatic Features for the Nomenclature of Cases Verification</article-title>
          . In: Hu, Z., Wang, Y., He, M. (eds)
          <source>Advances in Intelligent Systems, Computer Science and Digital Economics IV. CSDEIS 2022. Lecture Notes on Data Engineering and Communications Technologies</source>
          ,
          <year>2023</year>
          , vol.
          <volume>158</volume>
          . Springer, Cham. https://doi.org/10.1007/978-3-031-24475-9_57.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Rajpurkar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Banerjee</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Topol</surname>
            ,
            <given-names>E. J.</given-names>
          </string-name>
          <article-title>AI in health and medicine</article-title>
          .
          <source>Nat. Med</source>
          .
          <volume>28</volume>
          ,
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Esteva</surname>
          </string-name>
          et al.
          <article-title>Deep learning-enabled medical computer vision</article-title>
          .
          <source>NPJ Digit. Med</source>
          .
          <volume>4</volume>
          ,
          <issue>5</issue>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Acosta</surname>
            ,
            <given-names>J. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Falcone</surname>
            ,
            <given-names>G. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rajpurkar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          et al.
          <article-title>Multimodal biomedical AI</article-title>
          .
          <source>Nat. Med</source>
          .
          <volume>28</volume>
          ,
          <fpage>1773</fpage>
          -
          <lpage>1784</lpage>
          (
          <year>2022</year>
          ). https://doi.org/10.1038/s41591-022-01981-2
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Nataliya</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          , et. al.
          <article-title>"Big Data analysis in development of personalized medical system"</article-title>
          ,
          <source>The 10th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN)</source>
          ,
          <volume>160</volume>
          ,
          <fpage>229</fpage>
          -
          <lpage>234</lpage>
          . (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Yaroslav</given-names>
            <surname>Tolstyak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Myroslav</given-names>
            <surname>Havryliuk</surname>
          </string-name>
          ,
          <article-title>"An Assessment of the Transplant's Survival Level for Recipients after Kidney Transplantations using Cox Proportional-Hazards Model"</article-title>
          ,
          <source>Proceedings of the 5th International Conference on Informatics &amp; Data-Driven Medicine</source>
          , Lyon, France, November 18-20, CEUR-WS.org,
          <year>2022</year>
          , pp.
          <fpage>260</fpage>
          -
          <lpage>265</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Kang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ko</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Mersha</surname>
          </string-name>
          , T. B.
          <article-title>A roadmap for multi-omics data integration using deep learning</article-title>
          .
          <source>Brief. Bioinform</source>
          .
          <volume>23</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>