<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multimodal Deep Learning in Healthcare</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jiaqing Liu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <bio>
            <p>He received the B.E. degree in 2016, and the M.E. and D.E. degrees from Ritsumeikan University, Kyoto, Japan, in 2018 and 2021, respectively. From 2020 to 2021, he was a JSPS Research Fellow (Research Fellowship for Young Scientists). From …</p>
          </bio>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ritsumeikan University</institution>, <addr-line>Kyoto, Japan</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Deep learning has been successfully applied in many research fields, such as computer vision, speech recognition, and natural language processing. Most of these methods, however, focus on a single modality, whereas multimodal information is more useful for practical applications. Multimodal deep learning has therefore attracted considerable attention and has become an important topic in artificial intelligence. Compared with traditional single-modal deep learning, it faces the following challenges: development of multimodal datasets, multimodal representation, multimodal alignment, multimodal translation, and multimodal co-learning. The purpose of this talk is to introduce efficient and accurate multimodal deep learning methods and apply them to depression estimation.</p>
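        <p>A common pattern behind the representation challenge is learning a joint representation by fusing per-modality embeddings. Below is a minimal sketch of such late fusion for a depression-severity regressor, assuming PyTorch; the class name, feature dimensions, and single-score head are hypothetical, and the sketch illustrates the general idea rather than the specific methods presented in the talk.</p>
        <preformat>
# Illustrative late-fusion sketch (assumed PyTorch example, not the
# speaker's method; all names and dimensions are hypothetical).
import torch
import torch.nn as nn

class LateFusionRegressor(nn.Module):
    """Fuses per-modality embeddings into a joint representation,
    then regresses a single severity score."""
    def __init__(self, audio_dim=128, text_dim=300, hidden_dim=64):
        super().__init__()
        # One small encoder per modality.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Joint representation: concatenation of the unimodal embeddings.
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, audio_feats, text_feats):
        joint = torch.cat(
            [self.audio_enc(audio_feats), self.text_enc(text_feats)], dim=-1
        )
        return self.head(joint).squeeze(-1)  # one score per sample

model = LateFusionRegressor()
scores = model(torch.randn(4, 128), torch.randn(4, 300))  # batch of 4
        </preformat>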
      </abstract>
    </article-meta>
  </front>
  <body />
  <back>
    <ref-list />
  </back>
</article>