=Paper= {{Paper |id=Vol-3198/invited2 |storemode=property |title=Multimodal Deep Learning in Healthcare |pdfUrl=https://ceur-ws.org/Vol-3198/invited2.pdf |volume=Vol-3198 |authors=Jiaqing Liu |dblpUrl=https://dblp.org/rec/conf/atait/Liu22 }} ==Multimodal Deep Learning in Healthcare== https://ceur-ws.org/Vol-3198/invited2.pdf
Speaker: Jiaqing Liu (Ritsumeikan University, Japan)

Biography:
Jiaqing Liu received the B.E. degree from Northeastern University, Shenyang, China, in
2016, and the M.E. and D.E. degrees from Ritsumeikan University, Kyoto, Japan, in 2018
and 2021, respectively. From 2020 to 2021, he was a JSPS Research Fellow for Young
Scientists. From October 2021 to March 2022, he was a Specially Appointed Assistant
Professor with the Department of Intelligent Media, ISIR, Osaka University, Osaka, Japan.
He is currently an Assistant Professor with the College of Information Science and
Engineering, Ritsumeikan University. His research interests include pattern recognition,
image processing, and machine learning.


Title:
Multimodal Deep Learning in Healthcare

Abstract:
Deep learning has been successfully applied in many research fields, such as computer
vision, speech recognition, and natural language processing. However, most of this work
focuses on a single modality, whereas multimodal information is more useful in practical
applications. Multimodal deep learning has therefore attracted considerable attention and
has become an important topic in artificial intelligence. Compared with traditional
single-modal deep learning, multimodal deep learning poses the following challenges:
development of multimodal datasets, multimodal representation, multimodal alignment,
multimodal translation, and multimodal co-learning. The purpose of this talk is to
introduce efficient and accurate multimodal deep learning methods and to apply them to
depression estimation.
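To make the representation and fusion challenges concrete, the sketch below shows one
common baseline, late fusion, in PyTorch: each modality is encoded separately and the
encodings are concatenated before a prediction head. The choice of modalities (audio and
text features), the dimensions, and the class name are illustrative assumptions, not
details taken from the talk.

    # A minimal, hypothetical sketch of late-fusion multimodal learning,
    # assuming two input modalities (e.g., audio and text features for
    # depression-severity estimation). Not the speaker's actual method.
    import torch
    import torch.nn as nn

    class LateFusionRegressor(nn.Module):
        def __init__(self, audio_dim=128, text_dim=300, hidden_dim=64):
            super().__init__()
            # One encoder per modality maps raw features to a shared hidden size.
            self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
            self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
            # The fused representation feeds a regression head, e.g.,
            # predicting a depression-severity score.
            self.head = nn.Linear(2 * hidden_dim, 1)

        def forward(self, audio_feats, text_feats):
            a = self.audio_encoder(audio_feats)
            t = self.text_encoder(text_feats)
            fused = torch.cat([a, t], dim=-1)  # late fusion by concatenation
            return self.head(fused)

    model = LateFusionRegressor()
    score = model(torch.randn(4, 128), torch.randn(4, 300))  # batch of 4
    print(score.shape)  # torch.Size([4, 1])

More elaborate approaches address the remaining challenges, for example aligning
modalities in time or handling missing modalities via co-learning, but concatenation of
per-modality encodings is a standard starting point.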