=Paper= {{Paper |id=Vol-2786/Paper1 |storemode=property |title=Ontology based Machine Learning in Semantic Audio Applications - Abstract |pdfUrl=https://ceur-ws.org/Vol-2786/Paper1.pdf |volume=Vol-2786 |authors=George Fazekas |dblpUrl=https://dblp.org/rec/conf/isic2/Fazekas21 }} ==Ontology based Machine Learning in Semantic Audio Applications - Abstract== https://ceur-ws.org/Vol-2786/Paper1.pdf


Ontology based Machine Learning in Semantic Audio Applications
George Fazekasᵃ

ᵃ Queen Mary University of London, London




                     Abstract: Semantic Audio aims to associate audio and music content with meaningful labels
                     and descriptions. It is an emerging technological and research field at the confluence of signal
                     processing, machine learning, including deep learning, and formal knowledge representation.
                     Semantic Audio can facilitate the detection of acoustic events in complex environments, the
                     recognition of beat, tempo, chords or keys in music recordings or the creation of smart
                     ecosystems and environments, for instance, to enhance audience and performer interaction.
                     Semantic Audio can bring together creators, distributors and consumers in the music value
                     chain in intuitive new ways. Ontologies play a crucial role in enabling complex Semantic
                     Audio applications by providing shared conceptual models that enable combining different
                     data sources and heterogeneous services using Semantic Web technologies. The benefits of
                     using these techniques have been demonstrated in several large projects recently, including
                     Audio Commons, an ecosystem built around Creative Commons audio content. In this talk, I
                     will first outline fundamental principles in Semantic Audio analysis and introduce important
                     concepts in representing audio and music data. Specific demonstrators will be discussed in
                     the areas of smart audio content ecosystems, music recommendation, intelligent audio
                     production and the application of IoT principles in musical interaction. I will discuss how
                     machine learning and the use of ontologies in tandem benefit specific applications, and talk
                     about challenges in fusing audio and semantic technologies as well as the opportunities they
                     call forth.


      1. Short Biography
   Dr George Fazekas is a Senior Lecturer (Associate Prof.) in Digital Media at the Centre for Digital Music, Queen Mary University of London (QMUL). He holds BSc, MSc and PhD degrees in Electrical Engineering. He is an investigator of UKRI's £6.5M Centre for Doctoral Training in Artificial Intelligence and Music (AIM CDT). He has published over 140 academic papers in the fields of Music Information Retrieval, Semantic Web, Ontologies, Deep Learning and Semantic Audio, including an award-winning paper on transfer learning.
   Fazekas has participated in research and knowledge-transfer projects as a researcher, a developer and at management level. He was QMUL's Principal Investigator on the H2020 Audio Commons project (grant no. 688382, EUR 2.9M, 2016-2019), which received the best score from expert reviewers of the European Commission, and Co-I of additional research projects and industrial grants worth over £410K, including the JISC-funded Shared Open Vocabularies for Audio Research and Retrieval. He worked with BBC R&D to create mood-based music recommendation systems in the nationally funded Making Musical Mood Metadata project. He was general chair of ACM's Audio Mostly 2017 and papers co-chair and committee leader of the AES 53rd International Conference on Semantic Audio. He is a regular reviewer for IEEE Transactions, JNMR and others. He is a member of the IEEE, ACM, BCS and AES and received the Citation Award of the AES for his work organising the Semantic Audio Analysis Technical Committee.

______________________________
ISIC'21: International Semantic Intelligence Conference, February 25–27, 2021, New Delhi, India
✉: g.fazekas@qmul.ac.uk (G. Fazekas)
Copyright © 2021 for this paper by the authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
For more details on recent works, see http://eecs.qmul.ac.uk/~gyorgyf/research.html