Preface (CSSA)
Since the beginning of the 2000s, there has been an increasing number of studies
and standards proposed for generating large-scale symbolic representations of
knowledge, known as Knowledge Graphs (KGs), out of heterogeneous resources
such as text and images. Moreover, there have been many advances in symbolic
reasoning, as well as in its applications to various fields. Recently, sub-symbolic
methods have gained momentum. These methods aim at generating distributed
representations from resources such as text or symbolic representations
(Graph Neural Networks, KG embeddings, etc.). Sub-symbolic methods for
symbolic representations mainly focus on the task of KG completion, but they
have recently also been used for other tasks, e.g., in Natural Language
Processing (NLP). A promising future direction for these methods is a
combination of the symbolic and sub-symbolic approaches, leading to a form of
neurosymbolic reasoning. Advances in real-world applications of these methods
will also serve as a stepping stone in proving their practicality.
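As a concrete illustration of such sub-symbolic methods, the sketch below shows how a translational KG embedding (in the style of TransE) scores candidate triples for KG completion. The entities, relation, dimensionality, and untrained random embeddings are illustrative assumptions, not drawn from any specific system.

```python
# A minimal, hypothetical sketch of sub-symbolic KG completion with a
# TransE-style scoring function. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vocabulary of entities and relations (assumed for illustration).
entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def score(head: str, rel: str, tail: str) -> float:
    """TransE plausibility: smaller ||h + r - t|| means more plausible."""
    h, r, t = E[entities[head]], R[relations[rel]], E[entities[tail]]
    return -np.linalg.norm(h + r - t)

# KG completion: rank candidate tails for (Paris, capital_of, ?).
candidates = sorted(entities, key=lambda t: -score("Paris", "capital_of", t))
print(candidates)  # with trained embeddings, "France" would rank first
```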



Overview (KGRL)
Knowledge Graphs are becoming the standard for storing, retrieving, and
querying structured data. In academia and industry, they are increasingly used to
provide background knowledge. In recent years, several research contributions
have shown that machine learning, especially representation learning, can be
successfully applied to knowledge graphs, enabling inductive inference about
facts with unknown truth values.

Several of these approaches encode the graph structure and can be used for tasks
such as link prediction, node classification, entity resolution, recommendation,
dialogue systems, and many more. Although the proposed graph representations
can capture complex relational patterns over multiple hops, they are still
insufficient for more complex tasks such as relational reasoning. For such tasks,
we envision a need for representations with more expressive power, which could
include representations in non-Euclidean space. This ranges from capturing,
e.g., type-constrained, transitive, or hierarchical relations in an embedding,
up to learning expressive knowledge representation languages such as
first-order logic rules.
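One well-known way to give embeddings of hierarchical relations more expressive power is to leave Euclidean space and embed entities in a hyperbolic (Poincaré) ball. The sketch below computes the Poincaré-ball distance that underlies such embeddings; the example points and dimension are assumptions for demonstration only.

```python
# A minimal sketch of a non-Euclidean (hyperbolic) representation:
# geodesic distance in the Poincare ball, where points near the unit
# boundary sit "deep" in a hierarchy. Points are illustrative assumptions.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Distance between points inside the unit ball (requires ||x|| < 1)."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

root = np.array([0.0, 0.0])    # e.g., a top-level class
child = np.array([0.7, 0.0])   # a more specific entity, nearer the boundary
leaf = np.array([0.95, 0.0])   # deepest in the toy hierarchy

print(poincare_distance(root, child))  # moderate distance
print(poincare_distance(child, leaf))  # distances grow fast near the boundary
```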

Furthermore, most approaches for learning representations of knowledge
graphs focus on transductive settings, i.e., all entities and relations need to be
seen during training, which does not allow predictions for unseen elements. For
evolving graphs, approaches are required that generalize to unseen entities and
relations. One avenue of research to address inductiveness is to employ
multimodal approaches that compensate for missing modalities; recently,
meta-learning approaches have also been applied successfully.
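To make the transductive/inductive distinction concrete, the sketch below computes a node embedding in the spirit of GraphSAGE: the representation is a function of node features and neighbourhood structure rather than a trained lookup table, so it also exists for nodes unseen during training. The toy graph, features, and weights are assumptions for illustration.

```python
# A minimal sketch of an inductive node representation: mean-aggregate
# neighbour features (GraphSAGE style) instead of looking embeddings up
# in a fixed table. All names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
in_dim, out_dim = 4, 3
W_self = rng.normal(size=(in_dim, out_dim))   # weights for the node itself
W_neigh = rng.normal(size=(in_dim, out_dim))  # weights for the aggregate

def embed(node, features, neighbours):
    """Embedding as a function of features + structure, not a lookup."""
    agg = np.mean([features[n] for n in neighbours[node]], axis=0)
    h = features[node] @ W_self + agg @ W_neigh
    return np.maximum(h, 0.0)  # ReLU

features = {n: rng.normal(size=in_dim) for n in ["a", "b", "c", "new"]}
neighbours = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "new": ["a", "b"]}

# "new" never appeared during training, yet it still gets an embedding,
# because the embedding is computed, not stored.
print(embed("new", features, neighbours))
```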

Lately, the generalization of deep neural network models to non-Euclidean
domains such as graphs and manifolds has been explored. These works study the
fundamental aspects that influence the underlying geometry of structured data
for building graph representations. Recent advances in graph representation
learning have led to novel approaches such as convolutional neural networks for
graphs, attention-based graph networks, etc. Most graphs here are either
undirected or directed, with both discrete and continuous node and edge
attributes, representing types of spatial or spectral data.
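As an illustration of the convolutional approaches mentioned above, the sketch below applies one GCN-style propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W); the toy adjacency matrix, features, and weights are assumptions for demonstration only.

```python
# A minimal sketch of one graph convolution step (Kipf & Welling style):
# normalize the self-looped adjacency, then propagate and transform.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # undirected toy graph
H = rng.normal(size=(3, 4))              # node features
W = rng.normal(size=(4, 2))              # layer weights

A_hat = A + np.eye(3)                                  # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))          # D^-1/2 diagonal
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H_next = np.maximum(A_norm @ H @ W, 0.0)  # one convolution + ReLU
print(H_next)
```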

In this workshop, we want to see novel representation learning methods,
approaches that can be applied to inductive learning and to (logical) reasoning,
and works that shed light on the expressive power, interpretability, and
generalization of graph representation learning methods.
We also want to bring together researchers from different disciplines who are
united by their adoption of the machine learning techniques mentioned above.