<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Inductive Future Time Prediction on Temporal Knowledge Graphs with Interval Time</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roxana Pop</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Egor V. Kostylev</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
        <p>Temporal Knowledge Graphs (TKGs) are an extension of Knowledge Graphs where facts are temporally scoped. They have recently received increasing attention in knowledge management, mirroring an increased interest in temporal graph learning within the graph learning community. While there have been many systems proposed for TKG learning, there are many settings to be considered, and not all of them are yet fully explored. In this position paper we identify a problem not yet approached, inductive future time prediction on interval-based TKGs, and formalise it as a machine learning task. We then outline several promising approaches for solving it, focusing on a neurosymbolic framework connecting TKG learning with the temporal reasoning formalism DatalogMTL.</p>
      </abstract>
      <kwd-group>
        <kwd>Temporal Knowledge Graphs</kwd>
        <kwd>Time prediction</kwd>
        <kwd>Time intervals</kwd>
        <kwd>Inductive KG completion</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        dynamic link prediction and time prediction [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. Dynamic link prediction answers the question
‘What?’—that is, fills the ‘?’ in incomplete temporal facts such as (?, Visits, Canada)@2009—while time
prediction answers ‘When?’—that is, fills the ‘?’ in, for example, (Obama, Visits, Canada)@?. The
time prediction task is the less researched one, though arguably the more challenging; moreover,
systems developed for time prediction can usually also address dynamic link prediction (see
Section 2 for an overview).
      </p>
      <p>
        There are several settings in which both the dynamic link prediction and time prediction tasks
can be addressed as ML tasks, specified by the way in which the training and validation/test
data relate to each other. The interpolation/extrapolation distinction [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is made regarding time
scopes: if an ML model is restricted to the time points or intervals seen while training, it works
under interpolation, but if it can adapt to unseen times (e.g., future ones, relevant for forecasting),
it works under extrapolation. The transductive/inductive distinction [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], borrowed from the
static graph learning literature [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], is similar in spirit but concerns how the ML model deals with
unseen entities: if it can adapt to unseen entities it is inductive, and otherwise it is transductive.
      </p>
      <p>
        In short, interval-based TKGs generalize point-based TKGs, time prediction is more
challenging than dynamic link prediction, and the extrapolation and inductive settings are more
general than the interpolation and transductive ones. This motivates us to introduce and study
the ML task of inductive future time prediction on interval-based TKGs (ITKGs). We are currently
developing neural architectures for this problem, as well as exploring their connections to a
recent symbolic temporal reasoning language, DatalogMTL [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This position paper outlines
our current progress towards the design and evaluation of this neurosymbolic approach.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>There are many systems developed for ML tasks on TKGs, though, as we will highlight in the
following, few of these systems consider ITKGs, few of them approach the time prediction task
and few of them work in the inductive setting—with no overlap that we are aware of.</p>
      <p>
        The existing literature focuses predominantly on point-based TKGs [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref6 ref7">10, 11, 12, 13, 14, 15, 7, 16,
17, 18, 6</xref>
        ], though some works consider interval-based TKGs [
        <xref ref-type="bibr" rid="ref3">3, 19, 20, 21</xref>
        ]. As for the timeline
type, there are some works viewing TKGs as snapshots of static graphs sampled at equidistant
time points, most notably RE-GCN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and RE-NET [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], thus working with a discrete timeline.
Yet, there are various works, both specifically for TKGs [
        <xref ref-type="bibr" rid="ref10 ref11 ref3 ref6">11, 10, 3, 19, 18, 6</xref>
        ], and in the larger
temporal graph learning community [
        <xref ref-type="bibr" rid="ref4">4, 22, 23</xref>
        ] which focus on continuous time.
      </p>
      <p>
        Most of the existing TKG learning systems address the dynamic link prediction task [
        <xref ref-type="bibr" rid="ref11 ref12 ref13 ref14 ref15 ref7">24,
11, 12, 13, 14, 15, 25, 26, 27, 28, 7, 18, 20</xref>
        ], and only a few approach also time prediction [
        <xref ref-type="bibr" rid="ref10 ref3 ref6">10, 3,
19, 16, 21, 29, 6</xref>
        ], of which some are limited to time points [
        <xref ref-type="bibr" rid="ref10 ref6">10, 16, 6</xref>
        ], while others can predict
intervals [
        <xref ref-type="bibr" rid="ref3">3, 19, 29</xref>
        ]. Some time prediction methods, such as those employed by EvoKG [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
GHNN [16] and Know-Evolve [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] for TKGs, and DyRep [22] for temporal networks, are based
on Temporal Point Processes, while the more recent systems that can predict time intervals,
such as TIMEPLEX [19] and TIME2BOX [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], use the greedy coalescing method [19].
      </p>
      <p>
        As for the settings, there are some works focusing on interpolation [
        <xref ref-type="bibr" rid="ref3">30, 31, 3, 18, 29</xref>
        ], though
most systems target extrapolation [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref7">32, 10, 11, 33, 12, 13, 14, 15, 25, 7, 16, 17</xref>
        ]. Yet, there are not
many inductive TKG systems, and their approaches are varied: TLogic [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] is based on temporal
graphs, FILT [34] on concept-aware mining, and TANGO [25] on neural ODEs [35]. If we look
at the broader static and temporal graph learning areas, inductive capabilities are often achieved
by using architectures based on Graph Neural Networks (GNNs) [
        <xref ref-type="bibr" rid="ref8">22, 23, 36, 37, 8</xref>
        ].
      </p>
      <p>
          Most of the aforementioned methods are neural in nature, with the notable exception of
TLogic [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which mines temporal logical rules. Yet, the rules in TLogic are limited to time
points. On the symbolic side, there exist temporal logics that can deal with time intervals, such
as DatalogMTL [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]—a recently introduced formalism extending Datalog [38] to the temporal
dimension. Datalog is a rule-based logical language which can be used for static KG reasoning
and which has been utilised in neurosymbolic methods in KG learning [37]. While the
connections of DatalogMTL and ITKG learning have not yet been explored, a DatalogMTL program
can generate new temporal facts via reasoning and could hence be seen as a predictor on ITKG
data. This predictor could be used for both dynamic link prediction and time prediction, could
work in an inductive setting (similar to Datalog for static KGs [37]), and could be restricted to
only generate facts with future temporal annotations — working in the extrapolation setting.
      </p>
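<p>For illustration, such a predictor could be specified by rules like the following DatalogMTL rule; the predicates and the 4-year window are our own hypothetical example, not taken from any cited system. The metric operator in the body requires the elected-fact to have held at some point in the preceding four years:</p>

```latex
% Hypothetical DatalogMTL rule: x is president of y now if x was
% elected president of y at some point within the last 4 years.
\mathit{IsPresidentOf}(x, y) \leftarrow \Diamond^{-}_{[0,4]}\,\mathit{ElectedPresidentOf}(x, y)
```

<p>Applied to facts with future temporal annotations only, reasoning with such rules would act as an extrapolation-setting predictor.</p>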
    </sec>
    <sec id="sec-3">
      <title>3. Problem formalisation</title>
      <p>In this section, we formalise the problem that we study, starting from basic notions such as
temporal knowledge graphs and concluding with its cast as an ML task.</p>
      <p>Let 𝒯 and ℛ be finite sets of types and relations, respectively, collectively called predicates 𝒫,
and let ℰ be an infinite set of entities, also known as constants. Let 𝕋 be a timeline—that is, a set
of timepoints; in our context, it is either the integers ℤ or the rationals ℚ. We are interested in intervals
over 𝕋, and concentrate on the set Int𝕋 of non-empty closed intervals [t1, t2] ⊆ 𝕋 with t1 ≤ t2.
An interval of the form [t1, t1] is punctual, and we may write it just t1.</p>
      <p>A fact is a triple of the form (e, type, c), where e ∈ ℰ and c ∈ 𝒯, or of the form (e1, r, e2),
where e1, e2 ∈ ℰ and r ∈ ℛ. Then, a temporal fact is f@ι, where f is a fact and ι ∈ Int𝕋.
Definition 1. An interval-based temporal knowledge graph (ITKG) over 𝕋 is a set of facts
(which we call atemporal in this context) and temporal facts. An ITKG is a point-based temporal
knowledge graph (PTKG) if all the intervals in its temporal facts are punctual.</p>
      <p>For an ITKG G, let Pred(G) and Const(G) denote the predicates and entities appearing in G,
respectively, and let Sig(G) = Pred(G) ∪ Const(G).</p>
      <p>Intuitively, an atemporal fact in an ITKG represents something that holds all the time, so it is
redundant to have a temporal version of this triple in the same ITKG; moreover, overlaps of
intervals for the same triple are also redundant. This motivates the following notion: an ITKG
G is in normal form if there is no f@ι in G with f in G (as an atemporal triple), and there
are no f@ι1 and f@ι2 in G with ι1 ∩ ι2 ≠ ∅. It is straightforward to reduce an ITKG to an
ITKG in normal form in a unique way, and the resulting ITKG is semantically equivalent to the
original one. So, in the rest of this paper, we silently concentrate on normal ITKGs.</p>
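<p>A minimal sketch of this normalisation over a discrete timeline, assuming ITKGs are stored as a set of atemporal triples plus a map from triples to interval lists (our own illustrative representation, not from the paper):</p>

```python
from typing import Dict, List, Set, Tuple

Triple = Tuple[str, str, str]
Interval = Tuple[int, int]  # closed interval [t1, t2] over a discrete timeline

def normalise(atemporal: Set[Triple],
              temporal: Dict[Triple, List[Interval]]) -> Dict[Triple, List[Interval]]:
    """Return the temporal part of the normal form: temporal copies of atemporal
    facts are dropped, and intersecting intervals of the same triple are coalesced."""
    result: Dict[Triple, List[Interval]] = {}
    for triple, intervals in temporal.items():
        if triple in atemporal:  # the fact holds at all times anyway
            continue
        merged: List[Interval] = []
        for t1, t2 in sorted(intervals):
            if merged and t1 <= merged[-1][1]:  # intervals intersect: coalesce them
                merged[-1] = (merged[-1][0], max(merged[-1][1], t2))
            else:
                merged.append((t1, t2))
        result[triple] = merged
    return result
```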
      <p>Every time point t ∈ 𝕋 defines the past subgraph G≤t of an ITKG G over 𝕋, which contains
• every atemporal fact f in G;
• every fact f@[t1, t2′] with t2′ = min(t2, t) for a fact f@[t1, t2] ∈ G with t1 ≤ t.</p>
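<p>The past-subgraph restriction can be sketched as follows, under the same illustrative triple-to-intervals representation (atemporal facts are kept unchanged, so only the temporal part is shown):</p>

```python
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]
Interval = Tuple[int, int]  # closed interval [t1, t2]

def past_subgraph(temporal: Dict[Triple, List[Interval]], t: int) -> Dict[Triple, List[Interval]]:
    """Restrict every temporal fact to the part of its interval lying at or
    before t; facts starting after t are dropped entirely."""
    past: Dict[Triple, List[Interval]] = {}
    for triple, intervals in temporal.items():
        clipped = [(t1, min(t2, t)) for (t1, t2) in intervals if t1 <= t]
        if clipped:
            past[triple] = clipped
    return past
```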
      <p>Intuitively, future time prediction on ITKGs is the problem of predicting the future temporal
facts of an ITKG G on the basis of its past counterpart G≤t. To formalise this problem as an ML
task, we assume that every ITKG G≤t, with t the maximal time point in an interval of G≤t, has
the (most probable) temporal completion G with Sig(G) = Sig(G≤t) such that G≤t is the past
subgraph of G limited by t. In the following definition we concentrate on time prediction—that
is, on predicting the nearest to t maximal future interval for a given tuple, or the absence of
such an interval. We also consider general inductive prediction—that is, the setting where
the prediction function applies to any ITKG over the given predicates 𝒫, while the entities may
be arbitrary. In particular, an inductive ML model trained on ITKGs with one set of entities
should be applicable to ITKGs with any other entities.</p>
      <p>Definition 2. The inductive next interval function next-int(G≤t, f) maps an ITKG G≤t over 𝕋
with Pred(G≤t) ⊆ 𝒫 and temporal completion G, and a triple f over Sig(G≤t), to the smallest
interval [t1, t2] such that t1 ≥ t, t2 &gt; t, and f@[t1, t2] ∈ G, if such an interval exists, and to
a special symbol ∅ otherwise; here, an interval [t1, t2] is smaller than another interval [t1′, t2′] if
t1 &lt; t1′ (note that, due to normalisation, we need not compare overlapping intervals).</p>
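<p>A literal reading of the next interval function, assuming the temporal completion is given as a normalised map from triples to interval lists (an illustrative representation of ours); None plays the role of the special symbol ∅:</p>

```python
from typing import Dict, List, Optional, Tuple

Triple = Tuple[str, str, str]
Interval = Tuple[int, int]  # closed interval [t1, t2]

def next_interval(completion: Dict[Triple, List[Interval]],
                  triple: Triple, t: int) -> Optional[Interval]:
    """Earliest interval [t1, t2] of `triple` in the completion with t1 >= t
    and t2 > t, or None when no such interval exists."""
    candidates = [(t1, t2) for (t1, t2) in completion.get(triple, [])
                  if t1 >= t and t2 > t]
    # in a normalised ITKG the candidates are disjoint, so comparing by the
    # left endpoint picks the unique smallest one
    return min(candidates) if candidates else None
```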
      <p>Thus, the ML task of inductive future time prediction on ITKGs for the time domain 𝕋 is to
learn (in a supervised way) the next interval function next-int.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed approaches</title>
      <p>The main approach we would like to investigate is neurosymbolic in nature. We would like to
develop a framework in which we train a neural architecture for time interval prediction and
then extract a temporal logical program from the trained model that can generate the future
time intervals through the means of temporal reasoning. As baselines we will use purely neural
methods to make sure the neurosymbolic method has at least comparable empirical results.</p>
      <sec id="sec-4-1">
        <title>4.1. Neurosymbolic architecture</title>
        <p>
          Monotonic GNNs (MGNNs) [37] are a class of GNNs introduced for KG completion, which
generate the same facts on an input KG as the application of a set of Datalog [38] rules. Moreover,
for each trained MGNN model, the equivalent Datalog rules can be automatically extracted
[37], resulting in a neurosymbolic architecture that allows for a smooth switch between the two
paradigms. We are currently generalising this architecture to ITKGs, moving from Datalog to
its temporal counterpart, DatalogMTL. One of the key insights of the MGNN-based (static) KG
completion system is to encode the original graph into a different graph in which each (potential)
edge becomes a node, and the existence of a certain type or relation is given by a feature attached
to such a node. We exemplify in Figure 1 how this encoding could be expanded to ITKGs (with
some technical details omitted for simplicity). The nodes of the encoding are pairs of constants
in the original graph, edges link nodes that share constants, and the node features are indexed
by types and relations (which are Human, IsPresidentOf, Visits, IsPresidentOf − 1, Visits− 1 in our
example). However, while in the static case [37] the features indicate through Booleans the
truth values of types and relations (e.g. [
          <xref ref-type="bibr" rid="ref1">0, 0, 0, 0, 1</xref>
          ] for (Canada, Obama)), in our case they
contain the time intervals where the facts are true. In case of multiple time intervals we have
multiple node features; see features for (Canada, Obama). How and if MGNNs or other GNNs
can be modified to work in the temporal case is something we are currently researching.
        </p>
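<p>The encoding could be sketched as below; the predicate list, the handling of type facts, and all names are our own illustrative assumptions rather than the system's actual implementation:</p>

```python
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]
Interval = Tuple[int, int]

# predicate order of the feature vectors in our running example
PREDICATES = ["Human", "IsPresidentOf", "Visits", "IsPresidentOf-1", "Visits-1"]

def encode(temporal: Dict[Triple, List[Interval]]):
    """Encode an ITKG as a graph over entity pairs: each (potential) edge of the
    original graph becomes a node whose features hold, per predicate, the time
    intervals where the corresponding fact is true."""
    entities = set()
    for (s, p, o) in temporal:
        entities.add(s)
        if p != "type":  # the object of a type fact is a type, not an entity
            entities.add(o)
    features: Dict[Tuple[str, str], List[List[Interval]]] = {
        (a, b): [[] for _ in PREDICATES] for a in entities for b in entities
    }
    for (s, p, o), intervals in temporal.items():
        if p == "type":  # (e, type, Human) lands on the loop node (e, e)
            features[(s, s)][PREDICATES.index(o)] += intervals
        else:
            features[(s, o)][PREDICATES.index(p)] += intervals
            features[(o, s)][PREDICATES.index(p + "-1")] += intervals
    # nodes are linked whenever they share a constant
    edges = [(u, v) for u in features for v in features if u != v and set(u) & set(v)]
    return features, edges
```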
        <p>[Figure 1: the encoding of a small example ITKG. Nodes are entity pairs such as (Obama, Obama), (Canada, Obama), and (Biden, Biden); each node carries one feature vector per time interval, with entries such as [2009, 2009], [2016, 2016], and [2009, 2017] where the corresponding facts hold, and ∅ where they do not.]</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Benchmarks, baselines, and metrics</title>
        <p>
          Existing works for time prediction on ITKGs [
          <xref ref-type="bibr" rid="ref3">19, 3</xref>
          ] evaluate time prediction performance on
the YAGO11k [29], Wikidata12k [29], and Wikidata114K [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] datasets. We will investigate if
these datasets can be turned into inductive benchmarks, as well as design new benchmarks
from other relevant datasets.
        </p>
        <p>
          Regarding baselines, we believe that GraphMixer [39], a recent system based on the
MLP-Mixer architecture [40], is a good candidate due to its simplicity, and we plan to adapt it to
time prediction on ITKGs. We will also investigate GNN-based architectures with inductive
and continuous time capabilities such as DyRep [22], TGN [23], and EvoKG [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Some of these
architectures have time prediction capabilities, but they are limited to time points. For the
architectures where time interval prediction is not achievable through simple modifications,
we will employ the greedy coalescing method [19]. With regards to evaluation metrics, two
have been proposed for the interval time prediction task: aeIOU [19] and gaeIOU [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], of which
gaeIOU has more desirable properties [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and it is the one we will therefore concentrate on.
        </p>
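<p>For concreteness, our reading of the aeIOU metric of [19] on closed integer intervals is sketched below (gaeIOU [3] refines it); this is an assumption-laden illustration, not the authors' implementation:</p>

```python
def aeiou(gold: tuple, pred: tuple) -> float:
    """Our reading of aeIOU on closed integer intervals: the number of shared
    time points (clamped to at least 1, so near misses are still rewarded by
    proximity) divided by the size of the smallest interval covering both."""
    (g1, g2), (p1, p2) = gold, pred
    inter = max(0, min(g2, p2) - max(g1, p1) + 1)  # shared time points
    hull = max(g2, p2) - min(g1, p1) + 1           # points in the covering interval
    return max(1, inter) / hull
```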
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future work</title>
      <p>In this paper we highlighted the more general views on TKGs (continuous and interval-based),
the different ML-based tasks approached in the literature (dynamic link and time prediction), and
the more general ML settings (extrapolative and inductive). We then formalised the future time
prediction task on interval-based TKGs, and proposed to extend a neurosymbolic framework
from the static KG case to approach this task, as well as provided a way of extending the graph
encoding from the static case. Our next steps are to adapt GNN-based architectures to work on
the encoded graph and explore the extraction of DatalogMTL programs from the trained models.</p>
      <p>[16] Z. Han, Y. Wang, Y. Ma, S. Günnemann, V. Tresp, Graph Hawkes neural network for
future prediction on temporal knowledge graphs, in: The Automated Knowledge Base
Construction (AKBC), 2020.
[17] Z. Han, P. Chen, Y. Ma, V. Tresp, Explainable subgraph reasoning for forecasting on
temporal knowledge graphs, in: The International Conference on Learning Representations
(ICLR), 2021.
[18] R. Goel, S. M. Kazemi, M. Brubaker, P. Poupart, Diachronic embedding for temporal
knowledge graph completion, in: The AAAI Conference on Artificial Intelligence (AAAI),
2020, pp. 3988–3995.
[19] P. Jain, S. Rathi, Mausam, S. Chakrabarti, Temporal Knowledge Base completion: New
algorithms and evaluation protocols, in: The Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2020, pp. 3733–3747.
[20] A. García-Durán, S. Dumančić, M. Niepert, Learning sequence encoders for temporal
knowledge graph completion, in: The Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2018, pp. 4816–4821.
[21] J. Leblay, M. W. Chekol, Deriving validity time in knowledge graph, in: The Web
Conference (WWW), 2018, pp. 1771–1776.</p>
      <p>
[22] R. S. Trivedi, M. Farajtabar, P. Biswal, H. Zha, DyRep: Learning Representations over
Dynamic Graphs, in: The International Conference on Learning Representations (ICLR),
2019.
[23] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, M. Bronstein, Temporal graph
networks for deep learning on dynamic graphs, in: The ICML Workshop on Graph
Representation Learning (GRL@ICML), 2020.
[24] P. Shao, D. Zhang, G. Yang, J. Tao, F. Che, T. Liu, Tucker decomposition-based temporal
knowledge graph completion, Knowledge-Based Systems 238 (2022).
[25] Z. Han, Z. Ding, Y. Ma, Y. Gu, V. Tresp, Learning neural ordinary equations for forecasting
future links on temporal knowledge graphs, in: The Conference on Empirical Methods in
Natural Language Processing (EMNLP), 2021, pp. 8352–8364.
[26] J. Wu, M. Cao, J. C. K. Cheung, W. L. Hamilton, TeMP: Temporal message passing for
temporal knowledge graph completion, in: The Conference on Empirical Methods in
Natural Language Processing (EMNLP), 2020, pp. 5730–5746.
[27] T. Lacroix, G. Obozinski, N. Usunier, Tensor decompositions for temporal knowledge base
completion, in: The International Conference on Learning Representations (ICLR), 2020.
[28] J. Jung, J. Jung, U. Kang, Learning to walk across time for temporal knowledge graph
completion, in: The Conference on Knowledge Discovery and Data Mining (SIGKDD),
2021, p. 786–795.
[29] S. S. Dasgupta, S. N. Ray, P. Talukdar, HyTE: Hyperplane-based temporally aware
knowledge graph embedding, in: The Conference on Empirical Methods in Natural Language
Processing (EMNLP), 2018, pp. 2001–2011.
[30] Y.-C. Lee, J. Lee, D. Lee, S.-W. Kim, THOR: Self-supervised temporal knowledge graph
embedding via three-tower graph convolutional networks, in: The International Conference
on Data Mining (ICDM), 2022, pp. 1035–1040.
[31] A. Sadeghian, M. Armandpour, A. Colas, D. Z. Wang, ChronoR: Rotation based temporal
knowledge graph embedding, in: The AAAI Conference on Artificial Intelligence (AAAI),
2021, pp. 6471–6479.
[32] S. Wang, X. Cai, Y. Zhang, X. Yuan, CRNet: Modeling concurrent events over temporal
knowledge graph, in: The International Semantic Web Conference (ISWC), 2022, pp.
516–533.
[33] Z. Li, S. Guan, X. Jin, W. Peng, Y. Lyu, Y. Zhu, L. Bai, W. Li, J. Guo, X. Cheng, Complex
evolutional pattern learning for temporal knowledge graph reasoning, in: The Annual
Meeting of the Association for Computational Linguistics (ACL), 2022, pp. 290–296.
[34] Z. Ding, J. Wu, B. He, Y. Ma, Z. Han, V. Tresp, Few-shot inductive learning on temporal
knowledge graphs using concept-aware information, in: The Conference on Automated
Knowledge Base Construction (AKBC), 2022.
[35] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, D. K. Duvenaud, Neural Ordinary
Differential Equations, in: The Advances in Neural Information Processing Systems (NeurIPS),
volume 31, Curran Associates, Inc., 2018.
[36] S. Liu, B. Cuenca Grau, I. Horrocks, E. V. Kostylev, INDIGO: GNN-based inductive
knowledge graph completion using pair-wise encoding, in: The Advances in Neural Information
Processing Systems (NeurIPS), 2021, pp. 2034–2045.
[37] D. J. Tena Cucala, B. Cuenca Grau, E. V. Kostylev, B. Motik, Explainable GNN-based models
over knowledge graphs, in: The International Conference on Learning Representations
(ICLR), 2022.
[38] S. Abiteboul, R. Hull, V. Vianu, Foundations of Databases, Addison-Wesley, 1995.
[39] W. Cong, S. Zhang, J. Kang, B. Yuan, H. Wu, X. Zhou, H. Tong, M. Mahdavi, Do we
really need complicated model architectures for temporal networks?, in: The International
Conference on Learning Representations (ICLR), 2023.
[40] I. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung,
A. Steiner, D. Keysers, J. Uszkoreit, M. Lucic, A. Dosovitskiy, MLP-Mixer: An all-MLP
Architecture for Vision, in: The Advances in Neural Information Processing Systems
(NeurIPS), 2021, pp. 24261–24272.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          , E. Blomqvist,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cochez</surname>
          </string-name>
          , C. d'Amato, G. de Melo,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E. L.</given-names>
            <surname>Gayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Neumaier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Ngomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Polleres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Rashid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmelzeisen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Sequeda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Staab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <article-title>Knowledge graphs</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>54</volume>
          (
          <year>2022</year>
          )
          <volume>71</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>71</lpage>
          :
          <fpage>37</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Manola</surname>
          </string-name>
          , E. Miller, RDF Primer,
          <source>W3C Recommendation</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Janowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mai</surname>
          </string-name>
          ,
          <article-title>Time in a box: Advancing knowledge graph completion with temporal scopes</article-title>
          , in: The Knowledge Capture Conference (K-CAP)
          ,
          <year>2021</year>
          , pp.
          <fpage>121</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Souza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mesquita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kaski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <article-title>Provably expressive temporal graph networks</article-title>
          ,
          <source>in: The Advances in Neural Information Processing Systems (NeurIPS)</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Kazemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kobyzev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sethi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Forsyth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Poupart</surname>
          </string-name>
          ,
          <article-title>Representation learning for dynamic graphs: A survey</article-title>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Trivedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Song</surname>
          </string-name>
          , Know-Evolve:
          <article-title>Deep temporal reasoning for dynamic knowledge graphs</article-title>
          ,
          <source>in: The International Conference on Machine Learning (ICML)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3462</fpage>
          -
          <lpage>3471</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <article-title>Recurrent event network: Autoregressive structure inferenceover temporal knowledge graphs</article-title>
          ,
          <source>in: The Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>6669</fpage>
          -
          <lpage>6683</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ying</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <article-title>Inductive representation learning on large graphs</article-title>
          ,
          <source>in: The Advances in Neural Information Processing Systems (NeurIPS)</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Brandt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Kalaycı</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ryzhikov</surname>
          </string-name>
          , G. Xiao,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zakharyaschev</surname>
          </string-name>
          ,
          <article-title>Querying log data with metric temporal logic</article-title>
          ,
          <source>J. Artif. Intell. Res</source>
          .
          <volume>62</volume>
          (
          <year>2018</year>
          )
          <fpage>829</fpage>
          -
          <lpage>877</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cristofor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Faloutsos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <article-title>EvoKG: Jointly modeling event time and network structure for reasoning over temporal knowledge graphs</article-title>
          ,
          <source>in: The ACM International Conference on Web Search and Data Mining (WSDM)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>794</fpage>
          -
          <lpage>803</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hildebrandt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joblin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tresp</surname>
          </string-name>
          ,
          <article-title>TLogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs</article-title>
          ,
          <source>in: The AAAI Conference on Artificial Intelligence (AAAI)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>4120</fpage>
          -
          <lpage>4127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks</article-title>
          ,
          <source>in: The AAAI Conference on Artificial Intelligence (AAAI)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4732</fpage>
          -
          <lpage>4740</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>TimeTraveler: Reinforcement learning for temporal knowledge graph forecasting</article-title>
          ,
          <source>in: The Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>8306</fpage>
          -
          <lpage>8319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <article-title>Temporal knowledge graph reasoning based on evolutional representation learning</article-title>
          ,
          <source>in: The International Conference on Research and Development in Information Retrieval (SIGIR)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>408</fpage>
          -
          <lpage>417</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <article-title>Search from history and reason for future: Two-stage reasoning on temporal knowledge graphs</article-title>
          ,
          <source>in: The Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4732</fpage>
          -
          <lpage>4743</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>