<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Organiser Team at ImageCLEFlifelog 2020: A Baseline Approach for Moment Retrieval and Athlete Performance Prediction using Lifelog Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tu-Khiem Le</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Van-Tu Ninh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liting Zhou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Minh-Huy Nguyen-Ngoc</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huu-Duc Trinh</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nguyen-Hien Tran</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Piras</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Riegler</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pal Halvorsen</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathias Lux</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Minh-Triet Tran</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Graham Healy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cathal Gurrin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Duc-Tien Dang-Nguyen</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dublin City University</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Klagenfurt University</institution>
          ,
          <addr-line>Klagenfurt</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Pluribus One &amp; University of Cagliari</institution>
          ,
          <addr-line>Cagliari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Simula Research Laboratory</institution>
          ,
          <addr-line>Oslo</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Bergen</institution>
          ,
          <addr-line>Bergen</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Science</institution>
          ,
          <addr-line>VNU-HCM, Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>For the LMRT task at ImageCLEFlifelog 2020, LIFER 3.0, a new version of the LIFER system with improvements in the user interface and system affordance, is used and evaluated via feedback from a user experiment. In addition, since both tasks share a common dataset, LIFER 3.0 borrows some features from the LifeSeeker system deployed for the Lifelog Search Challenge, namely free-text search, visual similarity search and an elastic sequencing filter. For the SPLL task, we propose a naive solution that captures the rate of change in running speed and weight, then obtains the target changes for each subtask using average computation and a linear regression model. The results presented in this paper can be used as comparative baselines for other participants in the ImageCLEFlifelog 2020 challenge.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Recent advances in low-cost sensing technologies have resulted in a rapid increase
in the volume of digital records (i.e. pictures, videos, audio clips) generated by
personal devices such as smartphones, cameras, or wearable devices. This has
resulted in a need for efficient management systems to organise and retrieve
information from such archives. As a result, many efforts have been made to put
together lifelog data and state-of-the-art methods to develop interactive search
engines to serve this purpose, which are evaluated via various benchmarking
challenges, namely NTCIR [11-13], LSC [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ] and ImageCLEFlifelog [4-6].
      </p>
      <p>
        In the 2020 edition of ImageCLEF [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], the Lifelog Moments Retrieval
Task [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] (LMRT) of the ImageCLEFlifelog challenge utilised a bigger dataset
with 114 days of lifelog data, which is the same as the dataset used in the Lifelog
Search Challenge 2020 [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The ultimate goal of LMRT is to retrieve a number of
relevant moments which match a given query. In addition, a brand-new lifelog
task, Sport Performance Lifelog (SPLL), was proposed with the aim of
predicting the expected performance of athletes after they trained for a sporting event.
The SPLL data was gathered from 16 different people who trained for a sporting
event for approximately six months. The data was collected using three different
approaches: wearable devices (Fitbit Tracker, Fitbit Versa) for
biometric data recording (heart rate, calories, speed, pace, running distance, etc.),
Google Forms for self-reporting, and PMSYS for subjective wellbeing, injuries
and training load. The SPLL task was split into three subtasks as follows:
1. Predict the change in running speed, given by the change in seconds used
per km (kilometre speed), from the initial run to the run at the end of the
reporting period.
2. Predict the change in weight, in kilograms, from the beginning of the reporting
period to the end of the reporting period.
3. Predict the change in weight, in kilograms, from the beginning of February to
the end of the reporting period using the images.
      </p>
      <p>
        For the LMRT task, we inherited the design of LifeSeeker [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ] with
free-text search, an external visual concept detector and temporal exploration using
elastic sequencing. We introduced changes to the system's interface in order to
tackle the LMRT task and conducted a user study to gain more insights into the
performance of the search engine. Moreover, we give an overview of our search
engine, describe how the user study was set up, and analyse the results on the LMRT
task. For the SPLL task, we provide basic approaches and baseline solutions to
predict the expected performance of the athletes, including the change in running
speed and weight, from the recorded data of the Fitbit device and food images only.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        MyLifeBits [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] was a pioneering system which enabled interaction between end
users and lifelog data using a basic interactive retrieval mechanism. This
interaction was then enhanced by the work of Doherty et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which allowed
users to create faceted queries, making it one of the first multimodal interactive
lifelog retrieval systems. Due to the increasing attention on lifelogging, many
lifelog search engines have been developed, which escalates the need for
a fair comparison among systems. Hence, annual
challenges such as the NTCIR Lifelog Task [11-13], the Lifelog Search Challenge [
        <xref ref-type="bibr" rid="ref14 ref15">14,15</xref>
        ] and
ImageCLEFlifelog [4-6, 24] have successfully facilitated the comparative
evaluation of retrieval systems while also supporting researchers in
making progress in a shared and collaborative environment.
      </p>
      <p>
        Considering specifically the LMRT task of ImageCLEFlifelog 2019 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], nine
teams took part in the challenge with a wide variety of approaches to address
the problem in both automatic and interactive manners. There was a general
trend that most of the teams extended the provided visual concept
annotations by utilising various concept detectors [
        <xref ref-type="bibr" rid="ref19 ref25 ref26 ref28">19, 25, 26, 28</xref>
        ], which was believed
to enhance retrieval performance. For automatic runs, the retrieval approach
employed by most teams was similar: eliminating low-quality images
and calculating similarity based on relevance scores [
        <xref ref-type="bibr" rid="ref26 ref30 ref8">8, 26, 30</xref>
        ]. In contrast, for
interactive systems, we observed that different variants of the Bag-of-Words
model were applied to the whole dataset to generate embeddings which served
as the backbone for the search engines [
        <xref ref-type="bibr" rid="ref19 ref25 ref7">7, 19, 25</xref>
        ]. Our system relied on the
provided metadata and also integrated additional visual concepts to solve this year's
challenge. The design of LIFER 3.0 is presented in the following section.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Overview of LIFER 3.0 - Baseline Interactive Search Engine for the Lifelog Moment Retrieval Task (LMRT)</title>
      <p>
        In this task, we introduce LIFER 3.0 as the baseline interactive retrieval
search engine, which is an improved version of the previous baseline systems
at the ImageCLEFlifelog challenge [
        <xref ref-type="bibr" rid="ref25 ref29">25, 29</xref>
        ]. LIFER 3.0 inherited the advancements
made in LifeSeeker [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ], an interactive retrieval system at LSC, which were
recently implemented for the NTCIR-14 Lifelog Task as its baseline system [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
Although the LMRT task and the LSC challenge share the same dataset, containing
114 days with 191,439 lifelogging images and corresponding metadata
(biometrics, location and GPS, human activity, visual concepts and annotations), the
ultimate goal of each task is different. LSC aims at retrieving a single image that
perfectly fits the given narrative, while LMRT expects the result to be a ranked
list of relevant images (moments) which match the description and cover a wide
range of moments. Therefore, by utilising LifeSeeker for this task, we want to
evaluate the performance of this search engine in terms of relevant images
(precision) and moment coverage (recall).
      </p>
      </p>
      <p>
        Our system, as described in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], provides a free-text search mechanism to
ensure that users (even novice users) can easily learn and use the search
engine. The underlying process includes parsing the input text query into
various lexemes and mapping them to different part-of-speech (POS) tags using a
natural language processor (nltk [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]). This enables us to perform term matching
between the query and the index at a higher granularity to produce a ranked list
of target images.
      </p>
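      <p>
        To make the free-text matching concrete, the following is a minimal, illustrative sketch (not the actual LIFER 3.0 implementation): the query is split into lexeme-like terms, stopwords are dropped, and each image is scored by how many query terms appear in its indexed fields. The field names, stopword list and counting-based score are assumptions for illustration.

```python
# Illustrative sketch of free-text term matching against per-image
# index fields. Field names and the simple scoring are assumptions,
# not the exact LIFER 3.0 algorithm.

def parse_query(text):
    """Lowercase and split a query into terms, dropping common stopwords."""
    stopwords = {"a", "an", "the", "in", "at", "on", "of", "i", "was", "drinking_"}
    stopwords.discard("drinking_")  # keep the set literal simple
    terms = [t.strip(".,?!").lower() for t in text.split()]
    return [t for t in terms if t and t not in stopwords]

def score_image(query_terms, image_fields):
    """Count query terms that appear in any indexed field of the image."""
    indexed = set()
    for terms in image_fields.values():
        indexed.update(terms)
    return sum(1 for t in query_terms if t in indexed)

def rank_images(query, index):
    """Return image ids ordered by descending match score."""
    q = parse_query(query)
    scored = [(score_image(q, fields), img) for img, fields in index.items()]
    return [img for s, img in sorted(scored, reverse=True) if s > 0]

index = {
    "img_001": {"location": ["kitchen"], "concepts": ["coffee", "cup"]},
    "img_002": {"location": ["office"], "concepts": ["laptop", "desk"]},
}
print(rank_images("drinking coffee in the kitchen", index))  # ['img_001']
```

        A real system would weight fields differently (e.g. location vs. visual concepts) and use POS tags to decide which fields a term should match.
      </p>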
      <p>
        The index for the system is initialised with a similar approach using nltk,
where each image is converted into a collection of terms organised into multiple
fields such as time, location, visual concepts, etc. We further employed the
bottom-up attention method [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which is pre-trained on Visual Genome [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ],
so as to better tag the images, since Visual Genome comes with a larger range
of object classes and object attributes. In addition, the image annotations are also
extended to include any text appearing within the images, which was generated using
text recognition from CRAFT [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
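      <p>
        The conversion of one image's metadata record into indexable field terms can be sketched as below. The record keys, the time-of-day bucketing and the field names are hypothetical and for illustration only; they are not the exact LIFER index schema.

```python
# Hedged sketch of turning an image metadata record into lowercase
# terms per index field. Keys and field names are assumptions.

def build_index_entry(metadata):
    """Convert one image's metadata record into terms organised by field."""
    hour = int(metadata["local_time"][11:13])  # hour from ISO timestamp
    entry = {}
    entry["time"] = [metadata["local_time"].split("T")[0],
                     "morning" if hour in range(5, 12) else "afternoon_or_later"]
    entry["location"] = [metadata["location"].lower()]
    entry["concepts"] = [c.lower() for c in metadata["concepts"]]
    return entry

meta = {"local_time": "2018-05-03T09:15:00",
        "location": "Kitchen",
        "concepts": ["Coffee", "Cup", "Table"]}
print(build_index_entry(meta))
```

        Terms produced this way can then be extended with extra concepts from a stronger detector and with recognised in-image text, as described above.
      </p>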
      <p>In order to adapt LifeSeeker to work for LMRT, we modified the interface of
the search engine to let a user quickly preview an image by hovering over it while
pressing the x key, and select multiple images for submission by right-clicking
them. The selected images are held locally as pinned items and can be viewed and
revised before submitting the results of the queries. The elastic sequencing filters
from LifeSeeker, which display past and future images, are simplified and merged
into one single filter to minimise the interaction effort required and reduce the
searching time. Figure 1 illustrates the changes introduced in the interface of
LIFER 3.0.</p>
    </sec>
    <sec id="sec-5">
      <title>Sport Performance Lifelog Task: A Baseline Approach</title>
      <p>In total, we submitted two runs which differ only in subtask 1, as we
nominated two approaches for this subtask.</p>
      <p>Subtask 1: Predict the change in running speed, given by the change in
seconds used per km (kilometre speed), from the initial run to the run at the
end of the reporting period.</p>
      <p>In this subtask, we exploit the exercise data recorded from the Fitbit Tracker to
gain information about exercise activities, running distance and exercise duration
to infer the pace, which is the kilometre speed measured in seconds. Therefore,
we filter the list of exercise activities and keep the fields with the information of
distance and exercise duration to compute the pace.</p>
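      <p>
        The pace computation described above amounts to dividing duration by distance. A minimal sketch, assuming illustrative field names rather than the exact Fitbit export schema:

```python
# Sketch of the pace computation: pace is measured in seconds per
# kilometre, derived from exercise duration and running distance.
# Field names are assumptions, not the real Fitbit export schema.

def pace_seconds_per_km(duration_seconds, distance_km):
    """Seconds needed per kilometre for one activity."""
    if not distance_km > 0:
        raise ValueError("distance must be positive")
    return duration_seconds / distance_km

activities = [
    {"type": "Run", "duration_s": 1800, "distance_km": 5.0},
    {"type": "Treadmill", "duration_s": 1500, "distance_km": 4.0},
    {"type": "Bike", "duration_s": 3600, "distance_km": 0.0},  # no distance: filtered out
]
paces = [pace_seconds_per_km(a["duration_s"], a["distance_km"])
         for a in activities if a["distance_km"] > 0]
print(paces)  # [360.0, 375.0]
```
      </p>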
      <p>In run 1, we first compute the change between the paces of consecutive filtered
activities. Then, we split the changes into positive and negative ones, compute the
average for each type of change, and finally sum them. In run 2, we only consider
running and treadmill training, as they both involve running activities, and follow
the procedure in run 1 to obtain the sum of positive and negative average changes
for the two activities separately. We then train a linear regression model to
predict the actual pace change from the paces of running and treadmill training
activities.</p>
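      <p>
        The run-1 procedure above can be sketched in a few lines: take consecutive pace differences, average the positive and negative differences separately, and sum the two averages. This is a minimal illustration of the described averaging step, not the full submitted pipeline.

```python
# Sketch of the run-1 averaging procedure: average positive and
# negative consecutive pace changes separately, then sum them.

def estimate_pace_change(paces):
    """Estimate the overall pace change (seconds per km) over a series."""
    diffs = [b - a for a, b in zip(paces, paces[1:])]
    pos = [d for d in diffs if d > 0]
    neg = [d for d in diffs if 0 > d]
    avg_pos = sum(pos) / len(pos) if pos else 0.0
    avg_neg = sum(neg) / len(neg) if neg else 0.0
    return avg_pos + avg_neg

# Example: paces mostly decreasing, i.e. the runner is getting faster.
print(estimate_pace_change([360, 350, 355, 340]))  # -7.5
```

        A negative result indicates fewer seconds per km at the end of the period, i.e. an improvement. Run 2 applies the same per-activity-type statistics as inputs to a linear regression instead of summing them directly.
      </p>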
      <p>Subtask 2: Predict the change in weight, in kilograms, from the beginning of the
reporting period to the end of the reporting period.</p>
      <p>In this subtask, we employ the self-reported weight to calculate the change
between the start of the logging period and its end. The approach is the same as in run
1 of subtask 1: we compute the difference between the weights of consecutive rows
in the self-reporting files, then divide the differences based on their sign (positive
or negative) to calculate the average for each type, and finally sum them.</p>
      <p>Subtask 3: Predict the change in weight from the beginning of February to
the end of the reporting period in kilograms using the images.</p>
      <p>
        To predict the weight changes based on food images, we train a Convolutional
Neural Network using the Inception V3 architecture [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] on the Food-101 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] dataset
to detect the kind of food in the images. From the name of the food, we
search for the calorie information of the food in the nutrition database 7 and
use it to estimate the weight gain after the athlete had a meal.
      </p>
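      <p>
        The last step, converting predicted food labels into an estimated weight change, can be sketched as below. The calorie table, the meal labels and the 7700 kcal-per-kg conversion (a common rule of thumb for the energy density of body fat) are assumptions for illustration; they are not values taken from the paper or from the nutrition database.

```python
# Illustrative sketch: map predicted food labels to calories and
# convert the total into an estimated weight change. The calorie
# values and the kcal-per-kg factor are assumptions.

CALORIES_PER_SERVING = {"pizza": 285, "salad": 150, "hamburger": 354}
KCAL_PER_KG = 7700.0  # rule-of-thumb energy density of body fat

def estimated_weight_gain_kg(predicted_labels):
    """Estimate weight gain (kg) from a list of classifier-predicted meals."""
    total_kcal = sum(CALORIES_PER_SERVING.get(label, 0) for label in predicted_labels)
    return total_kcal / KCAL_PER_KG

meals = ["pizza", "hamburger", "salad"]  # e.g. Food-101 classifier outputs
print(round(estimated_weight_gain_kg(meals), 4))  # 0.1025
```
      </p>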
    </sec>
    <sec id="sec-6">
      <title>Experiment and Results</title>
      <p>LMRT Task
We carried out a user study with three participants (two novice users and one
expert user) for the search task, each accounting for one run in our submission. The
expert user is an author of the system, while the novice users are people who
had no prior knowledge of lifelogging or the search engine. The participants
were given a brief introduction to lifelogging and lifelog data and an overview of the
functionalities of LIFER 3.0. We allowed the participants to freely explore the
search engine using the development queries for as long as they wished. The experiment
started once they were ready and familiar with the search engine.</p>
      <p>
        In the experiment, all LMRT test queries were presented to the participants
with a time limit of five minutes per query. However, there was no time limit for
reading a query's description and narrative, so the participants could spend as
long as they wished to understand the query before beginning the search process.
We provided no clarification or guidance to the users during this user study. Once
they finished their search task, there was a follow-up questionnaire to be filled in to
get their opinions about LIFER 3.0. The list of questions was derived from the
User Experience Questionnaire (UEQ) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
7 http://nutritionix.com
      </p>
      <p>Table 1 displays the results of our three runs, where each run was generated by
one participant. Among the three runs, our system achieved a precision score
of 36% on Run 3, a recall of 44% on Run 2, and an overall F1-score of 32% on
Run 3. As can be seen from the table,
the expert user tends to perform better than the novice users. Nonetheless, the gap
between the scores of the novice users and the expert user is not large, which indicates
that LIFER 3.0 might be general enough for any user to perform the search
task.</p>
      <p>Moreover, we further analysed the precision and recall scores across multiple
cut-off positions. As illustrated in Figure 2, LIFER 3.0 achieved a high precision
at the top 5 images, which then dropped gradually. This means that users are able to select
correct images as soon as the results are presented to them. For the recall score, we
observed a flat curve in the submissions of most participants, which implies that
the number of distinct moments the users select does not change regardless of the
cut-off position. This happens because the participants tried neither to select all
relevant images nor to look for other similar moments. Therefore, it is possible to
boost the overall scores by applying post-processing and re-ranking to
append similar target images to the list submitted by the users.
From the user questionnaire, we note that the pragmatic quality and hedonic
quality demonstrated proficiency in some criteria while indicating some
areas of improvement that we should work on in the future. In terms of
pragmatic quality, LIFER 3.0 is very easy (+2.0) and slightly supportive (+1.0) to
the users, but it is also moderately inefficient (-1.0) and confusing (-1.0). The
utilisation of free-text search is probably the main factor which contributes to
the ease of using the system for the participants. Moving to the hedonic quality,
LIFER 3.0 is quite exciting and interesting to the users, and they see it as a slightly
usual (-0.7) system which is halfway between conventional (0.0) and inventive
(0.0).</p>
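      <p>
        The cut-off analysis above rests on two standard quantities: precision at a cut-off and the fraction of distinct relevant moments covered at that cut-off. A minimal sketch follows; the image ids, relevance labels and moment assignments are made up for illustration.

```python
# Sketch of precision and moment-coverage recall at a cut-off position,
# as used in the analysis above. All data below is illustrative.

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k submitted images that are relevant."""
    hits = sum(1 for img in ranked[:k] if img in relevant)
    return hits / k

def moment_recall_at_k(ranked, moment_of, total_moments, k):
    """Fraction of distinct relevant moments covered in the top-k images."""
    covered = {moment_of[img] for img in ranked[:k] if img in moment_of}
    return len(covered) / total_moments

ranked = ["a", "b", "c", "d", "e"]          # a submitted run
relevant = {"a", "c", "e"}                  # ground-truth relevant images
moment_of = {"a": "m1", "c": "m1", "e": "m2"}  # two distinct moments overall
print(precision_at_k(ranked, relevant, 5))          # 0.6
print(moment_recall_at_k(ranked, moment_of, 2, 5))  # 1.0
```

        A flat recall curve, as observed in the study, means the `covered` set stops growing as k increases, even while precision at small k remains high.
      </p>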
      <p>Based on the evaluation results and users' feedback, we identified some
concrete actions that need to be taken to improve our system:
- Continue to work on the search algorithm to increase the system's efficiency
by performing better matching between queries and data and minimising the
execution time.
- Revise the user interface to present results in a clear and logical manner.</p>
      <p>Some instructions will be added to serve as a system guide in order to lower
the confusion.</p>
      <p>As illustrated in Table 3, there is a small difference between the two submitted
runs in terms of the primary score. However, we observed a large gap in the secondary
score between run 1 and run 2. The difference is the result of the two approaches
to tackling subtask 1, the pace change estimation. The average computation
approach captures the direction of pace changes better, as it takes the rate of
positive and negative change of each into account. Despite that, it fails to
estimate the change in seconds when dealing with multiple types of exercise and
training activities, which lowers the secondary score. The linear regression model,
in contrast, provides a better estimation since it learns to combine the changes
in running sessions and treadmill sessions. The detailed scores for each subtask
of our baseline approaches are presented in Table 4.</p>
      <p>In this paper, we presented a baseline solution for both challenges in
ImageCLEFlifelog 2020. For the SPLL task, to predict whether the change in running time per
kilometre and weight after training is an improvement or a deterioration, we proposed
a basic solution by accumulating the differences between consecutive target
values, computing the averages of the positive and negative differences separately, and
finally summing them. For the LMRT task, we introduced a baseline interactive search
engine which is derived from the LifeSeeker search engine from the Lifelog Search
Challenge, with three main features: free-text search, visual
similarity exploration and temporal views using elastic sequencing. We
conducted a user study and drew many insights from the experiment in terms
of interaction and performance through the task's evaluation and the users' quality
feedback. For future development, we aim to improve the search
interface to present the retrieval results efficiently and to continue working on the
core functionalities of the search engine to perform better matching between queries
and the dataset in order to boost the overall search accuracy.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgement</title>
      <p>This publication has emanated from research supported in part by research
grants from the Irish Research Council (IRC) under Grant Number GOIPG/2016/741
and Science Foundation Ireland under grant numbers SFI/12/RC/2289 and
SFI/13/RC/2106.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buehler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Teney</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gould</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Bottom-up and top-down attention for image captioning and visual question answering</article-title>
          .
          <source>In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          . pp.
          <volume>6077</volume>
          -
          <issue>6086</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baek</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Character region awareness for text detection</article-title>
          .
          <source>2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          pp.
          <volume>9357</volume>
          -
          <issue>9366</issue>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bossard</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guillaumin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Gool</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Food-101 - mining discriminative components with random forests</article-title>
          . In:
          <string-name>
            <surname>Fleet</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pajdla</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schiele</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuytelaars</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (eds.) Computer Vision - ECCV
          <year>2014</year>
          . pp.
          <volume>446</volume>
          -
          <fpage>461</fpage>
          . Springer International Publishing, Cham
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boato</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEFlifelog 2017: Lifelog retrieval and summarization</article-title>
          . In: CLEF (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEFlifelog 2018: Daily living understanding and lifelog moment retrieval</article-title>
          .
          <source>In: CLEF</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEFlifelog 2019: Solve my life puzzle and lifelog moment retrieval</article-title>
          .
          <source>In: CLEF</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dao</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vo</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phan</surname>
            ,
            <given-names>T.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zettsu</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>BIDAL@imageCLEFlifelog2019: The Role of Content and Context of Daily Activities in Insights from Lifelogs</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dogariu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Multimedia Lab @ ImageCLEF 2019 Lifelog Moment Retrieval Task</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Doherty</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pauly-Takacs</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caprani</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moulin</surname>
            ,
            <given-names>C.J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Connor</surname>
            ,
            <given-names>N.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          :
          <article-title>Experiences of aiding autobiographical memory using the SenseCam</article-title>
          .
          <source>Human-Computer Interaction</source>
          <volume>27</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>151</fpage>
          -
          <lpage>174</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gemmell</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bell</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lueder</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Mylifebits: a personal database for everything</article-title>
          .
          <source>Commun. ACM</source>
          <volume>49</volume>
          (
          <issue>1</issue>
          ),
          <fpage>88</fpage>
          -
          <lpage>95</lpage>
          (
          <year>2006</year>
          ), http://dblp.uni-trier.de/db/journals/cacm/cacm49.html#GemmellBL06
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hopfgartner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albatal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Overview of NTCIR-12 lifelog task</article-title>
          .
          <source>In: NTCIR</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hopfgartner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albatal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>D.T.D.</given-names>
          </string-name>
          :
          <article-title>Overview of NTCIR-13 lifelog-2 task</article-title>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hopfgartner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albatal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Healy</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Overview of the NTCIR-14 lifelog-3 task</article-title>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jonsson</surname>
            ,
            <given-names>B.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lokoč</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , Hurst, W.,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          , Scho mann, K.:
          <article-title>Introduction to the third annual lifelog search challenge (LSC'20)</article-title>
          .
          <source>In: Proceedings of the 2020 International Conference on Multimedia Retrieval</source>
          . pp.
          <fpage>584</fpage>
          -
          <lpage>585</lpage>
          . ICMR '20
          ,
          Association for Computing Machinery, New York, NY, USA (
          <year>2020</year>
          ), https://doi.org/10.1145/3372278.3388043
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schoeffmann</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leibetseder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duane</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lokoč</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hurst</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Comparing approaches to interactive lifelog search at the lifelog search challenge (LSC2018)</article-title>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Peteri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abacha</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Datla</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasan</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DemnerFushman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kozlovski</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cid</surname>
            ,
            <given-names>Y.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pelka</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friedrich</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Herrera</surname>
            ,
            <given-names>A.G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halvorsen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chamberlain</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clark</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fichou</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berari</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brie</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dogariu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stefan</surname>
            ,
            <given-names>L.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Constantin</surname>
            ,
            <given-names>M.G.</given-names>
          </string-name>
          :
          <article-title>Overview of the ImageCLEF 2020: Multimedia retrieval in lifelogging, medical, nature, and internet applications</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 11th International Conference of the CLEF Association (CLEF 2020), vol. 12260. LNCS Lecture Notes in Computer Science</source>
          , Springer, Thessaloniki, Greece (September 22-25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Krishna</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groth</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Hata</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kravitz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalantidis</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>L.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shamma</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fei-Fei</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Visual genome: Connecting language and vision using crowdsourced dense image annotations</article-title>
          (
          <year>2016</year>
          ), https://arxiv.org/abs/1602.07332
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Laugwitz</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Held</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schrepp</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Construction and evaluation of a user experience questionnaire</article-title>
          . In:
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (ed.)
          <source>HCI and Usability for Education and Work</source>
          . pp.
          <fpage>63</fpage>
          -
          <lpage>76</lpage>
          . Springer Berlin Heidelberg, Berlin, Heidelberg (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>N.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>D.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          :
          <article-title>Lifelog Moment Retrieval with Advanced Semantic Extraction and Flexible Moment Visualization for Exploration</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Redondo</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smyth</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Lifeseeker: Interactive lifelog search engine at LSC 2019</article-title>
          .
          <source>In: Proceedings of the ACM Workshop on Lifelog Search Challenge</source>
          . pp.
          <fpage>37</fpage>
          -
          <lpage>40</lpage>
          . LSC '19
          ,
          Association for Computing Machinery, New York, NY, USA (
          <year>2019</year>
          ), https://doi.org/10.1145/3326460.3329162
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>T.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>H.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Healy</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Lifeseeker 2.0: Interactive lifelog search engine at LSC 2020</article-title>
          .
          <source>In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge</source>
          . pp.
          <fpage>57</fpage>
          -
          <lpage>62</lpage>
          . LSC '20
          ,
          Association for Computing Machinery, New York, NY, USA (
          <year>2020</year>
          ), https://doi.org/10.1145/3379172.3391724
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Loper</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bird</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>NLTK: The natural language toolkit</article-title>
          .
          <source>In: Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume</source>
          <volume>1</volume>
          . pp.
          <fpage>63</fpage>
          -
          <lpage>70</lpage>
          . ETMTNLP '02
          ,
          Association for Computational Linguistics, USA (
          <year>2002</year>
          ), https://doi.org/10.3115/1118108.1118117
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Healy</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smyth</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A Baseline Interactive Retrieval Engine for the NTCIR-14 Lifelog-3 Semantic Access Task</article-title>
          .
          <source>In: The Fourteenth NTCIR conference (NTCIR-14)</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halvorsen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEF Lifelog 2020: Lifelog Moment Retrieval and Sport Performance Lifelog</article-title>
          .
          <source>In: CLEF2020 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Thessaloniki, Greece (September 22-25
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Ninh</surname>
            ,
            <given-names>V.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tran</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          :
          <article-title>Lifer 2.0: Discovering personal lifelog insights using an interactive lifelog retrieval system</article-title>
          .
          <source>In: CLEF</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neves</surname>
            ,
            <given-names>A.J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliveira</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          :
          <article-title>UAPTBioinformatics working notes at ImageCLEF 2019 Lifelog Moment Retrieval (LMRT) task</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Szegedy</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vanhoucke</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioffe</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shlens</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wojna</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Rethinking the inception architecture for computer vision</article-title>
          .
          <source>In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          . IEEE (June
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Taubert</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kahl</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Automated Lifelog Moment Retrieval based on Image Segmentation and Similarity Scores</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>An interactive lifelog retrieval system for activities of daily living understanding</article-title>
          .
          <source>In: CLEF</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bai</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xia</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>ZJUTCVR Team at ImageCLEFlifelog2019 Lifelog Moment Retrieval Task</article-title>
          .
          <source>In: CLEF2019 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.org&gt;, Lugano, Switzerland (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>