<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Interactive Lifelog Retrieval System for Activities of Daily Living Understanding</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Liting Zhou</string-name>
          <email>zhou.liting2@mail.dcu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Piras</string-name>
          <email>luca.piras@diee.unica.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Riegler</string-name>
          <email>michael@simula.no</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathias Lux</string-name>
          <email>mlux@itec.aau.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Duc-Tien Dang-Nguyen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cathal Gurrin</string-name>
          <email>cathal.gurrin@dcu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dublin City University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ITEC, Klagenfurt University</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Pluribus One &amp; University of Cagliari</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Simula Research Laboratory</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes the participation of the Organizer Team in the ImageCLEFlifelog 2018 Activities of Daily Living Understanding and Lifelog Moment Retrieval tasks. We show how to exploit LIFER, an interactive lifelog search engine, to solve the two tasks. We propose both baseline approaches, which aim to provide a reference system for other participants, and human-in-the-loop approaches, which improve upon the baseline results.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        A new trend in multimedia research is the generation of personalized archives
that store rich details of one's life experience using various modalities, such as
videos, images, text or sensor data. These logs are commonly referred to as
lifelogs [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Lifelogs typically contain details of one's life experience, such as the
food consumed, the places visited and much more. Such rich archives hold a lot of
potential, not just for research but also for the users themselves.
      </p>
      <p>
        Lifelogging poses many challenging research questions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], such as how to
make this rich data searchable, how to extract meaningful information and how
to summarize the data. To address these questions, a number of initiatives have been
organised in the last few years, for example NTCIR-13 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and ImageCLEFlifelog2018 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
at ImageCLEF 2018 [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], whose goal is to bring together researchers from different
domains to solve the challenges in this novel research field.
      </p>
      <p>
        In this paper we describe our solution to the ImageCLEF 2018 [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] Lifelog
Task [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Our approach exploits the fact that lifelogs are usually
chronologically organized; hence, moments that belong to the same activity or event
are likely to be very similar. By performing similarity or near-duplicate
detection, we can group moments based on time and concepts. Tackling the problem
from this angle transforms the image retrieval challenge into an image segment
retrieval challenge.
      </p>
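      <p>To make this grouping step concrete, the following minimal sketch (illustrative only, not the system's actual code; the field names and thresholds are assumptions) segments a time-ordered image stream into candidate moments by splitting on large time gaps or low concept overlap:</p>
      <preformat>
# Hypothetical sketch of time- and concept-based moment segmentation.
from dataclasses import dataclass, field

@dataclass
class LogImage:
    minute: int                    # capture time, in minutes since the log start
    concepts: set = field(default_factory=set)

def segment_moments(images, max_gap=5, min_overlap=0.5):
    """Split a chronologically sorted image stream wherever the time gap is
    large or the concept overlap (Jaccard) with the previous image is small."""
    moments, current = [], []
    for img in sorted(images, key=lambda i: i.minute):
        if current:
            prev = current[-1]
            union = img.concepts.union(prev.concepts)
            inter = img.concepts.intersection(prev.concepts)
            overlap = len(inter) / len(union) if union else 0.0
            if img.minute - prev.minute > max_gap or overlap &lt; min_overlap:
                moments.append(current)
                current = []
        current.append(img)
    if current:
        moments.append(current)
    return moments
      </preformat>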
      <p>
        Utilizing time and concepts comes with the advantage that boundaries
between events are easily identifiable, which saves both processing time and
computation power [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In addition, we remove images that do not contain
much information and would rather add noise to the analysis (blurry images, images of a single
object, etc.). In our past work, these were estimated to make up around 40% of all
images [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Images retrieved by our method are then clustered for use in other
tasks, for example summarization, classification or search [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
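      <p>As a hypothetical example of such a pre-filtering step (the threshold value is an assumption, not taken from the paper), blurry images can be discarded with the common variance-of-Laplacian sharpness measure from OpenCV:</p>
      <preformat>
# Illustrative noise filter: drop blurry, low-information images.
import cv2

def is_informative(path, sharpness_threshold=100.0):
    """Return False for images too blurry to be useful for retrieval."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:          # unreadable file: treat as noise
        return False
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= sharpness_threshold

# kept = [p for p in image_paths if is_informative(p)]
      </preformat>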
      <p>The paper is organized as follows: Firstly, we provide an overview of related
work in the field. After that, we give a detailed description of LIFER, the
interactive system that our approach is based on, followed by a methodology
for how to exploit the system. We then show the results obtained in the official
competition and, finally, we discuss the solutions and the results, and conclude the
paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        In general, there exists an ever-increasing body of research aimed at solving
different aspects of the overall lifelogging information access challenge, ranging from
computer vision [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] to multidimensional visualization of lifelogging data [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        In the context of our approach, several related works exist. A common practice
for image segmentation based on time data is heuristic splitting [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Another
frequently used technique is based on applying thresholds to content-based distances
between images [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Apart from these supervised approaches,
unsupervised methods also exist [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        As in other fields of research, information retrieval currently relies heavily
on deep learning approaches [
        <xref ref-type="bibr" rid="ref14 ref19">14, 19</xref>
        ]. Current work focuses on retrieval results
that represent relevant and diverse samples of the archives [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. This trend is
also emerging in lifelogging: Fan et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] propose a deep learning approach to
perform image captioning and summarization for lifelogging datasets. Nevertheless,
deep learning within lifelogging still struggles with some challenges, one of them
being that multi-modal deep learning is not yet well researched [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Therefore,
approaches that rely on traditional methods still perform better, for example [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
where the authors rely on relevance feedback to retrieve relevant and diverse
results while keeping the number of iterations low.
      </p>
    </sec>
    <sec id="sec-3">
      <title>LIFER: An Interactive Lifelog Search Engine</title>
      <p>
        Our proposed solution for the two tasks is to exploit the baseline system
LIFER [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], an interactive engine for lifelog retrieval. LIFER
improves upon an existing baseline search engine [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], which was developed to
provide a starting point for researchers engaged in collaborative benchmarking
exercises such as NTCIR [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and the ImageCLEFlifelog2018 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] tasks. It was
also used in the LSC@ICMR 2018 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] competition, where it achieved a reasonable result.
In this section, we introduce how we used the LIFER system to address the
two tasks and list the approaches we used for retrieving image information.
      </p>
      <p>
        LIFER uses the core search engine of [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], which offers a platform
for searching for images that match a set of criteria. The retrieved
moments (each represented by one image) are then presented to the
user in temporal order. Since the collection is small, this temporally ordered list is
unlikely to be too large for fast human browsing and selection. This interactive
system helps the user to retrieve results in a faster and more reliable way, which helps to
solve both tasks. The detailed operation is described in Section 4.
      </p>
      <p>LIFER is built on the following six sources of information, extracted
from the dataset offered by ImageCLEF (a sketch of an indexed record is given after the list):</p>
      <list list-type="bullet">
        <list-item>
          <p>Time. The most basic unit of data in the dataset; time gives us the
possibility of including further semantic concepts, such as day of the week,
weekday/weekend, time of day, etc. In the LIFER system, we consider the
unit of time to be the minute, i.e., each image is attached to a minute. These times
are extracted (and linked to the images) directly from the provided data.</p>
        </list-item>
        <list-item>
          <p>Locations. Semantic locations were provided in the dataset, giving
localised names for all locations visited, for example `The Helix', `Dunnes
Stores', `Dublin City University' and so on.</p>
        </list-item>
        <list-item>
          <p>Visual Concepts. Visual concepts extracted by the Microsoft API [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] were
provided, accompanying each image. These visual concepts describe the content
of the lifelog images included in the dataset. Each image has one (or more)
concepts identified and tagged, and the concepts (in text form) were indexed
in our lifelog retrieval system.</p>
        </list-item>
        <list-item>
          <p>User Activities. The physical activities of the user (e.g. walking, sitting,
running, etc.) were indexed as additional search terms.</p>
        </list-item>
        <list-item>
          <p>Biometrics. The biometrics of the user were also indexed as semantic labels.
These included the Galvanic Skin Response (stressed/excited, relaxed), which
can be considered a correlate of stress or excitement levels, and the
level of physical activity (exertion/resting) as identified from the heart rate.</p>
        </list-item>
        <list-item>
          <p>Music. A log of the music listening history of the lifelogger was included in the
collection, and we considered that it could be an important aspect of some
topics. The song name and song artist are the two fields that can be used to
search for results.</p>
        </list-item>
      </list>
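      <p>As a minimal sketch (the field names and values are purely illustrative, not the actual index schema), a single indexed minute combining the six sources might look as follows:</p>
      <preformat>
# Hypothetical flattened record for one image-minute in the index.
record = {
    "minute": "2016-08-15T09:41",              # basic unit of time: one minute
    "image": "u1_2016-08-15_0941.jpg",
    "time_facets": ["monday", "weekday", "morning"],
    "location": "Dublin City University",      # semantic location name
    "concepts": ["laptop", "desk", "indoor"],  # visual concepts from the Microsoft API [8]
    "activity": "sitting",
    "biometrics": ["relaxed", "resting"],      # GSR state, heart-rate-derived exertion
    "music": {"song": None, "artist": None},   # filled when a listening event overlaps
}
      </preformat>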
      <p>These six sources of information are instantiated in the user interface as
facets of a user query, as shown in Figure 1.</p>
      <p>The interface of LIFER is shown in Figure 2. The upper section of the
interface is the query-panel, in which the faceted queries are created. Below that
is the main part of the interface, where the selected lifelog images are
displayed in temporal sequence.</p>
      <p>In the query-panel, the search facets are shown. The facets are directly related
to the indexed data (see the six sources of information above). Upon submission
of a faceted query, the system returns a temporally organised listing of potentially
relevant images. In the first version of LIFER, the query facets are combined
in an AND boolean manner. This can be changed on a per-topic basis, but does
not form part of the interface at present.</p>
      <p>[Figure 1: overview of the LIFER pipeline. The raw data yields the six facets
(visual concept, music, activity, location, time, biometrics), which are indexed as
feature vectors in a database; the user submits a set of criteria through the
API/interface and receives matching images.]</p>
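      <p>A minimal sketch of this AND-style faceted search, over records shaped like the indexed-record sketch above (all names are illustrative assumptions), could look as follows; note that results come back in chronological order, which is also LIFER's default ranking:</p>
      <preformat>
# Hypothetical faceted query: every requested facet value must match (boolean AND).
def matches(record, query):
    for facet, wanted in query.items():
        value = record.get(facet)
        haystack = value if isinstance(value, (list, set)) else [value]
        if wanted not in haystack:
            return False
    return True

def search(records, query):
    """Return matching records in chronological (default) order."""
    hits = [r for r in records if matches(r, query)]
    return sorted(hits, key=lambda r: r["minute"])

# Example: all moments sitting at Dublin City University in the morning.
# search(records, {"location": "Dublin City University",
#                  "activity": "sitting",
#                  "time_facets": "morning"})
      </preformat>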
      <sec id="sec-2-1">
        <title>Search by main terms and Human ltering</title>
      </sec>
      <sec id="sec-2-2">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-3">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-4">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-5">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-6">
        <title>Notes</title>
        <p>fully automatic without ranking
fully automatic with ranking</p>
      </sec>
      <sec id="sec-2-7">
        <title>Search by main terms and Human ltering</title>
      </sec>
      <sec id="sec-2-8">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-9">
        <title>Search by relevant terms and Human ltering</title>
      </sec>
      <sec id="sec-2-10">
        <title>RunID</title>
      </sec>
      <sec id="sec-2-11">
        <title>ADLT Run 1</title>
      </sec>
      <sec id="sec-2-12">
        <title>Name</title>
      </sec>
      <sec id="sec-2-13">
        <title>Baseline</title>
      </sec>
      <sec id="sec-2-14">
        <title>ADLT Run 2* Baseline</title>
      </sec>
      <sec id="sec-2-15">
        <title>ADLT Run 3* Baseline</title>
      </sec>
      <sec id="sec-2-16">
        <title>ADLT Run 4* Baseline</title>
      </sec>
      <sec id="sec-2-17">
        <title>ADLT Run 5* Baseline</title>
      </sec>
      <sec id="sec-2-18">
        <title>RunID</title>
      </sec>
      <sec id="sec-2-19">
        <title>Name</title>
      </sec>
      <sec id="sec-2-20">
        <title>LMRT Run 1 Baseline</title>
      </sec>
      <sec id="sec-2-21">
        <title>LMRT Run 2 Baseline</title>
      </sec>
      <sec id="sec-2-22">
        <title>LMRT Run 3* Baseline</title>
      </sec>
      <sec id="sec-2-23">
        <title>LMRT Run 4* Baseline</title>
      </sec>
      <sec id="sec-2-24">
        <title>LMRT Run 5* Baseline * These runs were submitted after the competition.</title>
      <p>The temporally organised listing of relevant images is displayed in the lower
part of the screen (the result-display panel). Each relevant image is listed with
overview metadata as a form of context. This metadata is configurable to
display various sources of information, as required. Figure 2 shows a basic form
of such metadata.</p>
    </sec>
    <sec id="sec-4">
      <title>Exploiting LIFER for the ImageCLEFlifelog2018 Tasks</title>
      <p>As mentioned, we exploited LIFER for the ImageCLEFlifelog2018 LMRT and ADLT
tasks. Firstly, based on the topic description, the search criteria are determined,
either automatically, by considering all the words in the queried topic as concepts,
or alternatively by allowing the concepts to be determined by the user. Secondly, we
improved the interface of LIFER to allow the user to manually select multiple relevant
images for the submission, or to take all of them as relevant.</p>
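      <p>The two steps can be sketched as follows (a naive illustration under our own assumptions: the stop-word list and the console prompt stand in for the real topic parsing and interface):</p>
      <preformat>
# Step 1: automatic variant, treating every non-stop-word of the topic as a concept.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "at", "find", "moments", "user"}

def topic_to_concepts(topic_description):
    words = topic_description.lower().replace(",", " ").split()
    return [w for w in words if w not in STOP_WORDS]

# Step 2: human-in-the-loop variant, letting an assessor confirm each candidate.
def human_filter(candidates):
    kept = []
    for image in candidates:
        answer = input(f"Keep {image}? [y/n] ")
        if answer.strip().lower().startswith("y"):
            kept.append(image)
    return kept
      </preformat>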
      <p>In terms of ranking, we use the default option of LIFER:
chronological order.</p>
      <p>In the next section, we present the official results on the test set obtained by
exploiting the LIFER system.</p>
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>We submitted 10 runs (5 for each task) in total, summarized in Tables 1 and 2.
For ADLT, the best result was achieved by searching with the main query terms and using
a human in the loop to filter out the irrelevant results (ADLT Run 1); the remaining
runs were created using the relevant query terms and human filtering. For LMRT, the
first two runs are automatic, while the remaining three adopted the same approaches
used in ADLT. Tables 3 and 4 show the search criteria for the best run of each task.</p>
      <p>[Tables 3 and 4, listing the search criteria per topic (T001, T002, T003) for the
best run of each task, could not be recovered in this version.]</p>
    </sec>
    <sec id="sec-3">
      <title>Discussions and Conclusions</title>
      <p>In this paper we introduced different baseline approaches, ranging from fully automatic
to fully manual, by exploiting LIFER, an interactive lifelog retrieval
system, to tackle the ImageCLEFlifelog 2018 benchmark as a participant in the Lifelog
Moment Retrieval and Activities of Daily Living Understanding tasks. These
approaches, which require different levels of involvement from the users, exploit
only the information provided by the organizers along with the collection of
images, e.g., the descriptions of the semantic locations and the physical activities.
With the human in the loop, we obtained the highest score for the ADLT task (ADLT
Run 1; please note that we are not ranked, since we are the task organizers).
However, without the manual input, the results can be close to random (as in
the result of LMRT Run 1). This shows that the key challenge is how to translate
the query into the search criteria, which requires further study.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bolanos</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mestre</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talavera</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nieto</surname>
            ,
            <given-names>X.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Radeva</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Senseseer mobilecloud-based lifelogging framework. Visual summary of egocentric photostreams by representative</article-title>
          keyframes pp.
          <volume>1</volume>
          {
          <issue>6</issue>
          (
          <year>July 2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lavelle</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Doherty</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>G.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          :
          <article-title>Using bluetooth and gps metadata to measure event similarity in sensecam images</article-title>
          .
          <source>5th International Conference on Intelligent Multimedia and Ambient Intelligence (July</source>
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacinto</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boato</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Natale</surname>
            ,
            <given-names>F.G.</given-names>
          </string-name>
          :
          <article-title>Multimodal retrieval with diversi cation and relevance feedback for tourist attraction images</article-title>
          .
          <source>ACM Transactions on Multimedia Computing</source>
          , Communications, and
          <string-name>
            <surname>Applications</surname>
          </string-name>
          (
          <year>2017</year>
          ), accepted
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          : Overview of ImageCLEFlifelog 2018:
          <article-title>Daily Living Understanding and Lifelog Moment Retrieval</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , CEURWS.org &lt;http://ceur-ws.
          <source>org&gt;</source>
          , Avignon,
          <source>France (September 10-14</source>
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Challenges and opportunities within personal life archives</article-title>
          .
          <source>In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval</source>
          . pp.
          <volume>335</volume>
          {
          <fpage>343</fpage>
          . ICMR '18,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2018</year>
          ), http://doi.acm.
          <source>org/10</source>
          .1145/3206025.3206040
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Doherty</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          :
          <article-title>Automatically segmenting lifelog data into events</article-title>
          .
          <source>9th International Workshop on Image Analysis for Multimedia Interactive Services (30 June</source>
          <year>2008</year>
          ), http://doras.dcu.ie/4651/
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Fan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crandall</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          : Deepdiary:
          <article-title>Lifelogging image captioning and summarization</article-title>
          .
          <source>Journal of Visual Communication and Image Representation</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iandola</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srivastava</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dollar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            , X., Mitchell,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Platt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zitnick</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zweig</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>From captions to visual concepts and back</article-title>
          . IEEE Institute of Electrical and Electronics
          <string-name>
            <surname>Engineers</surname>
          </string-name>
          (
          <year>June 2015</year>
          ), https://www.microsoft.com/en-us/research/publication/fromcaptions-to
          <article-title>-visual-concepts-and-back/</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hopfgartner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albatal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , DangNguyen, D.T.:
          <source>Overview of NTCIR-13 Lifelog-2 Task. In: Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schoe</surname>
            <given-names>mann</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Joho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Dang-Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.T.</given-names>
            ,
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Piras</surname>
          </string-name>
          ,
          <string-name>
            <surname>L.</surname>
          </string-name>
          :
          <source>Lsc '18: Proceedings of the 2018 acm workshop on the lifelog search challenge. ACM</source>
          , New York, NY, USA (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>O</given-names>
            <surname>'Hare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.J.F.</given-names>
            ,
            <surname>O'Connor</surname>
          </string-name>
          ,
          <string-name>
            <surname>N.:</surname>
          </string-name>
          <article-title>An examination of a large visual lifelog</article-title>
          . In: Li,
          <string-name>
            <surname>H.</surname>
          </string-name>
          , Liu,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Ma</surname>
          </string-name>
          , W.Y.,
          <string-name>
            <surname>Sakai</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>K.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>G</given-names>
          </string-name>
          . (eds.) Information Retrieval Technology. pp.
          <volume>537</volume>
          {
          <fpage>542</fpage>
          . Springer Berlin Heidelberg, Berlin, Heidelberg (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Doherty</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          : Lifelogging:
          <article-title>Personal big data</article-title>
          .
          <source>Foundations and Trends in Information Retrieval</source>
          <volume>8</volume>
          (
          <issue>1</issue>
          ),
          <volume>1</volume>
          {
          <fpage>125</fpage>
          (
          <year>2014</year>
          ), http://dx.doi.org/10.1561/1500000033
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>J.J.:</given-names>
          </string-name>
          <article-title>Visualizing multidimensional lifelogging data: A case study on mymoviehistory project</article-title>
          .
          <source>Cybernetics</source>
          and Systems pp.
          <volume>1</volume>
          {
          <issue>15</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Large-scale semantic web image retrieval using bimodal deep learning techniques</article-title>
          .
          <source>Information Sciences</source>
          <volume>430</volume>
          ,
          <volume>331</volume>
          {
          <fpage>348</fpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lupu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rohm</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>G</surname>
          </string-name>
          ^nsca,
          <string-name>
            <surname>A.L.</surname>
          </string-name>
          , Muller, H.:
          <article-title>Datasets column: diversity and credibility for social images and image retrieval</article-title>
          .
          <source>ACM SIGMultimedia Records</source>
          <volume>9</volume>
          (
          <issue>3</issue>
          ),
          <volume>7</volume>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Villegas</surname>
          </string-name>
          , M.,
          <string-name>
            <surname>de Herrera</surname>
            ,
            <given-names>A.G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickho</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrearczyk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cid</surname>
            ,
            <given-names>Y.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasan</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ling</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farri</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lungren</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          : Overview of ImageCLEF 2018:
          <article-title>Challenges, datasets and evaluation. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction</article-title>
          .
          <source>Proceedings of the Ninth International Conference of the CLEF Association (CLEF</source>
          <year>2018</year>
          ),
          <source>LNCS Lecture Notes in Computer Science</source>
          , Springer, Avignon,
          <source>France (September 10-14</source>
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Ngiam</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khosla</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nam</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          :
          <article-title>Multimodal deep learning</article-title>
          .
          <source>In: Proceedings of the 28th international conference on machine learning (ICML-11)</source>
          . pp.
          <volume>689</volume>
          {
          <issue>696</issue>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Peitgen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , Jurgens, H.,
          <string-name>
            <surname>Saupe</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Chaos and fractals - new frontiers of science (2</article-title>
          . ed.). Springer (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Saritha</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paul</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>P.G.</given-names>
          </string-name>
          :
          <article-title>Content based image retrieval using deep learning process</article-title>
          .
          <source>Cluster</source>
          Computing pp.
          <volume>1</volume>
          {
          <issue>14</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smeaton</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>Computer vision for lifelogging: Characterizing everyday activities based on visual semantics</article-title>
          . In: Computer Vision for Assistive Healthcare, pp.
          <volume>249</volume>
          {
          <fpage>282</fpage>
          .
          <string-name>
            <surname>Elsevier</surname>
          </string-name>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A baseline search engine for personal life archives</article-title>
          .
          <source>In: Proceedings of the 2nd Workshop on Lifelogging Tools and Applications</source>
          . pp.
          <volume>21</volume>
          {
          <fpage>24</fpage>
          . LTA '17,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2017</year>
          ), http://doi.acm.
          <source>org/10</source>
          .1145/3133202.3133206
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinbarji</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <string-name>
            <surname>Lifer</surname>
          </string-name>
          :
          <article-title>An interactive lifelog retrieval system</article-title>
          .
          <source>In: LSC@ICMR</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>