<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Predicting the Topics to Review in Preparation of Your Next Meeting</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Informatics, University of Lugano (USI)</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Memory augmentation is the process of providing human memory with information that facilitates and complements the recall of an event in a person's past. In this paper, we propose a novel time-series method for predicting the topics that one should review in preparation of one's next meeting. This can be seen as a way of augmenting human memory. Since the number of topics discussed in different meetings of a typical person might be very large, there is a need to detect the topics that are more likely to continue in subsequent meetings, in order to focus one's attention just on them. Our experimental results on real-world data demonstrate that our proposed method significantly outperforms the state-of-the-art Hidden Markov Model (HMM) baseline.</p>
      </abstract>
      <kwd-group>
        <kwd>Human Memory Augmentation</kwd>
        <kwd>Topic Prediction</kwd>
        <kwd>Workplace Meetings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Human memory is a critically important cognitive ability that we constantly rely
on. However, sometimes, due to the volume and intensity of information we
are exposed to, a lack of adequate attention, or aging, this critical
cognitive ability fails to recall important events from our past. Augmenting
human memory in a workplace environment can effectively serve as a solution
for preventing such failures to recall past events [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        In this paper, we focus on tracking one's meetings for the purpose of
memory augmentation. We use a real-world dataset of weekly meetings of seven
groups of people. In this dataset, we recorded real workplace meetings of each
group (where a group consisted of two individuals) over the span of an entire
month. We compare our novel method against the HMM baseline for predicting
which topics of a conversation will be continued from previous meetings.
The psychology of human memory has comprehensively studied how human
memory recalls or forgets events. One important work in this domain is the
forgetting curve by Ebbinghaus. The forgetting curve (an
exponentially decreasing curve) shows that a human forgets, on average, about 77% of
the details of what one has learned after six days. This motivated our goal of
augmenting human memory to assist one in recalling more details of one's past
events. Additionally, our study is motivated by a memory augmentation tool that
we have already developed and deployed in the context of a project (http://recall-fet.eu/) for aiding
people's memory in their workplace meetings [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This system takes as input
transcriptions of audio recordings of one's conversations and images taken
automatically by one's wearable camera; both media types are time-synchronized.
The tool then processes the data by modeling the topics of the transcribed
conversations and connecting the topics with their corresponding images. We use
Latent Dirichlet Allocation (LDA) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] as our topic modeling approach.
      </p>
      <p>
        Furthermore, the benefit of this research is endorsed by relevant studies
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which showed that replaying people's lives back to them has a significant
effect in helping them better recall and remember past events.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Methodology</title>
      <p>In this section, we briefly present our new method as well as the baseline method
for predicting the topics to be reviewed in preparation of one's next meeting.</p>
      <p>Our method.
Our method is a hybrid that combines two effects that we refer to as
recency and establishment.</p>
      <p>Recency. The recency effect modifies the ranking of topics by assigning
higher weights to topics of the most recent previous meetings.</p>
      <p>Therefore, this effect assigns a higher weight to a word that has occurred in
the most recent meeting. As a result, a word vector based on the recency effect
is produced.</p>
      <p>Establishment. According to the establishment effect, the assigned
probability scores are higher for words that have persisted over time.</p>
      <p>Therefore, the word vector constructed by averaging all topics in all n
previous meetings based on the establishment effect will be a representation of the
average occurrence of each word, where the most established (i.e., persisting in
occurrence) words are assigned higher weights.</p>
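      <p>The two word vectors above can be sketched as follows. This is a minimal illustrative sketch, not our implementation: meetings are assumed to be represented as word-probability vectors, and the exponential decay used for the recency weighting is an assumed scheme that the text does not prescribe.</p>

```python
import numpy as np

def recency_vector(meetings, decay=0.5):
    """Recency effect: weight each previous meeting's word vector so that
    the most recent meeting dominates. Exponential decay is an assumed
    weighting scheme for illustration.

    meetings: list of 1-D word-probability vectors, oldest first.
    """
    meetings = np.asarray(meetings, dtype=float)
    n = len(meetings)
    weights = decay ** np.arange(n - 1, -1, -1)  # oldest meeting decays most
    weights /= weights.sum()
    return weights @ meetings

def establishment_vector(meetings):
    """Establishment effect: average word occurrence over all n previous
    meetings, so words that persist across meetings keep a high weight."""
    return np.mean(np.asarray(meetings, dtype=float), axis=0)
```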
      <p>Combining Recency and Establishment. We combine the two effects
using a dynamic method that weights each effect in an evolutionary process.
This method integrates the scores from the recency and the establishment effects
using linear interpolation for a meeting at time slice t such that:</p>
      <p>Score_t = w_{e,t} * Score_establishment + w_{r,t} * Score_recency   (1)</p>
      <sec id="sec-2-2">
        <p>where Score_establishment and Score_recency are the scores computed by the
establishment and the recency effects, respectively, and w_{e,t} and w_{r,t} are the
establishment and recency weights computed in a measurement process.</p>
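        <p>Equation (1) amounts to a weighted sum of the two score vectors. The sketch below illustrates it; the weights w_{e,t} and w_{r,t} are taken as given, since in the full method they come from the topic-linking measurement process.</p>

```python
import numpy as np

def combined_score(score_establishment, score_recency, w_e_t, w_r_t):
    """Eq. (1): linear interpolation of the two effects for time slice t.
    score_establishment / score_recency: per-topic (or per-word) score
    vectors from the two effects; w_e_t / w_r_t: their weights at time t."""
    return (w_e_t * np.asarray(score_establishment, dtype=float)
            + w_r_t * np.asarray(score_recency, dtype=float))
```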
        <p>
          The measurement process consists of using a topic linking module. In our
previous work [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], we introduced a topic model for tracking the evolution of
intermittent topics over time. We use this system to link similar topics together
over time and, based on that, compute the weights of each effect. Finally, in order
to determine the probability of a topic continuing in a future meeting, we
use an energy function.
        </p>
        <p>
          Hidden Markov Model Baseline.
HMMs [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] have been extensively used for modeling multivariate time series and
predicting next states. In [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], a number of papers describing domains where
HMMs hold the state-of-the-art performance are listed. Thus, in this paper
we use an HMM as the baseline for our benchmark. The HMM we implement is
Gaussian. To determine the number of HMM output states, we use the Bayesian
Information Criterion to find the optimal number of output states given the data
of each set of four meetings. Finally, after training the HMM with the
topics of the first n meetings, we measure, under the trained model, the likelihood
of each topic whose continuation we want to predict. The result
is a likelihood score per topic. We normalize the likelihood scores by dividing
each of them by the maximum likelihood score. Finally, we compute the optimal
threshold using n-fold cross-validation.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Experimental Setup</title>
      <p>Dataset Description.
Our dataset consists of recordings of workplace meetings of 7 groups of people.
Each group consisted of two members. For each group, the audio of 4 consecutive
meetings over four weeks was recorded. Our dataset is real-world and captured
in the wild, meaning that the participants were asked to simply have
their usual meetings, with no regulations imposed.</p>
      <p>
        LDA topics were extracted from the transcriptions of the meetings. Since the
number of topics discussed in two different meetings might vary, it is important
to estimate the number of topics for each meeting. For this purpose, similar to
the method proposed in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we went through a model selection process.
      </p>
      <p>The extracted LDA topics from the first 3 meetings of each group were then
manually labeled based on whether or not they continued in the 4th meeting.
The resulting number of labeled topics to be predicted across all 7 groups is 205.
Our aim is to correctly predict the assigned labels.</p>
      <p>Evaluation.
In our first experiment, we compute precision and recall values of our proposed
method and compare them with those of the HMM baseline. Table 1 shows precision values
at different levels of recall for all decision thresholds. The values are obtained
from interpolated precision-recall curves. The table shows that our method
outperforms the HMM baseline in terms of precision at all levels of recall.</p>
      <p>Furthermore, we computed the Mean Average Precision (MAP) of our method,
which was 69.73%, against that of the HMM, which was 55.69%. Additionally, we
computed the F1 measures using 7-fold cross-validation for each of the seven
groups. Through this experiment, we can observe that our method significantly
outperforms the HMM baseline, with an F1 measure of 76.87% versus 58.43%.
We confirmed the significance of the performance difference using a paired t-test.</p>
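      <p>The evaluation measures reported above can be computed as in the following minimal sketch, assuming topics are ranked by their predicted scores and carry binary continuation labels (MAP is then the mean of the per-group average precision).</p>

```python
import numpy as np

def average_precision(ranked_labels):
    """Average precision for one group: the mean of precision@k over the
    ranks k at which a continuing (relevant) topic appears."""
    labels = np.asarray(ranked_labels, dtype=float)
    if labels.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(labels) / (np.arange(len(labels)) + 1)
    return float((precision_at_k * labels).sum() / labels.sum())

def interpolated_precision(precision, recall, recall_levels):
    """Interpolated precision: at each recall level r, report the maximum
    precision achieved at any recall >= r."""
    p, r = np.asarray(precision), np.asarray(recall)
    return [float(p[r >= level].max()) if (r >= level).any() else 0.0
            for level in recall_levels]
```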
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this paper, we introduced the problem of predicting topics to be reviewed in
preparation of one's next meeting to augment one's memory. For this purpose,
we proposed a novel method and compared it against an HMM baseline.</p>
      <p>The developed method could be implemented as part of a proactive memory
augmentation system that aids people in their everyday lives.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bahrainian</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Crestani</surname>
          </string-name>
          .
          <article-title>Cued retrieval of personal memories of social interactions</article-title>
          .
          <source>In Proc. of the First Workshop on Lifelogging Tools and Applications</source>
          , LTA '16
          , pages
          <fpage>3</fpage>
          -
          <lpage>12</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bahrainian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mele</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Crestani</surname>
          </string-name>
          .
          <article-title>Modeling discrete dynamic topics</article-title>
          .
          <source>In Proc. of the Symposium on Applied Computing, SAC '17</source>
          , pages
          <fpage>858</fpage>
          -
          <lpage>865</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>A.</given-names>
            <surname>Bexheti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Niforatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bahrainian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Langheinrich</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Crestani</surname>
          </string-name>
          .
          <article-title>Measuring the effect of cued recall on work meetings</article-title>
          .
          <source>In Proc. of the ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct</source>
          , pages
          <fpage>1020</fpage>
          -
          <lpage>1026</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Blei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Ng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Jordan</surname>
          </string-name>
          .
          <article-title>Latent dirichlet allocation</article-title>
          .
          <source>J. Mach. Learn. Res.</source>
          , pages
          <fpage>993</fpage>
          -
          <lpage>1022</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Griffiths</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Steyvers</surname>
          </string-name>
          .
          <article-title>Finding scientific topics</article-title>
          .
          <source>Proc. of the National Academy of Sciences</source>
          , pages
          <fpage>5228</fpage>
          -
          <lpage>5235</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>M.</given-names>
            <surname>Pietrzykowski</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Salabun</surname>
          </string-name>
          .
          <article-title>Applications of hidden Markov model: state-of-the-art</article-title>
          . In
          <source>International Journal of Computer Technology and Applications</source>
          , pages
          <fpage>1384</fpage>
          -
          <lpage>1391</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Rabiner</surname>
          </string-name>
          .
          <article-title>A tutorial on hidden Markov models and selected applications in speech recognition</article-title>
          .
          <source>In Proc. of the IEEE</source>
          , pages
          <fpage>257</fpage>
          -
          <lpage>286</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>A.</given-names>
            <surname>Sellen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fogg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aitken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hodges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rother</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Wood</surname>
          </string-name>
          .
          <article-title>Do lifelogging technologies support memory for the past</article-title>
          ?
          <source>In Proc. of the Conference on Human Factors in Computing Systems</source>
          , pages
          <fpage>81</fpage>
          -
          <lpage>90</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>