<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Future of the Past</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hiroshi Ishii</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Pillis</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pat Pataranutaporn</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucy Li</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiao Xiao</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>JB Labrune</string-name>
        </contrib>
        <aff>Tangible Media Group, MIT Media Lab</aff>
        <aff>Fluid Interfaces, MIT Media Lab</aff>
      </contrib-group>
      <abstract>
        <p>There is a growing interest in novel interaction paradigms that reevaluate our relationship with time. This position paper discusses current and future work exploring the intersection of user experience design with innovative interfaces engaging temporal and cultural perspectives. We introduce three project perspectives for discussion and reflection.</p>
      </abstract>
      <kwd-group>
        <kwd>Time</kwd>
        <kwd>Cultural heritage</kwd>
        <kwd>Human-Computer Interaction</kwd>
        <kwd>Intelligent User Interfaces</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Within contemporary human-computer interaction studies, an area of growing interest is the
development of novel interaction paradigms that reevaluate our relationship with time. In the
following position paper, we discuss current and future work exploring the intersection of
contemporary user experience design with a variety of innovative interfaces designed to engage
with temporal and cultural perspectives. Each of the projects discussed below is designed to
influence users’ perceptions of time and the self, enhancing how we interact with the past and
the present, ultimately evolving a vision for a new future experience of the past.</p>
      <p>Our contribution to the 2024 Intelligent User Interfaces workshop on ‘Past Meets Future’
introduces three categories of projects exploring human-computer interaction with the
past. First, we present our vision of TeleAbsence, which evokes the memory
of lost loved ones through the tangible materials they have left behind. Second, we discuss
interfaces that reconsider our relationships with our past and future selves. Finally, we discuss
work exploring the potential of AI embodied in virtual characters that can speak across time.
These three project categories are described in detail below:
1. TeleAbsence Interfaces for Communicating through Time;
2. Human AI Interfaces for Integrating with Past, Future and Alternative Selves;
3. AI-Generated Characters as Digital Mementos.</p>
    </sec>
    <sec id="sec-2">
      <title>2. TeleAbsence Interfaces for Communicating through Time</title>
      <sec id="sec-2-1">
        <title>2.1. TeleAbsence Interfaces</title>
        <p>
          Our vision of TeleAbsence is an interpretation of telepresence that, unlike telepresence’s focus
on synchronous communication across physical distance, instead addresses the emotional and
temporal distance caused by the loss or fading memory of loved ones. TeleAbsence interfaces
are designed to foster ‘illusory communications’, conjuring the feeling of being there with
those no longer with us. Our vision of TeleAbsence asks how we remember and how we are
remembered: a transcendent approach that integrates the human spirit with the design
potentials of human-computer interaction, evoking ideas like traces of reflection [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] and remote
time [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. We present current and future work within our TeleAbsence vision, including
crowd-sourced local histories of architectural environments rendered in mixed reality and
prompt-based storytelling technologies for memory re-enactment.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Crowd Sourcing Memories for Augmented Reality Interactions</title>
        <p>Our development of TeleAbsence has primarily explored frameworks that enable individuals
to sustain personal relationships beyond the boundaries of biological life. Current
TeleAbsence research focuses on crowd-sourcing photography, media, and ephemeral paraphernalia to
document the architectural intelligence of the MIT Media Lab. Further current work introduces
an interface for remembering the past at the intersection of photography, spatial media, and
interactive simulation spaces. Notably, our vision of TeleAbsence abstains from relying solely
on artificial intelligence and, in this way, offers a novel approach to connecting with the past.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. TeleAbsence Interfaces for Remote Time</title>
        <p>Storing the past in a simulation may enable a greater understanding of ourselves, our stories,
and our histories. Our project proposes techniques and technologies that enhance
an individual’s ability to remember the present in the future, to mourn the loss of time, and
to remember and commemorate past experiences. Taking as our source a dataset of human
narratives derived from physical records and ephemera, we aim to examine the potential of
interfaces focused on the TeleAbsence principle of ‘remote time’ by creating toy AI simulations
of architecture. We present two example scenarios that explore generative human narratives
through artificial simulations of the past. In each scenario, we are exploring ways to relocate
lost spaces and places in a person’s life.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Interacting with Past, Future and Alternative Selves</title>
      <p>Our experience of the body and our experience of the mind relate in ways which are not always
synchronous or seamless. Projects discussed below explore the multiplication of the self through
the use of artificial intelligence as well as simulated conversations between versions of the self
across time. In these projects and others we explore the implications of human/AI interaction
for past, future, and alternative versions of the self.</p>
      <sec id="sec-3-1">
        <title>3.1. Machinoia</title>
        <p>
          Recent findings have demonstrated that our minds can phenomenologically inhabit multiple
bodies over the course of an individual’s life. As an exploration of this concept,
we present Machinoia [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], a symbiotic augmentation that extends a user’s persona with two
additional heads, each a unique variation of the user’s identity: who you once were
and who you will eventually become. We used a generative adversarial network to synthesize
life-like human faces which were controlled through artificial attitude models extracted from
social media data of the wearer. The resulting wearable interface achieves a visualization of
“artificial personal intelligences” of the wearer, bringing to life past and future versions of
oneself.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Future You</title>
        <p>"Future You" is an AI platform that enables users to interact with a customized digital twin,
using a tailored Generative Pre-trained Transformer (GPT) to simulate a real time virtual version
of a user’s future self. The platform was inspired by a psychological theory that found having a
clear vision of one’s future self can influence positive long-term behavior and an overall higher
quality of life. The ‘Future You’ system generates synthetic memories for users’ future selves,
representing a potential version of their life story at age 60. This enables users to engage with
diferent virtual selves at various ages. Initial study results showed that 70% of 188 participants
reported feeling as if they had conversations with their future selves. This suggests potential
benefits for the application of AI systems such as this in reducing negative feelings and anxiety,
while fostering positive emotions and motivation.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. AI-Generated Characters as Digital Mementos</title>
      <p>Every human culture has developed practices and rituals for remembering people
of the past, be it for mourning, cultural preservation, or learning about historical events. To
remember individuals, we have explored a variety of interfaces, including digital mementos
and interactive installations.</p>
      <sec id="sec-4-1">
        <title>4.1. Living Memories</title>
        <p>
          To investigate the application of artificial intelligence to our experience of personal recollections,
we explored the concept of “Living Memories”[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]: interactive digital mementos that are created
from journals, letters and data that an individual has left behind. Like an interactive photograph,
living memories can be talked to and asked questions, making the knowledge, attitudes and
past experiences of a person easily accessible. To demonstrate our concept, we created an
AI-based system for generating living memories from any data source. In our initial study, we
implemented living memories of three historical figures: Leonardo da Vinci, Murasaki
Shikibu, and Captain Robert Scott.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Newell Simon Simulation</title>
        <p>In our TeleAbsence research, we have also explored well known public figures, such as in
the mixed reality installation Newell Simon Simulation, which questioned how we can access
and enter the mental spaces of those that we admire or know from a collective and cultural
perspective. The project Newell Simon Simulation demonstrates further possibilities of placing
TeleAbsence experiences in physically located spaces using augmented reality interfaces to
explore human/AI interaction. This TeleAbsence interface focused on two historical figures,
Allen Newell and Herbert Simon, who in the 1950s developed the first artificial intelligence
program that could “solve problems like a human”, a program named “The Logic Theorist”. In
1956, they presented their ideas at a conference at Dartmouth that has since been widely
considered “the birth of artificial intelligence”. The Newell
Simon Simulation installation incorporated both computer-generated and analog interactive
experiences in a large-scale mixed-reality environment. This room-scale interface provided an
engaging way for visitors to learn about the origins of artificial intelligence by “embodying”
the original researchers’ point of view while interacting with their research in an AI-integrated
augmented reality format.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Conclusion</title>
        <p>Looking ahead to new interfaces for interacting with the past, our exploration of
novel interaction paradigms within contemporary human-computer interaction studies has led
to works that reevaluate our relationship with time. This position paper presents three
categories of projects at the forefront of these conversations.</p>
        <p>Our TeleAbsence vision focuses on fostering illusory communications through interfaces
that address the emotional and temporal distance caused by loss, with forthcoming projects
on crowd-sourced architectural histories and interactive memory experiences, each of which
interrogates our connection to past places and lost loved ones. Projects for interacting with past,
future, and alternative selves, such as ‘Machinoia’ and "Future You," an AI platform simulating
users’ future selves, stimulate user-centered relationships across time. Finally, our
work on AI-generated characters as digital mementos includes "Living Memories," featuring
interactive digital mementos created from personal data, and the Newell Simon Simulation,
which explores historical figures in a mixed-reality environment.</p>
        <p>These projects collectively contribute to evolving a vision for a new future experience of
the past, redefining human-computer interaction through innovative interfaces and temporal
perspectives. In these projects and others, we ask many questions. What new cognitive
affordances do these interactive interfaces with time enable? How will human/AI interaction
transform cultural studies, anthropology, and our approach to looking back at the past, in a
future that is even more saturated with artificially intelligent interfaces?</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishii</surname>
          </string-name>
          , Reflections: “
          <article-title>the last farewell”: traces of physical presence</article-title>
          ,
          <source>Interactions</source>
          <volume>5</volume>
          (
          <year>1998</year>
          )
          <fpage>56</fpage>
          –ff.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Aguilera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishii</surname>
          </string-name>
          ,
          <article-title>Mirrorfugue iii: conjuring the recorded pianist</article-title>
          , in: CHI Extended Abstracts, Citeseer,
          <year>2013</year>
          , pp.
          <fpage>2891</fpage>
          -
          <lpage>2892</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pataranutaporn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Danry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Maes</surname>
          </string-name>
          ,
          <article-title>Machinoia, machine of multiple me: Integrating with past, future and alternative selves</article-title>
          ,
          <source>in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          , ACM, New York, NY, USA,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pataranutaporn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Danry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Blanchard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Thakral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ohsugi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Maes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sra</surname>
          </string-name>
          ,
          <article-title>Living memories: AI-generated characters as digital mementos</article-title>
          ,
          <source>in: Proceedings of the 28th International Conference on Intelligent User Interfaces</source>
          , ACM, New York, NY, USA,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>