<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Connectivism &amp; Interactive Narrative: towards a new form of video in online education</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Frantzis</string-name>
          <email>michael.frantzis@gold.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of London, International Programmes; Goldsmiths College, Department of Computing</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Techniques of online learning have evolved considerably, with the introduction of Learning Management Systems, Adaptive Learning Environments and Massive Open Online Courses (MOOCs). However, the video resources contained within them have changed little, remaining fixed-duration time-based media objects. This static nature of educational video contrasts with the evolving landscape of personalisation and Connectivist ideas in learning systems. We propose that existing models employed in interactive video narratives, combined with emerging techniques of crowdsourcing and automatic story generation, can enable a new form of educational video narrative which reflects the collaborative systems that surround it.</p>
      </abstract>
      <kwd-group kwd-group-type="author">
        <kwd>interactive</kwd>
        <kwd>narrative</kwd>
        <kwd>education</kwd>
        <kwd>open</kwd>
        <kwd>resource</kwd>
        <kwd>connectivist</kwd>
      </kwd-group>
      <kwd-group kwd-group-type="ACM">
        <kwd>H.5.1. Information Systems, Information Interfaces and Presentation: Hypertext/Hypermedia</kwd>
      </kwd-group>
      <conference>
        <conf-name>3rd International Workshop on Interactive Content Consumption at TVX'15</conf-name>
      </conference>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        When the technology of the motion picture was first introduced, Thomas Edison pronounced that “the motion picture is destined to revolutionize our educational system and that in a few years it will supplant...the use of textbooks”. The rise of MOOCs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the animations used in Khan Academy (https://www.khanacademy.org) and other distance learning offerings would suggest he might be right. However, whilst the techniques of online learning have evolved considerably, the educational video object itself has evolved little. It remains a fixed-duration time-based media object, which students consume passively before moving on to other interactive tasks or assessment modules.
      </p>
      <p>
        This static nature of current video offerings poses two problems. Firstly, it is at odds with the surrounding landscape of personalisation in learning systems, theories of connective learning and online collaborative learning [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], where the accent is on constructing knowledge through interactions within communities. In the modern Connectivist era [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the material of learning will need to have the capacity to evolve. Secondly, the growing impact of MOOCs brings to the fore the problem of scalability, from both a technological and an authoring perspective. The current assumption is that a video lecture is a record-once, use-many-times resource. However, the resources consumed in terms of manpower and production expertise, and the increasing demands of a technology-literate student generation, make this model difficult to sustain. A new solution is needed to allow video to evolve to meet the needs of a new educational paradigm, one capable of harnessing the knowledge contained in online communities. Our hypothesis is that expertise and technology created in the field of interactive video narratives can be used to build interactive video objects for educational purposes, which can meet this need.
      </p>
    </sec>
    <sec id="sec-2">
      <title>BACKGROUND</title>
      <p>
        There have been a number of methods by which researchers and learning technologists have attempted to allow learning materials to adapt according to user interactions. Adaptive Learning Environments (ALEs) typically hold a model containing a user's learning or cognitive style along with a domain model, and attempt to use these to enable personalisation of the types of interaction offered according to the computed learning needs of individual students [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Work in Adaptive Hypermedia has proposed a hypermedia-based approach to navigation through learning materials, encouraging learning through exploration [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Also important are dynamic user models [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which further inform adaptive learning in a given course or even in other domains.
      </p>
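      <p>As a minimal illustration of the adaptive selection such environments perform, the sketch below pairs a learner model (preferred style, per-topic mastery) with a domain model of resources and ranks candidates for one learner. All names here (Resource, LearnerModel, select_resources) and the scoring scheme are illustrative assumptions, not drawn from any of the cited systems.</p>
      <preformat><![CDATA[
```python
# Illustrative sketch of an Adaptive Learning Environment's selection step:
# match the modelled learning style and target a difficulty just above
# the learner's current mastery of the topic. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Resource:
    topic: str
    style: str            # e.g. "visual", "verbal", "interactive"
    difficulty: int       # 1 (introductory) .. 5 (advanced)

@dataclass
class LearnerModel:
    preferred_style: str
    mastery: dict = field(default_factory=dict)  # topic -> 0.0..1.0

def select_resources(learner, domain, topic, k=3):
    """Rank a domain model's resources for one learner."""
    # Map mastery 0.0..1.0 onto a target difficulty 1..5.
    target = 1 + round(learner.mastery.get(topic, 0.0) * 4)
    def score(r):
        style_bonus = 1 if r.style == learner.preferred_style else 0
        # Prefer matching style first, then closeness to target difficulty.
        return (style_bonus, -abs(r.difficulty - target))
    candidates = [r for r in domain if r.topic == topic]
    return sorted(candidates, key=score, reverse=True)[:k]
```
]]></preformat>
      <p>For a novice with a visual preference, such a ranking would surface an introductory visual resource ahead of an advanced verbal one; real ALEs of course use far richer cognitive-style and domain models than this two-field caricature.</p>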
      <p>Whilst these approaches are responsive to students' needs, their focus is on the tailoring or personalisation of predefined media (of which one type is video). They do not attempt to use a community of learners to improve the resources themselves; rather, they alter the choice, sequencing and presentation of a predetermined set of resources.</p>
    </sec>
    <sec id="sec-3">
      <title>DISCUSSION</title>
      <p>
        Techniques using interactive narrative have been employed in many educational contexts. However, these have tended to involve role-playing and game-based scenarios, where users interact with characters within simulated environments [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Yet new lines of enquiry are emerging in interactive narrative research, specifically automatic generation of interactive video narratives from shared user-generated content (UGC) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and crowdsourcing of interactive narratives [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. An approach based on theories of mind and learning, narrative generation from UGC, and crowdsourcing could offer a powerful solution to delivering interactive video for education. Interactions by users with interactive narrative video objects can be recorded and structured as a representation of knowledge. This functionality, in combination with the capacity to add user-generated content and the techniques of automatic narrative generation, would allow video to become an expanding and powerful resource, adapting and growing with the inputs of student interaction, rather than simply being consumed by students.
      </p>
      <p>Let us take as an example a student consuming a video lecture about a specific aspect of law. They might find a section where they would like considerably more detail, or they might remember a particularly good recording a fellow student made about this particular aspect of law. In an appropriate interface they would be able to add an interaction point. This annotation would contain the information that another video resource, which a fellow student felt to be valuable, is available and relates to the content at that point in the media. This information could be added to the narrative structures held on a server generating the video narrative and associated playlists, and on the next automatic generation of the lesson narrative by a subsequent user this interaction could be displayed as an option: the video object has expanded in the same way a discussion forum might have expanded.</p>
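      <p>The server-side narrative structure in this scenario could be sketched roughly as follows; this is a minimal illustration under assumed names (InteractionPoint, VideoNarrative, generate_playlist), not a description of an existing system.</p>
      <preformat><![CDATA[
```python
# Sketch: a lecture narrative that expands as students annotate it.
# Each interaction point ties a fellow student's recording to a
# timestamp; playlist generation turns points into optional branches.
from dataclasses import dataclass, field

@dataclass
class InteractionPoint:
    time: float          # seconds into the lecture video
    resource_url: str    # the related student-made recording
    author: str
    note: str = ""

@dataclass
class VideoNarrative:
    lecture_url: str
    interaction_points: list = field(default_factory=list)

    def add_interaction(self, point):
        """A student annotates the lecture: the narrative expands,
        much as a discussion forum thread would."""
        self.interaction_points.append(point)

    def generate_playlist(self):
        """On the next generation of the lesson, each interaction
        point becomes an optional branch at its timestamp."""
        branches = sorted(self.interaction_points, key=lambda p: p.time)
        return [(p.time, p.resource_url) for p in branches]
```
]]></preformat>
      <p>A subsequent user regenerating the lesson would then see each annotated timestamp offered as a branch into the linked resource, so the video object accumulates community knowledge rather than being fixed at authoring time.</p>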
      <p>The potential information derivable from interaction points does not stop there. A user-generated interaction point, and its associated user-generated content, can be rated by both a teacher and other students. Aggregating these scores could offer a means of computing the best interaction possibilities, and in parallel enable personalisation of the video according to social and educational context at runtime. For example, an interface could show only interactions from those students who are the highest rated, or those best rated by a teacher or teachers. Incorporating information gleaned from social networks could inform the narrative structures even further.</p>
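      <p>One plausible form of the aggregation described above is sketched below; the weighting scheme (teacher ratings counting double) and all function names are assumptions made for illustration only.</p>
      <preformat><![CDATA[
```python
# Sketch: aggregate teacher and student ratings of interaction points,
# then keep only the top-rated branches at runtime. The teacher weight
# is an illustrative assumption.
def aggregate_score(ratings, teacher_weight=2.0):
    """ratings: list of (value 1..5, is_teacher). Weighted mean."""
    total = weight = 0.0
    for value, is_teacher in ratings:
        w = teacher_weight if is_teacher else 1.0
        total += w * value
        weight += w
    return total / weight if weight else 0.0

def best_interactions(points, k=3):
    """points: list of (interaction_id, ratings). Return the top-k
    scored interaction points, enabling runtime personalisation of
    which branches the interface shows."""
    scored = [(pid, aggregate_score(r)) for pid, r in points]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]
```
]]></preformat>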
    </sec>
    <sec id="sec-4">
      <title>CONCLUSION</title>
      <p>
        It is already common practice to describe education as a journey, and thus as a narrative, and there are significant parallels between learning through problem solving and interactive narratives [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Emerging techniques of crowdsourcing and automatic generation of stories from UGC now offer a valuable opportunity to create a powerful link between emerging pedagogical techniques from the field of Technology Enhanced Learning and the video resources they employ.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Powell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>CETIS</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>MOOCs and open education: Implications for higher education</article-title>
          .
          <source>Cetis White Paper.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Harasim</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Learning theory and online technology</article-title>
          .
          <source>Routledge.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Siemens</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Connectivism: A learning theory for the digital age</article-title>
          .
          <source>International journal of instructional technology and distance learning</source>
          ,
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Wolf</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>iWeaver: Towards an interactive web-based adaptive learning environment to address individual learning styles</article-title>
          .
          <source>European Journal of Open and Distance Learning.</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Brusilovsky</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Methods and techniques of adaptive hypermedia</article-title>
          .
          <source>In Adaptive hypertext and hypermedia</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          ). Springer Netherlands.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Knutov</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Bra</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pechenizkiy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>AH 12 years later: a comprehensive survey of adaptive hypermedia methods and techniques</article-title>
          .
          <source>New Review of Hypermedia and Multimedia</source>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <fpage>5</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>De Bra</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Houben</surname>
            ,
            <given-names>G. J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>1999</year>
          , February).
          <article-title>AHAM: a Dexter-based reference model for adaptive hypermedia</article-title>
          .
          <source>In Proceedings of the tenth ACM Conference on Hypertext and hypermedia: returning to our diverse roots</source>
          (pp.
          <fpage>147</fpage>
          -
          <lpage>156</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Luo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lees</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>A review of interactive narrative systems and technologies: a training perspective</article-title>
          .
          <source>Simulation</source>
          ,
          <volume>0037549714566722</volume>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Zsombori</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frantzis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guimaraes</surname>
            ,
            <given-names>R. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ursu</surname>
            ,
            <given-names>M. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cesar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kegel</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Bulterman</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          (
          <year>2011</year>
          , June).
          <article-title>Automatic generation of video narratives from shared UGC</article-title>
          .
          <source>In Proceedings of the 22nd ACM conference on Hypertext and hypermedia</source>
          (pp.
          <fpage>325</fpage>
          -
          <lpage>334</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee-Urban</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          (
          <year>2012</year>
          , October).
          <article-title>Toward autonomous crowd-powered creation of interactive narratives</article-title>
          .
          <source>In 5th Workshop on Intelligent Narrative Technologies</source>
          , Palo Alto, CA (Vol.
          <volume>8</volume>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>52</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Rowe</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shores</surname>
            ,
            <given-names>L. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mott</surname>
            ,
            <given-names>B. W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lester</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Integrating learning, problem solving, and engagement in narrative-centered learning environments</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          ,
          <volume>21</volume>
          (
          <issue>1</issue>
          ),
          <fpage>115</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>