<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Recommendation of Online Learning Videos Based on Concept Maps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sungbeom Lee</string-name>
          <email>sungbeomlee@donga.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bumku Choi</string-name>
          <email>choirnony00@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jungkyu Han</string-name>
          <email>jkhan@dau.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sejin Chun</string-name>
          <email>sjchun@dau.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Engineering, DongA University</institution>
          ,
          <addr-line>Busan</addr-line>
          ,
          <country country="KR">South Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Workshop Proce dings</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>The global proliferation of COVID-19 has catalyzed a substantial transition from traditional educational settings to online learning environments. This shift has precipitated exponential growth in online educational content on video-sharing platforms like YouTube. However, this abundance of content often leaves learners adrift among a massive number of videos, and many struggle to identify learning videos that align with their learning objectives. To cope with this challenge, we present CONREC, a prototype application for online learners that recommends the next learning videos based on a concept map, a network of the knowledge the user has learned. CONREC features an adaptive recommendation that re-ranks candidate learning videos based on combined scores over the concepts the user has learned and has not yet learned. We implemented a general-purpose interface that allows learners to continuously watch new learning videos and browse the current state of their concept maps in a visual form.</p>
      </abstract>
      <kwd-group>
        <kwd>Concept maps</kwd>
        <kwd>Knowledge graph</kwd>
        <kwd>Learning video recommendation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>The global spread of COVID-19 has led many students to</title>
        <p>ucation space, ranging from higher education to personal
guidance. This has opened new possibilities for Massive</p>
      </sec>
      <sec id="sec-1-2">
        <title>Open Online Courses (MOOCs)[1, 2] and Open Educa</title>
        <p>
          tional Resources (OER)[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], innovative educational
media that replace the traditional education system[
          <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
          ].
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>Many learners choose remote learning processes over</title>
        <p>cause of the excellent advantages of online education
such as flexibility and convenience.</p>
        <p>
          Recently, a video-sharing platform like YouTube has
the potential to provide an elastic and liberal medium for
knowledge sharing and instruction[
          <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
          ], where anyone
who wants to teach can teach and anyone who wants to
learn can access the contents at no or little cost[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. the
video-sharing platforms ofer numerous ways in which
paced and socially engaging[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Many learning videos
in the video-sharing platform tend to possess a good
range of diversity in terms of topics, formats, and scopes
due to the open and social nature of the platform.
        </p>
      </sec>
      <sec id="sec-1-4">
        <title>However, despite these advantages, learners often ifnd themselves lost in a massive number of educational</title>
        <p>nEvelop-O
CEUR
htp:/ceur-ws.org
ISN1613-073</p>
        <p>
          CEUR
videos, losing their way toward their learning goals.
The massive number of online learning videos can
become a hindrance to learning, with the challenge being
videos[
          <xref ref-type="bibr" rid="ref10 ref12 ref5">12, 5, 10</xref>
          ]. An online search of learning videos in
the video-sharing platforms reveals thousands of videos
with varying content, presentation styles, duration, and
video quality. It is hard to find learning videos that are
suitable to background knowledge, learning objectives,
and preferred learning. style[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Moreover,
navigatthrough video content, which means watching parts of
the video to determine its appropriateness[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This takes
a long time for just one video and clearly cannot be
extended to a practical scale across multiple videos[
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>Our main contribution are as follows. To address these
challenges aforementioned, we propose a CONREC
system that recommends the next learning videos based
on knowledge(a.k.a concept) learned from a number of
ifrst to show that CONREC ranks recommended videos
based on the user’ understood and not-yet-understood
knowledge as a concept map, prioritizing videos that
address the latter. Secondly, we present a general-purpose
pipeline that extracts semantic-enriched, highly-relevant
concepts from a learning video. Thirdly, we propose a
recommendation method that computes the similarity
between the user’ concept map and concepts in the learning
video. Last, we implement an web-based learning
application that supports an adaptive navigation of learning
videos as well as a visual interface of concept maps.</p>
        <p>Our code and demo of CONREC is available in</p>
        <sec id="sec-1-4-1">
          <title>2.1.2. Video segmentation</title>
        </sec>
      </sec>
      <sec id="sec-1-5">
        <title>Segmentation[5] is utilized to systematically represent</title>
        <p>the sequential information in learning videos. From the
learning videos returned by the search, we extract
subtitles, and a boundary-based segmentation is performed on
the subtitle text to extract a certain number of concepts.
When video texts are provided in the learning video, a
list of all texts present in the video is first retrieved. From
this list, we determine the existence of video texts and</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. CONREC</title>
      <p>This section introduces an overview of CONREC. We first describe how we extract semantically enriched, highly relevant concepts from video content. The extracted concepts can be added to the user's concept map or used to model the information of a learning video. Next, we present our recommendation method, which computes the similarity between concept vectors, i.e., one-hot representations of concepts, and continuously re-ranks the candidate videos by applying the method to them.</p>
      <sec id="sec-2-1">
        <title>In this subsection, we model knowledge of the video con</title>
        <p>tent as concepts that subjects taught in the learning video.
To extract concepts from the video, we introduce data
sources used, video segmentation, and concept
acquisition.</p>
        <sec id="sec-2-1-1">
          <title>2.1.1. Data sources</title>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>Our data source is YouTube video-sharing platform as our primary source that boasts approximately 37 million pieces of content, encompassing videos in a multitude of languages from across the globe. Notably, YouTube</title>
        <p>1–6
a concept vector in which concepts in the   are
understood explicitly by the user.</p>
      </sec>
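          <p>For illustration, the following is a minimal sketch of this search-and-filter step, assuming the official google-api-python-client library and a valid YouTube Data API v3 key; the function and parameter names are ours, not part of CONREC's released code.</p>
          <preformat>
# Hypothetical sketch: search YouTube and keep only videos between
# 10 and 120 minutes long, as described above.
import re
from googleapiclient.discovery import build

MIN_SECONDS = 10 * 60    # videos shorter than 10 minutes are dropped
MAX_SECONDS = 120 * 60   # videos longer than 120 minutes are dropped

def iso8601_to_seconds(duration):
    """Convert an ISO 8601 duration such as 'PT1H2M3S' to seconds."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", duration)
    h, mins, s = (int(g) if g else 0 for g in m.groups())
    return h * 3600 + mins * 60 + s

def search_lecture_videos(api_key, query, n):
    """Return search results for `query`, filtered by video length."""
    youtube = build("youtube", "v3", developerKey=api_key)
    found = youtube.search().list(
        q=query, part="id", type="video", maxResults=n).execute()
    ids = [item["id"]["videoId"] for item in found["items"]]
    details = youtube.videos().list(
        part="contentDetails,snippet", id=",".join(ids)).execute()
    kept = []
    for item in details["items"]:
        seconds = iso8601_to_seconds(item["contentDetails"]["duration"])
        if MIN_SECONDS &lt;= seconds &lt;= MAX_SECONDS:  # length filter
            kept.append({"id": item["id"],
                         "title": item["snippet"]["title"],
                         "seconds": seconds})
    return kept
          </preformat>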
      <sec id="sec-2-3">
        <title>Similarly, we present the definition of learned concepts as follows.</title>
        <p>extract the corresponding texts. We have specifically
set up fixed interval of 5 minutes, which typically
corresponds to approximately 300-400 words. In addition, we
have set the margin to include words within a range of 5
seconds in the existing segmentation, preventing them
from being included in the following segment. We have
made it possible to adjust the range of the margin based
on input parameters. The adjustment of the margin can
be seen in Figure 1(A).</p>
        <sec id="sec-2-3-1">
          <title>2.1.3. Concepts acquisition</title>
        </sec>
      </sec>
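          <p>A minimal sketch of this fixed-interval segmentation, assuming subtitles arrive as (start-time, text) pairs; the 5-minute interval and 5-second margin mirror the defaults above, and all names are illustrative.</p>
          <preformat>
INTERVAL = 5 * 60  # 5-minute segments (roughly 300-400 words each)
MARGIN = 5         # words within 5 s of a boundary stay in the earlier segment

def segment_subtitles(entries, interval=INTERVAL, margin=MARGIN):
    """Group (start_seconds, text) subtitle entries into fixed segments."""
    segments = {}
    for start, text in entries:
        index = int(start // interval)
        # An entry beginning within `margin` seconds after a boundary is
        # kept in the preceding segment rather than opening the next one.
        if index &gt; 0 and start - index * interval &lt; margin:
            index -= 1
        segments.setdefault(index, []).append(text)
    return [" ".join(texts) for _, texts in sorted(segments.items())]
          </preformat>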
      <sec id="sec-2-4">
        <title>To obtain concepts from the texts of the segments, we</title>
        <p>
          used Wikification[
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], a method that utilizes information
existing in Wikipedia, such as pages, page texts, anchor
texts, and links. It automatically extracts concepts from
sentences and identifies semantically similar subjects in
Wikipedia based on the words used in the sentences.
2.2. Recommendation Based on Concept
        </p>
        <p>Maps</p>
      </sec>
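          <p>A hedged sketch of the concept-acquisition call, following the public HTTP interface of the JSI Wikifier behind [16] (http://www.wikifier.org); the user key must be obtained from that service, and the exact response fields should be checked against its documentation.</p>
          <preformat>
import requests

def wikify_segment(text, user_key, lang="en"):
    """Annotate one segment and return (name, URL, PageRank) concepts."""
    resp = requests.post(
        "http://www.wikifier.org/annotate-article",
        data={"userKey": user_key, "text": text, "lang": lang},
    )
    resp.raise_for_status()
    return [
        {"name": a["title"], "url": a["url"], "pagerank": a["pageRank"]}
        for a in resp.json().get("annotations", [])
    ]
          </preformat>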
      <sec id="sec-2-5">
        <title>In this subsection, we define and present several con</title>
        <p>cepts related to the concept map-based recommendation • ⃖⃖⃖′⃖⃗ is the concept vector in which concepts in the
method. learning video   understood explicitly by the</p>
        <p>A collection of learning videos is a finite set of user.
videos related to a specific subject, such as the recom- • ⃖⃖⃖⃗′ is the concept vector in which concepts are
mendation system, denoted as  = { 1,  2, ...,   }, where learned from a multiple number of learning
  represents a learning video. Each video  is consid- videos the user has watched. Note that ⃖⃖⃖⃗′ is all
ered a document containing its video texts, which are of the concept maps learned from the learning
provided in the form of video subtitles or speech scripts. videos that the user watched related to a
particu</p>
        <p>Concept map represents the set of concepts that can lar subject.
be learned or have been learned from the watched videos.</p>
        <p>Let us denote the concept map as  =  ∪  ′, where Given  ,   , and  , our recommendation method
com and  ′ indicate sets of learnable or learned concepts, putes the similarity between concept vectors as the
folrespectively. Both sets contain one or more concepts, lowing equation.
referred to as  = { 1,  2, ...,   }, where each concept   is
mentioned in the video texts.</p>
        <p>For eficient comparison between concept maps, we ( ,   ,  ) =  ⋅ ( ⃖⃖⃖⃖⃗, ⃖⃖⃖⃖⃗) + (1 −  ) ⋅ ( ⃖⃖⃖⃗′ − ⃖⃖⃖′⃖⃗, ⃖⃖⃖⃖⃗)
vectorize the concept map using a one-hot representation (1)
of concepts, which is referred to as the concept vector , where  function denotes either Jaccard or Cosine
and denoted as ⃖⃗. The size of ⃖⃗ is determined by the similarity computations and  denotes a weight value
nuFmobrecrlaorfitcyo,nwceepvtisewin le.arnable concepts as concepts betHweereen, 0( an⃖d⃖⃖⃖⃗1,.⃖⃖⃖⃖⃗) determines how many learnable
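        <p>As a toy illustration of this vectorization (the vocabulary and concepts here are invented, not from the paper):</p>
        <preformat>
# Concept vocabulary collected from all watched videos (order is fixed).
vocabulary = ["collaborative filtering", "matrix factorization",
              "cosine similarity", "cold start"]

# A video mentioning two of the four concepts yields this one-hot vector.
concepts_in_video = {"collaborative filtering", "cold start"}
c_v = [1 if c in concepts_in_video else 0 for c in vocabulary]
# c_v == [1, 0, 0, 1]
        </preformat>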
        <p>For clarity, we view learnable concepts as concepts that are not clearly understood by a user from the most recent learning video the user has watched. In other words, learnable concepts are concepts that the user wants to learn from another lecture video. We define learnable concepts from the most recent learning video as follows.</p>
        <list list-type="bullet">
          <list-item><p>Given the most recent learning video $v_r$, $\vec{c}_{v_r}$ is the concept vector marking the concepts in $v_r$ that are not yet understood by the user. Conversely, $\vec{c}^{\,\prime}_{v_r}$ is the concept vector marking the concepts in $v_r$ that are understood explicitly by the user.</p></list-item>
        </list>
        <p>Similarly, we present the definition of learned concepts as follows.</p>
        <list list-type="bullet">
          <list-item><p>$\vec{c}^{\,\prime}_{v}$ is the concept vector marking the concepts in a learning video $v$ that are understood explicitly by the user.</p></list-item>
          <list-item><p>$\vec{c}^{\,\prime}_{u}$ is the concept vector marking the concepts the user $u$ has learned from the multiple learning videos they have watched. Note that $\vec{c}^{\,\prime}_{u}$ aggregates all of the concept maps learned from the videos the user watched on a particular subject.</p></list-item>
        </list>
        <p>Given a candidate video $v$ with concept vector $\vec{c}_{v}$, the most recent video $v_r$, and the user $u$, our recommendation method computes the similarity between concept vectors as the following equation:</p>
        <p>$score(v, v_r, u) = w \cdot f(\vec{c}_{v_r}, \vec{c}_{v}) + (1 - w) \cdot f(\vec{c}^{\,\prime}_{u} - \vec{c}^{\,\prime}_{v_r}, \vec{c}_{v})$ (1)</p>
        <p>where $f$ denotes either the Jaccard or cosine similarity computation and $w$ denotes a weight value between 0 and 1. Here, $f(\vec{c}_{v_r}, \vec{c}_{v})$ determines how many learnable concepts are included in the new video, while $f(\vec{c}^{\,\prime}_{u} - \vec{c}^{\,\prime}_{v_r}, \vec{c}_{v})$ reduces the importance of previously learned concepts while maintaining the concepts learned from the most recent learning video.</p>
        <p>The weight value determines the degree of importance given to learning videos containing learnable concepts. In other words, we can restrict videos containing learnable concepts from being ranked high when recommending the next videos. It is worth noting that we allow the user to adjust $w$ according to the domain.</p>
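        <p>Equation (1) reduces to a few lines over one-hot vectors. The sketch below uses numpy and Jaccard similarity as $f$; the argument names mirror the definitions above and are otherwise illustrative.</p>
        <preformat>
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two one-hot concept vectors."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def score(c_v, c_vr, c_learned_u, c_learned_vr, w=0.5, f=jaccard):
    """w * f(c_vr, c_v) + (1 - w) * f(c'_u - c'_vr, c_v), as in Eq. (1)."""
    # Concepts learned from older videos, excluding the most recent one.
    residual = np.clip(c_learned_u - c_learned_vr, 0, 1)
    return w * f(c_vr, c_v) + (1 - w) * f(residual, c_v)
        </preformat>
        <p>Candidate videos are then sorted by this score in descending order to produce the re-ranked recommendation list.</p>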
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Demonstration</title>
      <sec id="sec-3-1">
        <title>This section presents our implementation.</title>
        <p>3.1. Application Overview
CONREC supports an adaptive navigation of video
content for learners to explore videos based on the concept
map of a user. It searches YouTube videos based on the
concept map given by users and return re-rank videos
based on the score calculated by our recommendation
method. Figure 1 shows four panels of the main interface:
(A) User Configuration, (B) Search based on concept map,
(C) Feature tabs, and (D) Contents.
3.2. Main Features</p>
      </sec>
      <sec id="sec-3-2">
        <title>In this subsection, we give an explanation of user config</title>
        <p>uration, search module, and feature tabs.
3.2.1. User configuration The feature tabs consists of four sub-tabs: new learning,
In the sidebar panel, these parameters are used to spec- history, concept map, and watching tabs.
ify configurations according to the characteristics of the In the new learning tab, a user can see candidates
learning domain. There are four options that users can of learning videos through the recommendation system
determine (1) the number of videos to search, (2) how when searched in the top search bar. In the Figure 1 (D),
many seconds to divide the video to extract key concepts, each video is composed of four components: the watch
(3) how many concepts to extract from the sections di- button, the video, the recommendation score, the video
vided by seconds in the video, and (4) how much weight title, and the video description. First, based on the results
to give when measuring similarity. Especially, he weight of a keyword search in the search box, learning videos
value indicates the learnable concepts in the video. It appear for the learners. When the ’watch button’ of a
means value closer to 1 suggests that the video contains displayed video is clicked, the selected video is classified
learnable concepts, while a value closer to 0 indicates that as a ’watched video’, and later, the concepts learned from
the video primarily consists of learned concepts. With that video are used for recommendations. Furthermore,
this parameter, users can tailor and receive video lecture the clicked video can be viewed in the ’watching video
recommendations based on their preferences. tab’. The recommended score represents a value
calculated by the recommendation algorithm. The higher the
recommended score a video has, the higher it is placed.
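          <p>The four options could be carried in a small configuration object, sketched below; the field names and defaults are illustrative rather than CONREC's actual values (the 300-second default matches the interval of Section 2.1.2).</p>
          <preformat>
from dataclasses import dataclass

@dataclass
class UserConfig:
    num_videos: int = 20            # (1) number of videos to search
    segment_seconds: int = 300      # (2) segment length for concept extraction
    concepts_per_segment: int = 10  # (3) concepts extracted per segment
    weight: float = 0.5             # (4) w in Eq. (1): near 1 favors learnable
                                    #     concepts, near 0 favors learned ones
          </preformat>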
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Search module based on concept map</title>
          <p>When the user enters a keyword into the search bar and clicks the search button, the system sends the search keyword to the YouTube API. Videos are retrieved according to the configured number of videos and stored as video objects. Each video object contains the title, URL, video description, and video length. Next, the system extracts video texts from the selected videos when they provide subtitles; otherwise, we obtain the texts through automatic speech recognition (ASR) provided by the YouTube platform. We then divide the video texts into a fixed number of segments. After the division, the Wikification method is applied to all segments to extract up to the configured number of concepts; fewer concepts are extracted when there are not enough keywords. Each extracted concept includes a name, URL, and PageRank score.</p>
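          <p>As a rough end-to-end view, the search module could compose the hypothetical helpers sketched in Section 2 (search_lecture_videos, segment_subtitles, wikify_segment, score); subtitle retrieval is abstracted behind a get_subtitles callback, and everything here is illustrative.</p>
          <preformat>
import numpy as np

def recommend(api_key, wikifier_key, query, config, get_subtitles,
              c_vr, c_learned_u, c_learned_vr, vocabulary):
    """Return candidate videos re-ranked by the Eq. (1) score."""
    ranked = []
    for video in search_lecture_videos(api_key, query, config.num_videos):
        segments = segment_subtitles(get_subtitles(video["id"]),
                                     interval=config.segment_seconds)
        names = {c["name"] for seg in segments
                 for c in wikify_segment(seg, wikifier_key)}
        # One-hot concept vector of the candidate over a shared vocabulary.
        c_v = np.array([1 if c in names else 0 for c in vocabulary])
        ranked.append((score(c_v, c_vr, c_learned_u, c_learned_vr,
                             w=config.weight), video))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [video for _, video in ranked]
          </preformat>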
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Feature tabs</title>
          <p>The feature tabs consist of four sub-tabs: new learning, history, concept map, and watching.</p>
          <p>In the new learning tab, a user can see candidate learning videos produced by the recommendation system after searching in the top search bar. In Figure 1(D), each entry is composed of the watch button, the video, the recommendation score, the video title, and the video description. First, based on the results of a keyword search in the search box, learning videos appear for the learners. When the 'watch button' of a displayed video is clicked, the selected video is classified as a 'watched video', and the concepts learned from that video are later used for recommendations. Furthermore, the clicked video can be viewed in the watching tab. The recommendation score represents the value calculated by the recommendation algorithm; the higher a video's score, the higher it is placed.</p>
          <p>The history tab displays a list of videos that the user has watched. Each entry includes the video, the video title, a re-watch button, and segment information about the concepts contained in the video. The Re-Watch button allows learners to watch a video again, enabling them to re-learn sections they might not have fully grasped initially. Additionally, learners can check segment information for the videos they have watched, which displays the learnable and learned concepts for each segment of the video. This segment information is provided in a table, with each concept marked as either 0 or 1, depending on whether it was understood by the learner.</p>
          <p>The concept map tab visualizes the concepts from the watched videos as a network graph. In Figure 2, each sky-blue node represents a watched video, each black node represents a concept that was not understood from the video, and each red node signifies a concept that was understood. The size of a concept node increases with the PageRank value obtained through the Wikifier, with higher values resulting in larger nodes. If different videos share the same concept and link to the same Wikipedia concept, they are interconnected.</p>
          <p>The watching tab allows users to view the video they selected from either the new learning tab or the history tab. It shows the selected video, the segments related to the video, and the concepts contained within each segment. The number of segments and concepts displayed reflects the settings made in the options of Figure 1(A). Each segment contains concepts up to the set number, displayed in order of importance. Initially, all concepts are in a 'learnable' state and are shown in black. As users watch the video and come to understand a concept, they can click on it, turning it red, which signifies a 'learned' concept. The concepts learned in the watching tab are reflected in the history tab and the concept map tab. These learned concepts are then used as a basis for recommending new learning videos.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Learning process based on concept maps</title>
      <sec id="sec-4-1">
        <title>In this section, we describe the learning process of on</title>
        <p>line videos based on concept maps. The following user
scenario illustrates a continuous learning process.</p>
        <p>In the video-sharing platform, a user searches for
learning videos relating to keywords of the concept she wants
to learn (e.g., recommender system). The video-sharing
platform retrieves multiple videos. She can select the
first video from them to learn about the concept and
watch the selected video. As shown in Figure 3, she can
interactively mark either learnable or learned concepts
in the middle or at the end of the video she is watching.
Her concept map is incrementally created whenever she
marks either learned or learnable concepts.</p>
        <p>Regardless of whether she watches the video
completely or not, she can select the next videos to
understand learnable or learned concepts. At that time, the
rank of the lecture videos is rearranged according to
the current state of her concept map. Here, the aforementioned user
settings affect the rank of the videos.</p>
        <p>The visualized concept map allows her to check what
she has learned. In addition, the co-occurrence of
concepts in the watched videos is displayed. Moreover, she
can watch the learning video again in her watched
history if she did not understand some concepts or wants
to re-learn them. She searches for keywords of either
learnable or learned concepts to obtain additional
candidates for lecture videos.</p>
      <sec id="sec-4-2">
        <title>Last, she can perform these learning processes repeatedly until she understands the concept. The learning process terminates when concepts are completely understood by either the system or her intention.</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and future work</title>
      <p>To summarize, we have proposed CONREC, a recommendation system tailored for exploring learning videos based on concept maps. First, CONREC not only identifies suitable learning videos based on the knowledge the user has learned but also re-ranks them considering the user's understanding. Second, CONREC navigates video content adaptively, dividing videos into segments and visualizing concepts as network graphs. Third, we have defined both learnable and learned concepts and represented the concept vector as a one-hot representation of concepts. Last, our graphical interface ensures that users can customize their learning experiences, enhancing content relevance and engagement.</p>
      <p>In future work, we will improve CONREC to align video recommendations with potential career paths, providing learners with a personalized learning path that not only enriches their knowledge but also enhances their career prospects.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgement</title>
      <sec id="sec-6-1">
        <title>This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government (No.2021R1F1A1050937)</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>Prerequisite relation learning for concepts in MOOCs</article-title>
          ,
          <source>in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1447</fpage>
          -
          <lpage>1456</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zeng</surname>
          </string-name>
          , et al.,
          <article-title>Mooccubex: a large knowledge-centered repository for adaptive learning in moocs</article-title>
          ,
          <source>in: Proceedings of the 30th ACM International Conference on Information &amp; Knowledge Management</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4643</fpage>
          -
          <lpage>4652</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Madhyastha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lawrence</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rajan</surname>
          </string-name>
          ,
          <article-title>Inferring concept prerequisite relations from online educational resources</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>33</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>9589</fpage>
          -
          <lpage>9594</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Impey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Formanek</surname>
          </string-name>
          ,
          <article-title>Moocs and 100 days of covid: Enrollment surges in massive open online astronomy classes during the coronavirus pandemic</article-title>
          ,
          <source>Social Sciences Humanities Open</source>
          <volume>4</volume>
          (
          <year>2021</year>
          )
          <fpage>100177</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dormann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kreitmayer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Noss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <article-title>Watch less and uncover more: Could navigation tools help users search and explore videos?</article-title>
          ,
          <source>in: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>90</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <article-title>Reinforced moocs concept recommendation in heterogeneous information networks</article-title>
          ,
          <source>ACM Transactions on the Web</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chtouki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Harroud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Khalidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bennani</surname>
          </string-name>
          ,
          <article-title>The impact of YouTube videos on the student's learning</article-title>
          ,
          <source>in: 2012 International Conference on Information Technology Based Higher Education and Training (ITHET)</source>
          , IEEE,
          <year>2012</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cuthrell</surname>
          </string-name>
          ,
          <article-title>YouTube: Educational potentials and pitfalls</article-title>
          ,
          <source>Computers in the Schools</source>
          <volume>28</volume>
          (
          <year>2011</year>
          )
          <fpage>75</fpage>
          -
          <lpage>85</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Snelson</surname>
          </string-name>
          ,
          <article-title>The benefits and challenges of YouTube as an educational resource</article-title>
          ,
          <source>The Routledge Companion to Media Education, Copyright, and Fair Use</source>
          <volume>48</volume>
          (
          <year>2018</year>
          )
          <fpage>109</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Sung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Conceptguide: Supporting online video learning with concept map-based recommendation of learning path</article-title>
          ,
          <source>in: Proceedings of the Web Conference</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , pp.
          <fpage>2757</fpage>
          -
          <lpage>2768</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P. G.</given-names>
            <surname>Lange</surname>
          </string-name>
          ,
          <source>Informal Learning on YouTube</source>
          , John Wiley &amp; Sons, Ltd,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strobelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tompkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fredericks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Huff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Higgins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Strezhnev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Komisarchik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pfister</surname>
          </string-name>
          ,
          <article-title>booc.io: An education system with hierarchical concept maps and dynamic non-linear learning plans</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics 23</source>
          (
          <year>2016</year>
          )
          <fpage>571</fpage>
          -
          <lpage>580</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jawahar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tapaswi</surname>
          </string-name>
          ,
          <article-title>Unsupervised audio-visual lecture segmentation</article-title>
          ,
          <source>in: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision</source>
          (WACV), IEEE,
          <year>2023</year>
          , pp.
          <fpage>5221</fpage>
          -
          <lpage>5230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A novel system for visual navigation of educational videos using multimodal cues</article-title>
          ,
          <source>in: Proceedings of the 25th ACM international conference on Multimedia</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1680</fpage>
          -
          <lpage>1688</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mahapatra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mariappan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yadav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <article-title>VideoKen: Automatic video summarization and course curation to support learning</article-title>
          ,
          <source>in: Companion Proceedings of the The Web Conference</source>
          <year>2018</year>
          ,
          <year>2018</year>
          , pp.
          <fpage>239</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Brank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Leban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grobelnik</surname>
          </string-name>
          ,
          <article-title>Annotating documents with relevant wikipedia concepts</article-title>
          ,
          <source>Proceedings of SiKDD 472</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>