<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Transforming Web Knowledge into Actionable Knowledge Graphs for Robot Manipulation Tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Beetz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philipp Cimiano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michaela Kümpel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrico Motta</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ilaria Tiddi</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan-Philipp Töberg</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cluster of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University</institution>
          ,
          <addr-line>Bielefeld</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute for Artificial Intelligence, University of Bremen</institution>
          ,
          <addr-line>Bremen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Knowledge Media Institute, The Open University</institution>
          ,
          <addr-line>Milton Keynes</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Knowledge Representation and Reasoning Group, Vrije Universiteit Amsterdam</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>One of the visions in AI-based robotics is a household robot that can autonomously handle a variety of meal preparation tasks. Based on this scenario, we present a best-practice tutorial on how to create actionable knowledge graphs that a robot can use to execute task variations of cutting actions. We implemented a solution for this task that integrates all necessary software components into the framework of the robot control process. In this tutorial, we focus on knowledge acquisition, knowledge representation and reasoning, and simulated robot action execution, bringing these components together into a learning environment that, in the extended version, introduces the whole control process of Cognitive Robotics. In particular, the tutorial details the concepts a knowledge graph should include for robot action execution, how web knowledge can be automatically acquired for the domain of cutting fruits, and how the created knowledge graph can be used to let robots execute tasks like slicing a cucumber or quartering an apple. The learning environment follows an immersive approach, using a physics-based simulation environment for visualization that helps illustrate the concepts taught in the tutorial. Tutorial resource: https://github.com/Food-Ninja/Tutorial_ESWC_HHAI</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Representation</kwd>
        <kwd>Cognitive Robotics</kwd>
        <kwd>Web Knowledge</kwd>
        <kwd>Actionable Knowledge</kwd>
        <kwd>Knowledge Extraction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        We envision household robots that can be placed in any kitchen and given an arbitrary recipe
from the Web, which they can understand and parse into action plans that break down into
executable body motions performed with the objects available in the environment. For this,
robots need to be able to perform meal preparation tasks with any tool, on any available object,
and for a variety of task variations. This tutorial is based on prior research that proposed a methodology for
creating actionable knowledge graphs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which presents a solution for creating knowledge graphs that link
object to action and environment information and thus make them actionable, as well as a
knowledge engineering methodology specifically aligned to creating ontologies for meal
preparation tasks, which can be used to parameterise robot action plans in order to perform task variations
of cutting actions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>There has been considerable research on the creation of knowledge graphs, leading to many domain
knowledge graphs that have proven effective for question answering. Usually, these knowledge
graphs contain object information (e.g. about food objects, recipes, people, or books). To make such
knowledge graphs actionable, it is important to link the contained object knowledge to environment
knowledge. If robots are to use the knowledge graphs for action execution, they further need to include
action knowledge.</p>
      <p>This implies that actionable knowledge graphs do not aim at perfectly modeling object knowledge;
instead, they focus on reusing existing knowledge sources and on modeling and linking environment and
action knowledge, so that the contained knowledge becomes applicable in agent applications. This
tutorial details the concepts necessary for creating an actionable knowledge graph for the example
domain of Cutting Fruits and Vegetables, which robotic agents shall use to infer the
correct body motions for quartering an apple or dicing a cucumber.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Structure of the Tutorial</title>
      <p>
        The tutorial is centered around the knowledge engineering methodology introduced in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and its
application to the exemplary task of Cutting Fruits &amp; Vegetables. In general, the methodology consists
of five steps for creating actionable knowledge graphs that a robot can employ to handle manipulation
tasks, as can be examined in Figure 1. In the following we present a brief summary of these steps:
1) Defining Motion Parameters: Definition of the domain- and action-dependent parameters
influencing the execution of the target manipulation action. An example is the knife position for
cutting tasks.
2) Collecting Knowledge Sources: Collection of different sources for three types of knowledge:
action knowledge, object knowledge &amp; knowledge for linking action and object knowledge.
3a) Extraction of Action Groups &amp; Affordances: Collect information about the manipulation
action and its associated synonyms and hyponyms. This information is used to organize different
action verbs into groups based on similarities in their motion parameters. For each so-called
action group, a representative is chosen and its affordances are created.
3b) Extraction of Object Knowledge &amp; Dispositions: Collect information about objects
participating in the manipulation action (e.g. tools, environments, targets). Then collect information
and concrete values for the task-specific object properties that influence the action execution.
      </p>
      <p>
        This knowledge is represented through dispositions.
4) Relate Object to Action Knowledge: Relate the action affordances to the object dispositions in
an ontology by re-using relations from the SOMA [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] ontology.
5) Link to Cognitive Architecture: Map concepts in the generalized manipulation plan to their
representation in the ontology and use the architecture’s perception system to ground objects
and their properties.
      </p>
      <p>In this tutorial we present the whole methodology but focus on steps 1), 3) and 4), which cover
the knowledge collection and extraction from (Semantic) Web resources.</p>
      <sec id="sec-2-1">
        <title>2.1. Defining Motion Parameters</title>
        <p>
          In order to create an actionable knowledge graph for the domain of cutting fruits and vegetables, we
first have to investigate the motion parameters that influence action execution. For this, one can
consult a lexical resource like WordNet [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] to find commonly used synonyms of cutting, such as
slicing, dicing, or halving.
        </p>
        <p>We then investigate how different action verbs influence task execution, which results in the following
motion parameters:
- number of repetitions: Cutting tasks vary in the number of repetitions to be executed. Sometimes
a cut is performed only once, while other tasks require cutting the whole object.
- cutting position: Cutting tasks also vary in the applied cutting position. Halving, for example, requires a
different position than slicing.
- result object: Cutting tasks result in objects of different amount and shape.
- prior actions: Some objects require a prior action (such as peeling) to be executed.
- dependent tasks: Some tasks depend on prior tasks (i.e. quartering depends on halving).</p>
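        <p>To make this parameter set concrete, the following sketch groups the motion parameters into a small data structure; the class and field names are illustrative and not taken from the tutorial code:</p>
        <preformat>
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CuttingMotionParameters:
    """Illustrative container for the motion parameters of one cutting verb."""
    verb: str                            # e.g. "slice", "halve", "quarter"
    repetitions: str                     # "once" vs. "whole object"
    cutting_position: str                # e.g. "center" for halving
    result_object: str                   # amount/shape of the resulting pieces
    prior_action: Optional[str] = None   # e.g. "peel", if required beforehand
    depends_on: Optional[str] = None     # e.g. quartering depends on halving

halving = CuttingMotionParameters(
    verb="halve", repetitions="once", cutting_position="center",
    result_object="two halves")
quartering = CuttingMotionParameters(
    verb="quarter", repetitions="once", cutting_position="center",
    result_object="four quarters", depends_on="halve")
```
        </preformat>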
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Extraction of Relevant Action Knowledge from the Web</title>
        <p>The relevant action knowledge we focus on consists of the different verbs that are associated with the
manipulation action. This includes the main verb (e.g. cut) as well as all of its hyponyms and synonyms.
Additionally, action knowledge covers the properties of the different verbs that distinguish their action
execution and generally influence the manipulation action.</p>
        <p>
          In the tutorial we showcase the action knowledge extraction for the exemplary task of Cutting. We
begin by extracting all synonyms and hyponyms from WordNet [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] and VerbNet [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], two expert-curated
resources for lexical information and verb usage. For the verb cut, we extract 211 verbs from
WordNet and 147 verbs from VerbNet. After pre-processing and duplicate removal, 181 verbs remain.
These remaining verbs are then filtered based on their relevance for the domain using an
instruction-focused corpus from WikiHow. We set a threshold of 100 occurrences in a specific part of an article
across the whole corpus to warrant inclusion of a verb in future steps. With this restriction, only
46 verbs remain. However, manual post-processing is still needed, since some important verbs
are missing (e.g. halve or quarter) while others are very general and thus not relevant for cutting (e.g. make or
pull).
        </p>
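        <p>The corpus-based filtering step described above can be sketched as a simple frequency threshold; the toy corpus and the threshold of 2 below are stand-ins for the WikiHow corpus and the threshold of 100 used in the tutorial:</p>
        <preformat>
```python
from collections import Counter

def filter_verbs_by_frequency(candidate_verbs, corpus_tokens, threshold):
    """Keep only the candidate verbs occurring at least `threshold` times."""
    counts = Counter(corpus_tokens)
    return sorted(v for v in candidate_verbs if counts[v] >= threshold)

# Toy stand-in for the instruction-focused WikiHow corpus.
corpus = "slice the apple then slice the bread and halve the lemon".split()
print(filter_verbs_by_frequency({"slice", "halve", "pull"}, corpus, threshold=2))
# prints ['slice']
```
        </preformat>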
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Extraction of Relevant Object Knowledge from the Web</title>
        <p>
          For the object knowledge, we focus on information about objects involved in the manipulation action,
their properties, usage and specific purpose. In general we showcase a pipeline similar to the one
explained in Section 2.2. We begin by extracting all relevant objects from domain-specific taxonomies.
For our focus on fruits and vegetables, we query FoodOn [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] using SPARQL, resulting in 257 unique
fruits and 31 unique vegetables. Since not all of these fruits and vegetables are equally relevant, and
enough information needs to exist to evaluate their task-specific properties, we again use
instruction-focused corpora to filter them based on their occurrence data. In this case we also look at the recipe
corpus Recipe1M+ [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and only include fruits and vegetables that occur in at least 1% of any part of these two
corpora. This filtering step results in 15 remaining fruits and one remaining vegetable.
        </p>
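        <p>Conceptually, the taxonomy extraction collects all transitive subclasses of a root class such as fruit food product. A minimal stand-in in plain Python (the subclass edges below are toy examples; the tutorial obtains them from FoodOn via SPARQL):</p>
        <preformat>
```python
# Toy stand-in for FoodOn's rdfs:subClassOf edges (child: parent).
sub_class_of = {
    "apple": "pome fruit",
    "pome fruit": "fruit food product",
    "lemon": "citrus fruit",
    "citrus fruit": "fruit food product",
    "cucumber": "vegetable food product",
}

def transitive_subclasses(root, edges):
    """All classes reachable upward to `root`, i.e. subclasses of root."""
    def is_under(cls):
        while cls in edges:
            cls = edges[cls]
            if cls == root:
                return True
        return False
    return sorted(c for c in edges if is_under(c))

print(transitive_subclasses("fruit food product", sub_class_of))
# prints ['apple', 'citrus fruit', 'lemon', 'pome fruit']
```
        </preformat>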
        <p>
          Lastly, we present our ongoing efforts in automating the extraction of task-specific object property
values. For this, we compare three different pre-trained embeddings (GloVe [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], NASARI [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] and
ConceptNet Numberbatch [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]), two large language models (ChatGPT and GPT-4), as well as two
techniques for extracting this information from Recipe1M+, on the task of extracting the existing
anatomical parts for a given fruit. Our preliminary results and the conditions under which they were obtained can be examined in Table 1.
        </p>
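        <p>To illustrate the embedding-based approaches, candidate part words can be ranked by cosine similarity to the fruit's vector; the vectors below are hand-made toy values standing in for pre-trained GloVe, NASARI or Numberbatch embeddings:</p>
        <preformat>
```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-made toy vectors; the actual comparison loads pre-trained embeddings.
vectors = {
    "apple": [0.9, 0.1, 0.2],
    "core":  [0.8, 0.2, 0.3],
    "peel":  [0.7, 0.3, 0.1],
    "wheel": [0.0, 0.9, 0.1],
}

def rank_candidate_parts(fruit, candidates, vectors):
    """Rank candidate anatomical parts by similarity to the fruit vector."""
    return sorted(candidates,
                  key=lambda c: cosine(vectors[fruit], vectors[c]),
                  reverse=True)

print(rank_candidate_parts("apple", ["wheel", "peel", "core"], vectors))
# prints ['core', 'peel', 'wheel']
```
        </preformat>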
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Linking Action to Object Knowledge in the Ontology</title>
        <p>
          For connecting and linking the action to the object knowledge, we rely on the concepts of disposition
and affordance. In general, a disposition describes a property of an object that enables an agent
to perform a certain task [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], as in a knife can be used for cutting, whereas an affordance describes what
an object or the environment offers an agent [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], as in an apple affords to be cut.
        </p>
        <p>
          In recent works like SOMA [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], both concepts are set in relation by stating that dispositions allow
objects to participate in events realizing affordances, which are more abstract descriptions of dispositions.
This is achieved in the TBOX by using the affordsTask, affordsTrigger and hasDisposition
relations from SOMA. An example for the disposition of Peelability follows this pattern:
        </p>
        <preformat>hasDisposition(… (affordsTask …) (affordsTrigger …) …)</preformat>
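        <p>The disposition-affordance pattern can be illustrated with plain subject-predicate-object triples; the relation names are the SOMA relations mentioned above, while the class names (Knife, Peelability, Peeling, Fruit) are illustrative assumptions rather than the tutorial's actual TBOX:</p>
        <preformat>
```python
# Illustrative triples following the SOMA pattern: an object class has a
# disposition, and the disposition names the task it affords and the
# trigger object it is realized on.  Class names are assumptions.
triples = [
    ("Knife", "hasDisposition", "Peelability"),
    ("Peelability", "affordsTask", "Peeling"),
    ("Peelability", "affordsTrigger", "Fruit"),
]

def objects_of(subject, predicate, triples):
    """All objects o such that (subject, predicate, o) is asserted."""
    return [o for s, p, o in triples if s == subject and p == predicate]

for disposition in objects_of("Knife", "hasDisposition", triples):
    print(disposition, "affords", objects_of(disposition, "affordsTask", triples))
# prints: Peelability affords ['Peeling']
```
        </preformat>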
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Tutorial Material</title>
      <p>For the tutorial, we made our implementation available as Jupyter Notebooks in a GitHub
repository1. Participants are encouraged to download the notebooks and follow along, but since the notebooks
are presented in depth during the talks, hands-on participation is optional.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The tutorial is organized by the SAIL Network in collaboration with the Joint Research Center on
Cooperative and Cognition-enabled AI (CoAI JRC). The research towards this tutorial has been
partially supported by the German Federal Ministry of Education and Research, Project-ID 16DHBKI047
“IntEL4CoRo - Integrated Learning Environment for Cognitive Robotics”, University of Bremen, as well
as by the German Research Foundation (DFG) as part of CRC (SFB) 1320 “EASE - Everyday Activity Science
and Engineering”, University of Bremen (http://www.ease-crc.org/). The research was conducted in
subproject R04 “Cognition-enabled execution of everyday actions”.
1 https://github.com/Food-Ninja/Tutorial_ESWC_HHAI</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kümpel</surname>
          </string-name>
          ,
          <article-title>Actionable knowledge graphs - how daily activity applications can benefit from embodied web knowledge</article-title>
          ,
          <year>2024</year>
          . doi:10.26092/elib/2936.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kümpel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-P.</given-names>
            <surname>Töberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Hassouna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cimiano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          ,
          <article-title>Towards a Knowledge Engineering Methodology for Flexible Robot Manipulation in Everyday Tasks</article-title>
          , in: International Workshop on
          <article-title>Actionable Knowledge Representation and Reasoning for Robots (AKR3), Heraklion</article-title>
          , Crete, Greece,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Beßler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Porzel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pomarlan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vyas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Höfner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Malaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <article-title>Foundations of the Socio-physical Model of Activities (SOMA) for Autonomous Robotic Agents</article-title>
          ,
          <source>in: Formal Ontology in Information Systems</source>
          , volume
          <volume>344</volume>
          <source>of Frontiers in Artificial Intelligence and Applications</source>
          , IOS Press, Amsterdam,
          <year>2022</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>174</lpage>
          . URL: https://ebooks.iospress.nl/doi/10.3233/FAIA210379. arXiv:2011.11972.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>WordNet: A Lexical Database for English</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>38</volume>
          (
          <year>1995</year>
          )
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          . doi:10.1145/219717.219748.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Schuler</surname>
          </string-name>
          ,
          <article-title>VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon</article-title>
          ,
          <source>Ph.D. thesis</source>
          , University of Pennsylvania,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Dooley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Griffiths</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Gosal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Buttigieg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hoehndorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Schriml</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. S. L.</given-names>
            <surname>Brinkman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. W. L.</given-names>
            <surname>Hsiao</surname>
          </string-name>
          ,
          <article-title>FoodOn: A harmonized food ontology to increase global food traceability, quality control and data integration</article-title>
          ,
          <source>npj Sci Food</source>
          <volume>2</volume>
          (
          <year>2018</year>
          )
          <fpage>23</fpage>
          . doi:10.1038/s41538-018-0032-6.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Marín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ofli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hynes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Salvador</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Aytar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <article-title>Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>43</volume>
          (
          <year>2021</year>
          )
          <fpage>187</fpage>
          -
          <lpage>203</lpage>
          . doi:10.1109/TPAMI.2019.2927476.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pennington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <article-title>GloVe: Global Vectors for Word Representation</article-title>
          ,
          <source>in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          ,
          Association for Computational Linguistics
          , Doha, Qatar,
          <year>2014</year>
          , pp.
          <fpage>1532</fpage>
          -
          <lpage>1543</lpage>
          . doi:10.3115/v1/D14-1162.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Camacho-Collados</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Pilehvar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          ,
          <article-title>NASARI: A Novel Approach to a Semantically-Aware Representation of Items</article-title>
          ,
          <source>in: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Denver, CO,
          <year>2015</year>
          , pp.
          <fpage>567</fpage>
          -
          <lpage>577</lpage>
          . URL: http://aclweb.org/anthology/N/N15/N15-1059.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Speer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Havasi</surname>
          </string-name>
          ,
          <article-title>ConceptNet 5.5: An Open Multilingual Graph of General Knowledge</article-title>
          ,
          <source>AAAI</source>
          <volume>31</volume>
          (
          <year>2017</year>
          ). doi:10.1609/aaai.v31i1.11164.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Turvey</surname>
          </string-name>
          ,
          <article-title>Ecological foundations of cognition: Invariants of perception and action</article-title>
          , in: H. L. Pick, P. W. van den Broek, D. C. Knill (Eds.),
          <source>Cognition: Conceptual and Methodological Issues</source>
          , American Psychological Association, Washington,
          <year>1992</year>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>117</lpage>
          . doi:10.1037/10564-004.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Bornstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Gibson</surname>
          </string-name>
          ,
          <article-title>The Ecological Approach to Visual Perception</article-title>
          ,
          <source>The Journal of Aesthetics and Art Criticism</source>
          <volume>39</volume>
          (
          <year>1980</year>
          )
          <fpage>203</fpage>
          . doi:10.2307/429816.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>