<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Model for a Scholarly Semantic Annotation Platform in Visual Heritage: A Case Study Using the Murten Panorama</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tsz Kin Chau</string-name>
          <email>tszkin.chau@epfl.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Jaquet</string-name>
          <email>daniel.jaquet@epfl.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sarah Kenderdine</string-name>
          <email>sarah.kenderdine@epfl.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Laboratory for Experimental Museology</institution>
          ,
          <addr-line>EPFL</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Historically, the ability to reproduce visual works in print for comparison and detailed illustration has been crucial for art historical research and its dissemination. While this has always been possible, the study of big monolithic visual heritage can be greatly streamlined by an interactive digital research environment. The goal of our research is to develop an annotation platform that leverages linked open data to facilitate a thorough and scholarly description of big monolithic visual heritage. We will deploy our platform to craft a scholarly edition of the Murten Panorama (1894), offering a high-quality, well-provenanced knowledge graph to both the public and scholars, giving them a window into the historical context of the creation of the 19th c. panoramic masterpiece.</p>
      </abstract>
      <kwd-group>
        <kwd>Semantic Web Applications for Cultural Heritage</kwd>
        <kwd>Virtual Research Environment</kwd>
        <kwd>Semantic Annotation</kwd>
        <kwd>Panorama</kwd>
        <kwd>Cultural Heritage</kwd>
        <kwd>Digital Art History</kwd>
        <kwd>Digital history</kwd>
        <kwd>Ontology</kwd>
        <kwd>CIDOC-CRM</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The scholarly semantic annotation platform proposed in this paper is conceived for, though it is not
limited to, the study of big monolithic visual heritage. There are two dimensions to the concept of
big monolithic visual heritage. Firstly, big in extent, either in physical size, exemplified by painted
panoramas or mural paintings, or in digital size, as seen in macroscopic and microscopic images.
Secondly, big in density, signifying visual heritage with rich composition, motifs, decorations, cultural
references, and localized features or narratives, making it a complex and composite image. These two
facets of “big” are interconnected. Certain objects might be small in size but contain dense features,
requiring gigapixel imaging to capture their content fully, thus rendering them digitally big.</p>
      <p>Historically, the ability to reproduce visual works in print for comparison and detailed illustration
has been crucial for art historical research and its dissemination. While this has always been possible,
the study of big monolithic visual heritage can be greatly streamlined by an interactive digital research
environment. Recent advances in gigapixel imaging and the International Image Interoperability
Framework (IIIF) allow researchers to pan and zoom into large images, examining multiple layers of
X-ray and RGB data in real-time. This enables close examination and comparison with other visual
and textual sources. Furthermore, linked open data and ontologies, such as CIDOC-CRM, provide
fine-grained descriptions of the knowledge created during the study of visual heritage objects, facilitating
the documentation of processes, theories, and citations that are employed in scholarly research.</p>
      <p>The goal of our research is to develop an annotation platform that leverages linked open data
to facilitate a thorough and scholarly description of big monolithic visual heritage. There are two
distinctive features in our method. First, we dive deep into the visual material, isolating and describing
local features (also referred to as Points of Interest (POIs)), making annotation a first-class citizen in
our research. Second, distinct from mainstream annotation tools, we also focus on data trustworthiness
by documenting the scholarly process involved in an annotation.</p>
      <p>This paper presents parts of our ongoing research, focusing on the development of the data model
for our annotation platform. We begin by introducing the concept of our scholarly semantic annotation
platform, followed by a discussion on the formalization of our annotation data model. This paper
contributes the following: 1) an outline to leverage semantic web technology to enable scholarly
annotation and analysis in visual heritage, and 2) a proposal for the reuse and further development of
existing visual interpretation ontologies to create a standards-compliant ontological data model that
drives a complex knowledge graph management platform application.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Background: Digitising and Augmenting the Panorama of the Battle of Murten (DIAGRAM)</title>
      <p>The visual heritage item at the centre of our research, the Murten Panorama (1894) by Louis Braun
(1836-1916), is an illustrative example of visual heritage characterized by both its vast scale and intricate
detail. Depicting the Battle of Murten on June 22, 1476, during the Burgundian Wars fought between
the old Swiss Confederacy and Charles the Bold, the Duke of Burgundy, the Murten Panorama stands
as both a Swiss national treasure and a visual heritage of international significance.</p>
      <p>
        Physically measuring approximately 10 x 100 meters, the panorama was digitized in 2023 at 1,000
dpi and fully processed in June 2024 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] under the initiative of the DIAGRAM project, creating a 1.6-terapixel
digital twin. To commemorate the 550th anniversary of the Battle of Murten and to advocate
for the panorama’s recognition as a UNESCO Memory of the World, a series of augmented immersive
installations and a scholarly edition of the Murten Panorama will be produced in 2024-2026.
      </p>
      <p>The content within the panorama is exceptionally diverse, encompassing a wide range of named
geographical locations, historical characters, heraldic representations, recognized historical events,
about 5,000 people (including 26 women and 1 child) in various costumes and arms, and 700 horses.
Additionally, it is rich in cultural-historical references, comprising an array of visual elements such as
weapons, flags, costumes, and narratives that can be traced in museum collections, illustrated chronicles,
and historical documents, providing an immense opportunity for linked data annotation.</p>
    </sec>
    <sec id="sec-5">
      <title>3. Related Work</title>
      <p>
        A platform conceptually similar to ours is Geovistory [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], a collaborative, web-based
research and data publication environment. It includes the Geovistory Toolbox, which allows researchers to
collect, curate, and evaluate data conforming to the methodologies of historical science. The toolbox
offers strong semantic support from CIDOC-CRM, FRBRoo, and a community of data profiles for
handling different types of source materials.
      </p>
      <p>
        While Geovistory primarily focuses on text-based materials, in the visual domain, open-source
annotation libraries that support IIIF images include Mirador Annotations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and Annotorious [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Both libraries comply with the Web Annotation Data Model [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and are widely adopted for online
annotation tools, crowdsourced annotation, and the creation of guided virtual exhibitions.
      </p>
      <p>
        Several annotation tools are specifically designed to document the deep scholarly context. Pliny was
developed to document the process of scholarly interpretation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Pundit offers a “Triple Composer”
feature, which allows annotators to describe content in a named graph [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Some tools are tailored for art historians’ analytical needs. HyperImage Virtual Research Environment
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and ARIES (ARt Image Exploration Space) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] provide a light table virtual environment for visual
studies, allowing manipulations such as rearranging, resizing, and comparing images to study visual
relationships. Tropy [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is a desktop-based, open-source personal research image management tool
that supports POI-based image annotation and metadata customization.
      </p>
      <p>
        During our review process, we concluded that while tools like Geovistory meet our scholarly needs,
their lack of visual annotation support renders them unsuitable for our purposes. Alternatively, we identified
ResearchSpace [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], an open-source, template-driven, and highly customizable knowledge graph
management platform built on the CIDOC-CRM framework and its extensions, as particularly well-suited
for the experimental development of our proposed annotation method. Notably, its focus on
authoring knowledge graphs through image annotation makes it an ideal fit for our use case.
      </p>
    </sec>
    <sec id="sec-6">
      <title>4. Development of the Annotation Data Model</title>
      <sec id="sec-6-0">
        <title>4.1. Goal</title>
        <p>To fully benefit from the ontological resources available on ResearchSpace, we aim to express our
annotations using classes and properties from CIDOC-CRM and its extensions. This approach requires
us to develop our data model apart from the widely adopted Web Annotation Data Model. Our model is
designed to encompass not only the content produced by an annotation but also the workflow involved.</p>
      </sec>
      <sec id="sec-6-1">
        <title>4.2. Review of existing ontologies</title>
        <p>
          For annotation-specific ontologies, the Web Annotation Data Model (WADM) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] is a standard-setting
data model powering countless applications. It is also integrated into important data models for cultural
heritage, such as the IIIF Presentation API and the Europeana Data Model (EDM) [12].
        </p>
        <p>For visual interpretation ontologies, VIR [13] was developed as a CIDOC-CRM extension for describing
the visual recognition process in visual heritage, grounded in visual recognition and communication
theory. Additionally, ICON [14] offers a fine-grained approach specifically tailored to represent Panofsky’s
three-level iconographical analysis.</p>
        <p>For representing scholarly assertions, particularly in historically-oriented humanities, models such
as Factoid [15], symogih.org [16], and STAR (Structured Assertion Record) [17] provide a common
pattern to structure fact/event extractions. HiCO [18] extends this assertion pattern by introducing an
interpretation criterion, which allows for the documentation of the interpretative activity involved in
the extraction process. Finally, CRMInf [19] provides a framework for describing argumentation and
inference activity.</p>
      </sec>
      <sec id="sec-6-2">
        <title>4.3. Characterizing the Visual Annotation Domain</title>
        <p>Our ontology design process is inspired by established ontology design methods including the SAMOD
methodology, ontology design patterns (ODP) [20], and the middle-out approach [21]. Our formalization
team is composed according to the roles described in the SAMOD methodology [22], namely Ontology
Engineer (OE) and Domain Expert (DE). The team consists of two members:
• Member 1: Serves as both ontology engineer and domain expert in art history, specializing in
hierarchical iconographical analysis.
• Member 2: Represents the domain expert in medieval history, specializing in military material
culture and martial arts.</p>
        <p>From our ground source studies, we have so far identified the following motivating scenarios (Table 1).</p>
        <sec id="sec-6-2-1">
          <title>4.3.1. Challenges</title>
          <p>A key challenge in formalizing our data model lies in accommodating the multi-domain nature of the
users on our proposed platform, while our modeling team is limited to two domain experts, which may
not be sufficient to generate all motivating scenarios. To address this limitation, we plan to develop the
annotation platform and data model with the community of historians through a series of workshop
sessions. The feedback and additional case studies gathered during these sessions will be used to
evaluate and refine our annotation data model.</p>
        </sec>
      </sec>
      <sec id="sec-6-3">
        <title>4.4. Result from the first iteration</title>
        <sec id="sec-6-3-1">
          <title>4.4.1. Case study</title>
          <p>The case study presented in this section is an example extracted from the motivating scenario:
image-to-image comparative annotation. Table 2 provides a detailed description of the scenario. This case was
selected for discussion because of its complexity and its significant contribution to the development of
our current annotation data model.</p>
          <p>Upon receiving an image that appears visually connected to the annotation
subject, the first step is to perform source criticism. This involves evaluating
the image’s temporal extent, creator, provenance, and other relevant details.</p>
          <p>Afterward, we describe the visual relationship between the image and the
annotation subject. Even if direct evidence of an influence cannot be found, we
assume these images are linked through a visual transmission process, similar
to what is seen in manuscript transmission, where prototypes or
representations are transmitted through both tangible and intangible pathways. Our goal
is to capture and describe the deep visual relationships between images in our
system, enabling future analysis of how the prototype evolves over time.</p>
          <p>A competency question (CQ) derived from this scenario is: “What are the changes over time for a
particular representation?”</p>
          <p>Developed through the collaboration of both the OE and DE, a sample annotation presented as a
knowledge graph mock-up is shown in Fig. 2. The three selected images/image segments include
an illustration from the Berner Schilling Chronicle (15th c., Mss.h.h.I.3, Burgerbibliothek Bern), an
illustration from the Werner Schodoler Chronicle (16th c., ZF 18, Aargauer Kantonsbibliothek), and
a segment from the Murten Panorama (19th c.). All three depict the same scene: a group of women
affiliated with the Burgundians being spared by Swiss soldiers during the Battle of Murten. These depictions
differ in detail, with the most notable variation being the portrayal of the women’s role. In the former
two, a woman is shown negotiating with the Swiss soldiers. In contrast, the 19th c. Murten Panorama
depicts her as passively protected by a Swiss soldier.</p>
        </sec>
        <sec id="sec-6-3-2">
          <title>4.4.2. Takeaway from the case study</title>
          <p>The first takeaway is the layers of knowledge produced in the image-to-image comparative annotation
process. The first layer of knowledge involves the arbitrary selection of an image region as the boundary
for a visual interpretation. We refer to this process as “framing”, and the selected region itself as a
“frame”, borrowing terminology from photography. Framing is a critical scholarly process. As illustrated
in the mock-up, the original frame provided by Member 2 (annotation #1) omitted a part of the image,
which could lead to significant changes in subsequent visual recognition.</p>
          <p>The second layer of knowledge produced involves associating a representation with a frame through
visual recognition. In the example, the primary representation is “Traveling women who identify
themselves as such are spared”. To deeply annotate the visual relationship, a frame can be subdivided
into subframes, each associated with a subframe representation that contributes to the overall meaning
of the parent frame’s representation. Such a methodology is widely adopted in iconographical analysis and
is formalized in VIR and ICON.</p>
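          <p>The recursive frame/subframe layering described above can be sketched in plain Python. This is a toy illustration of the layering only; the class, field names, and region values are ours, not part of VIR, ICON, or the platform:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """An annotator-selected image region with the representation recognized in it."""
    region: str              # illustrative region selector, e.g. an xywh string
    representation: str      # meaning assigned through visual recognition
    subframes: list = field(default_factory=list)

    def all_representations(self) -> list:
        """Parent representation plus every subframe's contribution, depth-first."""
        reps = [self.representation]
        for sub in self.subframes:
            reps.extend(sub.all_representations())
        return reps

# The case-study hierarchy (region selectors are placeholders):
parent = Frame(
    region="120,40,600,400",
    representation="Traveling women who identify themselves as such are spared",
    subframes=[Frame("150,60,200,180", "woman negotiating with Swiss soldier")],
)
```

          <p>Collecting <code>parent.all_representations()</code> yields the primary representation followed by each subframe representation that contributes to it, mirroring how a frame's meaning is composed from its subframes.</p>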
          <p>The third layer of knowledge produced is the arbitrary appellation assigned to a representation,
affected by the annotator’s domain knowledge and perspective.</p>
          <p>The second takeaway is scholarly assertion justification by referencing external sources or knowledge.
While some frames are spontaneously observable, other frames require external knowledge, often by
using another frame as a reference. This is exemplified by both the Berner Schilling Chronicle and the
Murten Panorama. Another example of external knowledge use is in naming the primary representation,
which is informed by the editor’s caption.</p>
        </sec>
        <sec id="sec-6-3-3">
          <title>4.4.3. Representing Visual Recognition and Hierarchical Analysis</title>
          <p>Our visual recognition model follows the CIDOC-CRM pattern of E24 Physical Human-Made Thing →
P65 shows visual item → E36 Visual Item. Initially, we modeled our case study according to the VIR
ontology. We chose VIR over the more fine-grained ICON ontology due to its domain-neutral design,
which better aligns with our use case.</p>
          <p>When applying the VIR ontology (Fig. 3) to our case study, two specific issues arise. First, VIR’s
formalization of the relationship between the physical visual heritage item (crm:E22) and its subregion
(vir:IC1) does not align with our concept of framing and the frame. Both vir:IC1 and its superclass,
crm:E25 Human-Made Feature, define the feature region as “purposely created by human activity”,
which does not capture the fact that a frame is an intentional product of the annotator rather than of the
object’s maker.</p>
          <p>The second issue concerns the relationship between vir:IC9 Representation and vir:IC10 Attribute.
In our case study, the main representation “Traveling women who identify themselves as such are
spared” is assigned to vir:IC9 while subframe representations, such as “woman negotiating with
Swiss soldier” are assigned to vir:IC10. In VIR, a representation inherently incorporates its subframe
representations. However, since the subframe representations differ across the three instances in our
case study, it becomes impossible to aggregate them under a common vir:IC9.</p>
        </sec>
        <sec id="sec-6-3-4">
          <title>4.4.4. Representing scholarly assertion</title>
          <p>At this stage, we have identified three classes of scholarly assertion which are summarized in Table 3,
along with their definitions and associated details.</p>
          <p>The three classes of scholarly assertion are formalized as subclasses of :Scholarly_Assertion, an
adaptation of HiCO’s hico:InterpretationAct (Fig. 5) utilizing crm:E13. hico:InterpretationAct
documents the provenance of statement-extracting (such as actor roles in historical events) scholarly
hermeneutical activities. It is lightweight yet adequate to capture holistically the various aspects of an
interpretation act, including sources from which the statement is extracted, citations, interpretation
methods, and relationships to other interpretation acts.</p>
          <p>Although crm:E13 and hico:InterpretationAct connect to asserted statements
diferently, with crm:E13 connecting directly to the statement triple through its properties and
hico:InterpretationAct connecting indirectly to the generated statements via a prov:Entity node,
the documentation properties of hico:InterpretationAct can be sufficiently represented using
crm:E13 (Table 4).</p>
          <p>Our modification to HiCO involves using a reification node crm:PC16 to connect sources and external
knowledge to their respective assertion criterion. This adjustment accommodates the edge case that a
scholarly assertion utilizes multiple sources or pieces of external knowledge. In our case study, to
represent the reference to another frame in a :Framing activity, we use crm:PC16→crm:P02_has_range
to link to the referenced :Frame and crm:PC16→crm:P16.1_mode_of_use to link to an interpretation criterion
vocabulary (crm:E55), such as “infer from another frame”. In an edge-case scenario, we want to further
support our assertion by citing a historical document that describes our observed frame as a single
unit. We would then need a second pair of “Source / External Knowledge” and “Assertion Criterion”, and only
through the reified crm:PC16 can such pairing be structurally ensured.</p>
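          <p>As a toy sketch of why the reification matters, the pairing can be modeled with plain Python dictionaries standing in for crm:PC16 nodes. All node labels and vocabulary terms below are illustrative, not taken from the project’s actual graph:</p>

```python
# Each dict stands in for one reified crm:PC16 node: it pairs one source /
# piece of external knowledge (P02_has_range) with the assertion criterion
# under which it was used (P16.1_mode_of_use).
framing_assertion = {
    "type": ":Framing",
    "pc16_nodes": [
        {"P02_has_range": ":Frame_BernerSchilling",       # reference frame
         "P16.1_mode_of_use": "infer from another frame"},
        # edge case: a second source supported by its own criterion
        {"P02_has_range": ":Doc_HistoricalDescription",
         "P16.1_mode_of_use": "infer from textual source"},
    ],
}

def sources_for_criterion(assertion: dict, criterion: str) -> list:
    """Return the sources paired with a given assertion criterion.
    Without the reified node, the source/criterion pairing would be lost."""
    return [n["P02_has_range"] for n in assertion["pc16_nodes"]
            if n["P16.1_mode_of_use"] == criterion]
```

          <p>Had the sources and criteria been attached to the assertion as two flat lists, nothing would record which criterion justified which source; the per-node pairing is exactly what the reified crm:PC16 preserves.</p>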
          <p>The scholarly assertion pattern plays a pivotal role in aggregating knowledge in multi-domain
annotation contexts. In our visual recognition component, we do not make any domain-specific
assumptions about the :Recognized_Visualitem. For instance, a sword could be the simplest
recognition or a pre-iconographical recognition that contributes to a complex visual symbol. In
our proposed formalization, domain knowledge can be expressed through vocabulary by using
:Visual_Recognition→crm:P2_has_type to characterize a :Recognized_Visualitem, for example,
as “pre-iconographical” (crm:E55). In this formalization, we aim to aggregate knowledge from different
domain annotations using the domain-neutral :Recognized_Visualitem node.</p>
          <p>We tested the model using SPARQL (appendix A) to answer the competency question: “What is the
extended recognition from other domains for a particular frame?” and obtained the expected results.</p>
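          <p>The aggregation behind this competency question can be illustrated with a toy triple set queried in plain Python, standing in for the SPARQL of appendix A; every identifier below is illustrative:</p>

```python
# Two recognition acts from different domains attach the same domain-neutral
# :Recognized_Visualitem (a sword) to two different frames.
triples = [
    (":Recognition_1", "crm:P2_has_type", "pre-iconographical"),
    (":Recognition_1", ":assigns", ":Sword_Visualitem"),
    (":Recognition_1", ":on_frame", ":Frame_A"),
    (":Recognition_2", "crm:P2_has_type", "military material culture"),
    (":Recognition_2", ":assigns", ":Sword_Visualitem"),
    (":Recognition_2", ":on_frame", ":Frame_B"),
]

def extended_recognitions(frame: str) -> list:
    """Recognitions from other frames/domains that share a recognized
    visual item with the given frame -- the aggregation the CQ asks about."""
    shared_items = {o for s, p, o in triples
                    if p == ":assigns" and (s, ":on_frame", frame) in triples}
    return sorted(s for s, p, o in triples
                  if p == ":assigns" and o in shared_items
                  and (s, ":on_frame", frame) not in triples)
```

          <p>Because both recognition acts converge on the same domain-neutral item, a query starting from one frame can reach recognitions contributed by the other domain.</p>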
        </sec>
      </sec>
      <sec id="sec-6-4">
        <title>4.5. Limitation and discussion</title>
        <p>Our proposed :Recognized_Visualitem serves as the aggregating node to connect the historical
instances of :Frame that embody it. Structurally, :Recognized_Visualitem should be mapped to
the level of crm:E89/frbroo:F1 (frbroo is now LRMoo [24]), which is defined as the common idea
and underlying prototype (crm:E89) that evolves over time (frbroo:F1), while crm:E36, a subclass of
crm:E73, is already realized as an identifiable immaterial item.</p>
        <p>Mapping :Recognized_Visualitem to frbroo:F1 may remove its visual quality and turn it into a
modality-neutral entity that can then serve as a point of cross-modal aggregation, covering, for example,
the textual mention of “Traveling women who identify themselves as such are spared”. Let us call this
class :Crossmodal_Recognized_Item.</p>
        <p>However, :Crossmodal_Recognized_Item requires a new property other than crm:P65 for
connecting a :Frame. One possible option would be :Frame→crm:P130_shows_features_of→frbroo:F1.
Yet, in CRM-base, crm:P130 refers to the generalization of the notions of “copy of” and
“similar to”, which might not be a perfect match. A more formal path would be
:Frame→crm:P128_carries→frbroo:F2→frbroo:R3_realises→frbroo:F1, together with a
shortcut property to bypass documenting the frbroo:F2 level, following the pattern from crm:P62.</p>
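        <p>The fuller path and its shortcut can be sketched as a simple materialization rule; this is a toy illustration of the design option discussed above, where the shortcut property name :realises_prototype is our hypothetical and all node labels are placeholders:</p>

```python
# Fuller path: :Frame -> crm:P128_carries -> frbroo:F2 -> frbroo:R3_realises -> frbroo:F1
triples = {
    (":Frame_1", "crm:P128_carries", ":F2_expression_1"),
    (":F2_expression_1", "frbroo:R3_realises", ":F1_prototype_1"),
}

def materialize_shortcut(ts: set) -> set:
    """Derive :Frame -> :realises_prototype -> F1 triples, bypassing the F2
    level (following the shortcut pattern exemplified by crm:P62)."""
    return {(s, ":realises_prototype", o2)
            for s, p, o in ts if p == "crm:P128_carries"
            for s2, p2, o2 in ts if s2 == o and p2 == "frbroo:R3_realises"}
```

        <p>The shortcut triple is derivable from the fuller path, so documenting the frbroo:F2 level remains optional while queries can still reach the prototype directly.</p>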
      </sec>
    </sec>
    <sec id="sec-7">
      <title>5. Conclusion and outlook</title>
      <p>We have demonstrated our motivation, design concept, and work-in-progress data model for a scholarly
semantic annotation platform. We believe that our proposed method will contribute to art-historical
research, GLAM data curation, and scholarly editing across various types of visual heritage.</p>
      <p>Looking ahead, we will continue to expand our use case examples through workshop sessions and
expand our annotation data model with additional features. In the next iteration, in particular, we will
explore how to enable argumentation among scholarly assertions, with the potential integration of
CiTO [25] or CRMInf. We will also explore the impact of AI, such as object detection, on our data model,
which could become a key feature in a human-in-the-loop, AI-assisted annotation environment.</p>
      <p>We are actively developing the annotation platform to deploy our proposed data model and designing
the corresponding workflows and user interface. Additionally, we have customized ResearchSpace’s
image annotation feature to support our recursive framing methodology (Fig. 6). Users of our system
can link digitized images from the vast IIIF resources available on the web.</p>
      <p>We will deploy our platform to craft a scholarly edition of the Murten Panorama, offering a high-quality,
well-provenanced knowledge graph to both the public and scholars, giving them a window into the
historical context of the creation of the 19th c. panoramic masterpiece.</p>
      <p>[11] D. Oldman, D. Tanase, Reshaping the Knowledge Graph by Connecting Researchers, Data and
Practices in ResearchSpace, in: D. Vrandečić, K. Bontcheva, M. C. Suárez-Figueroa, V. Presutti,
I. Celino, M. Sabou, L.-A. Kaffee, E. Simperl (Eds.), The Semantic Web – ISWC 2018, volume 11137
of Lecture Notes in Computer Science, Springer International Publishing, Cham, 2018, pp. 325–340.
doi:10.1007/978-3-030-00668-6_20.</p>
      <p>[12] M. Doerr, S. Gradmann, S. Hennicke, A. Isaac, C. Meghini, H. Van de Sompel, The Europeana Data
Model (EDM), 2010, pp. 10–15.</p>
      <p>[13] N. Carboni, L. de Luca, An Ontological Approach to the Description of Visual and Iconographical
Representations, Heritage 2 (2019) 1191–1210. doi:10.3390/heritage2020078.</p>
      <p>[14] B. Sartini, S. Baroncini, M. Van Erp, F. Tomasi, A. Gangemi, ICON: An Ontology for Comprehensive
Artistic Interpretations, Journal on Computing and Cultural Heritage 16 (2023) 1–38. doi:10.1145/3594724.</p>
      <p>[15] M. Pasin, J. Bradley, Factoid-based prosopography and computer ontologies: towards an integrated
approach, Digital Scholarship in the Humanities 30 (2015) 86–97. doi:10.1093/llc/fqt037.</p>
      <p>[16] F. Beretta, D. Ferhod, S. Gedzelman, P. Vernus, The SyMoGIH project: publishing and sharing
historical data on the semantic web, EPFL, Lausanne / UNIL, Lausanne, 2014, p. 469. URL:
https://shs.hal.science/halshs-01097399.</p>
      <p>[17] T. Andrews, The STructured Assertion Record (STAR) Model for Event-based Representation of
Historical Information, Mainz, Germany, 2023.</p>
      <p>[18] M. Daquino, F. Tomasi, Historical Context Ontology (HiCO): A Conceptual Model for Describing
Context Information of Cultural Heritage Objects, 2015. doi:10.1007/978-3-319-24129-6_37.</p>
      <p>[19] M. Doerr, C.-E. Ore, P. Fafalios, A. Kritsotaki, S. Stead, Definition of CRMinf: An Extension of
CIDOC-CRM to Support Argumentation, 2023.</p>
      <p>[20] A. Gangemi, V. Presutti, Ontology Design Patterns, in: S. Staab, R. Studer (Eds.), Handbook on
Ontologies, International Handbooks on Information Systems, Springer, Berlin, Heidelberg, 2009,
pp. 221–243. doi:10.1007/978-3-540-92673-3_10.</p>
      <p>[21] M. E. Ghosh, H. Naja, H. Abdulrab, M. Khalil, Towards a Middle-out Approach for Building Legal
Domain Reference Ontology, International Journal of Knowledge Engineering 2 (2016) 109–114.
doi:10.18178/ijke.2016.2.3.063.</p>
      <p>[22] S. Peroni, SAMOD: an agile methodology for the development of ontologies, 2016.
doi:10.6084/M9.FIGSHARE.3189769.</p>
      <p>[23] T. K. Chau, D. N. Jaquet, S. Kenderdine, Augmentation of the Panorama of the Battle of Murten
(1893) – Experimental Annotation System for the World's Largest Digital Image of a Single Object,
University of Zurich, 2024. doi:10.5281/zenodo.10731039.</p>
      <p>[24] C. Bekiari, M. Doerr, P. Le Boeuf, P. Riva, LRMoo: Object-oriented Definition and Mapping from
the IFLA Library Reference Model, 2024.</p>
      <p>[25] D. Shotton, CiTO, the Citation Typing Ontology, Journal of Biomedical Semantics 1 (2010) S6.
doi:10.1186/2041-1480-1-S1-S6.</p>
    </sec>
    <sec id="sec-8">
      <title>A. Online Resources</title>
      <p>The data model development repository is available via GitHub.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bourke</surname>
          </string-name>
          ,
          <source>The TERAPIXEL Panorama Project</source>
          ,
          <year>2024</year>
          . URL: https://www.epfl.ch/labs/emplus/projects/murten-panorama-digital-twin-scanning-project-the-making-of/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Alamercery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Beretta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.-J.</given-names>
            <surname>Favey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferhod</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Knecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Muck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Perraud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stebler</surname>
          </string-name>
          ,
          <article-title>Open Research Practices with the OntoME-Geovistory environment</article-title>
          ,
          <year>2023</year>
          . URL: https://shs.hal.science/halshs-04162294. doi:10.5281/zenodo.8107384.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] ProjectMirador, mirador-annotations,
          <year>2015</year>
          . URL: https://github.com/ProjectMirador/mirador-annotations, original-date: 2020-05-06T12:54:15Z.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Simon</surname>
          </string-name>
          , annotorious-openseadragon,
          <year>2013</year>
          . URL: https://github.com/annotorious/annotorious-openseadragon.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sanderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ciccarese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <source>Web Annotation Data Model</source>
          ,
          <year>2017</year>
          . URL: https://www.w3.org/TR/annotation-model/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Bradley</surname>
          </string-name>
          ,
          <article-title>Pliny: A model for digital support of scholarship</article-title>
          ,
          <source>Journal of Digital Information</source>
          <volume>9</volume>
          (
          <year>2008</year>
          ). URL: https://jodi-ojs-tdl.tdl.org/jodi/index.php/jodi/article/view/209, number: 1.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Grassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Morbidoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fonda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Piazza</surname>
          </string-name>
          ,
          <article-title>Pundit: augmenting web contents with semantics</article-title>
          ,
          <source>Literary and Linguistic Computing</source>
          <volume>28</volume>
          (
          <year>2013</year>
          )
          <fpage>640</fpage>
          -
          <lpage>659</lpage>
          . URL: https://academic.oup.com/dsh/article-lookup/doi/10.1093/llc/fqt060. doi:10.1093/llc/fqt060.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Loebel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-G.</given-names>
            <surname>Kuper</surname>
          </string-name>
          ,
          <article-title>HyperImage: Of Layers, Labels and Links</article-title>
          , Riga, Latvia,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Crissaf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Wood</given-names>
            <surname>Ruby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Deutch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>DuBois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-D.</given-names>
            <surname>Fekete</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Freire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <article-title>ARIES: Enabling Visual Exploration and Organization of Art Image Collections</article-title>
          ,
          <source>IEEE Computer Graphics and Applications</source>
          <volume>38</volume>
          (
          <year>2018</year>
          )
          <fpage>91</fpage>
          -
          <lpage>108</lpage>
          . URL: https://ieeexplore.ieee.org/document/8059795/. doi:10.1109/MCG.2017.377152546
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Robertson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mullen</surname>
          </string-name>
          ,
          <source>Tropy: A Tool for Research Photo Management</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Oldman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tanase</surname>
          </string-name>
          ,
          <article-title>Reshaping the Knowledge Graph by Connecting Researchers, Data and</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>