<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3503161.3548112</article-id>
      <title-group>
        <article-title>An Integrated System for Interacting with Multi-Page Scholarly Documents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lorenzo Massai</string-name>
          <email>lorenzo.massai@unifi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simone Marinai</string-name>
          <email>simone.marinai@unifi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DINFO - University of Florence</institution>
          ,
          <addr-line>via S. Marta, 3, Firenze</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>36</volume>
      <issue>2024</issue>
      <fpage>70</fpage>
      <lpage>85</lpage>
      <abstract>
        <p>In this work we present a preliminary version of a comprehensive interface for supporting users in interacting with scholarly documents, enabling multi-layered exploration and offering deeper insights by integrating diverse features and contextual information. By bridging diverse information, our work pursues the identification, characterization, and linking of visual elements to semantic and context data, leveraging large language models for interoperability. Recent advances in retrieval augmented generation are also exploited to address some limitations of language models, allowing them to access latent information from document representations such as graph and vector embeddings. The system under development performs an analysis of input documents and enables the extraction of visual and semantic features, making them accessible in a comprehensive framework. The association of structural information with visual data allows formal analysis of documents and is exploited in our model to enhance visual extraction, performing a novel ontology-based constraint violation detection. The information extracted through this framework is semantically explorable, providing access to the document structure, which can be exploited in many applications like question answering and document understanding.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural language processing</kwd>
        <kwd>document layout analysis</kwd>
        <kwd>conversational agents</kwd>
        <kwd>retrieval augmented generation</kwd>
        <kwd>large language models</kwd>
        <kwd>question answering</kwd>
        <kwd>document understanding</kwd>
        <kwd>linked data</kwd>
        <kwd>scholarly document processing</kwd>
        <kwd>multi-modal feature extraction</kwd>
        <kwd>text mining</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In recent years the digital analysis of documents has gained attention due to the massive growth of online
media publishing and to the large availability of shared knowledge. Narrowing the field to the scientific
literature context, readers are able to understand document meaning by exploiting different kinds of
contextual information, like layout information and geometric properties of the elements which come
along with the text. Scientific literature is typically shared through unstructured media like
PDF or images, and getting automatic access to different kinds of knowledge requires making several
disjoint queries, linking data from different sources while keeping the original context at the same time.</p>
      <p>This paper aims at extending the research field of Visual Document Understanding (VDU) in the
scientific literature domain through the association of text semantics to visual features, merging them
in a shared structure which allows multi-modal exploration. The main challenges which are addressed
in this work can be found in the following areas.</p>
      <p>Semantic segmentation. The association of semantics to visual data is a key research problem in
computer vision, including tasks like object recognition, image captioning, and image segmentation.
In document analysis the goal is to understand contents by extracting geometric properties of visual
elements such as figures, tables, and text, and layout elements such as columns, footnotes, and titles, classifying
them into semantic categories. Most document understanding systems are limited to text blocks and
figure/text classification, lacking contextual information and domain-specific recognition (e.g., listings,
formulas, and chemical structures). The scientific literature, with its variety of visual and text data,
is particularly suitable for semantic segmentation and can be used to estimate associations between
representations. However, these representations are limited by their lack of interoperability and
moreover relying on images restricts analysis to one page. When considering the whole document,
the problem becomes much more complex since inter-page relations and semantic regions spanning
through multiple pages must be considered.</p>
      <p>Semantic integration. The integration of different document attributes can be pursued by merging
extracted information in shared structures, either for retrieving information about visual and layout
elements in the document, or to get context information about the publication such as the author and
the research field. The presence of a formal structure helps maintain and extend context; relations can
also be exploited to identify structural constraint violations like overlapping layout regions and to
allow category-based searches. To achieve such awareness multiple layers of the same data have to be
considered and an exhaustive semantic characterization of the entities is necessary.</p>
      <p>Layer interoperability. Recent trends for navigating different layers of information go towards
intelligent agents which are aware of the subject being asked about, its context, and the context of who
asks. Such agents are able to understand questions and to provide coherent answers spanning
different layers of information, adapting solutions and recommendations as the conversation evolves
and learning what to say also from the dialogue. Visual Question Answering systems’ dependency
on document images results in limited awareness of the whole document and a lack of any contextual
information or domain-specific recognition. However, in real scenarios documents are mostly composed
of multiple pages that should be processed altogether. One of the goals of this work is to link different
layers of information to visual media across the whole document and make them interoperable through
conversational agents.</p>
      <p>This paper presents original contributions that advance the fields of analysis and interaction with
scholarly documents. By integrating different media representations, this work pursues the enhancement
of document understanding and interaction, addressing specific challenges in document processing.
In particular, our main achievements can be identified in:
• building a comprehensive interface to allow multi-page interaction with scholarly documents;
• performing explainable association of layout information to visual and text data;
• enhancing detection of visual recognition anomalies exploiting semantic constraints;
• allowing multi-layer interoperability through large language models.</p>
      <p>These contributions enhance the accessibility, explainability, and interoperability of scholarly
document analysis, enabling semantic processing and navigation of academic papers.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>The review of the state-of-the-art focuses on three distinct and related research areas: document
segmentation, semantic linking, and visual question answering.</p>
      <sec id="sec-2-1">
        <title>2.1. Document segmentation</title>
        <p>
          Various approaches exist for identifying layout elements in visually structured documents, typically
targeting specific types like tables [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], formulas [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], bibliographic references [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and many others. Most
approaches rely on OCR and TEI-XML conversion; for multi-page documents, current methods like
HRDOC [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] use Mask R-CNN and language models to extract semantic regions and their relations.
        </p>
        <p>
          Regarding the conversion from PDF to a more structured format such as TEI-XML, Grobid [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] is
considered one of the best tools for extracting bibliography data from document images [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], allowing
multi-page analysis and being also capable of extracting other layout elements. Whole document
analysis increases the problem complexity; to this end some efforts have been made to extract relations
between pages in the form of triples [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
        <p>
          Several multi-purpose datasets exist in the area of scholarly document understanding, focusing on
layout analysis, text and visual elements extraction, and document structure identification. The largest
datasets have to deal with the multiplicity of different layouts which are present in scholarly articles,
addressing the complications of storing different data types into suitable structures. For this reason the
most extended sources of information rely on flexible data containers like XML and JSON formats, which
allow enough versatility for managing such a variety of descriptive data. Among recent datasets for
scholarly documents layout analysis Publaynet [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and DocBank [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] are considered the most relevant,
although they exhibit limited variability in contents and layout.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Semantic linking</title>
        <p>There are technical and pragmatic reasons to pursue abstract representations of knowledge with Linked
(Open) Data and ontologies. Using natural language processing and computer vision strategies to obtain
searchable content does not ensure the maintenance of the visual or logic structure of the original data,
which is essential for data context analysis and is necessary to perform structured queries and inference.
The definition of a structure capable of hosting data extracted from raw sources allows to keep the
context and easily extend it, exploiting relations which exist among data and that are not explicitly
declared.</p>
        <p>
          The most encouraging effort in the direction of a unified structure for modeling scholarly documents
can be found in the Semantic Publishing And Referencing (SPAR) ontologies project [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], which
includes several ontologies that are depicted in Figure 1. SPAR ontologies integrate models such as the
Document Components Ontology (DoCO) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] which, in turn, includes pattern ontologies, discourse
elements ontologies, bibliographic resources ontologies, citation ontologies [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], and many others
describing different aspects of scholarly documents.
        </p>
        <p>The Document Components Ontology is composed of a rhetorical and a structural layer: rhetorical
classes describe logical entities such as references, bibliographic references, captions, introduction,
materials, methods, results, related work and future work. The structural layer links rhetorical elements
with structural components like title, section titles, paragraphs, footnotes, tables, figures, captioned boxes,
figure boxes, lists, bibliographic reference list, front matter, body matter, back matter, chapters, sections,
bibliography, and abstract. Each class defines semantic relations with other classes, e.g. the class
Sentence includes DiscourseElement when it is found with the attribute inline.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Visual question answering</title>
        <p>
          Visual Question Answering (VQA) represents the main point of contact between the communities of
natural language processing and computer vision. Technologies such as conversational agents and
chatbots [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] are suitable for this purpose. These technologies can interface with neural networks and
ontologies [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], exploiting their functionalities like graph reasoning [15] for extending context. Question
answering systems can be integrated and trained to respond to questions on both visual and contextual
information. Retrieval Augmented Generation (RAG) [16] can further enhance these capabilities by
combining retrieval to access custom knowledge bases and provide more accurate answers.
        </p>
        <p>Toolformer [17] and KnowledGPT [18] integrate knowledge bases into Large Language Models (LLMs)
with program-of-thought prompting, allowing questions that require broader context knowledge. An
effective application of RAG to scholarly articles can be found in ChatDOC 1 and in PaperQA [19],
describing RAG agents that can answer scientific questions. Document images pose distinct challenges
due to their spatially organized elements and the combination of visual and textual information. To this
end LayoutLM [20] introduces 2D position embeddings, merging visual and text embeddings.</p>
        <p>The main limitation of current research in scholarly document VQA can be found in its reliance on
page images, restricting the analysis to single pages and disregarding semantic context. Some efforts
have been made in this direction [21]. VQA datasets supporting multi-page documents are hard to find;
among the most recent, comprehensive resources can be found in the MP-DocVQA dataset [22], the
GRAM dataset [23] and the DUDE dataset [24]. The DUDE dataset includes a wide range of document
types and sources, covering diverse topics and layouts, and allows full support for multi-page analysis,
although it has limited layout semantics. The lack of valuable multi-page datasets can also be addressed
through document generation [25]. The most comprehensive resource for scholarly document analysis,
to the best of our knowledge, is the Semantic Scholar Open Research Corpus (S2ORC) dataset [26].
S2ORC is composed of 8.1M open-access PDF-parsed papers across different academic disciplines and
offers full reproducibility.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. System architecture</title>
      <p>The proposed architecture (Figure 2) is aimed at extracting different layers of information from
multi-page scholarly articles exploiting state-of-the-art tools; future work is represented as dashed elements
and bracketed labels. To achieve a comprehensive characterization of different kinds of layout elements,
document data is extracted with vision, natural language, and semantic technologies. Information is
made accessible altogether through conversational agents based on language models.</p>
      <sec id="sec-3-1">
        <title>3.1. Document segmentation module</title>
        <p>To extract geometric information a segmentation strategy aimed at identifying layout categories
and their properties is presented. The PDF articles are converted to TEI-XML format through the
Grobid API2 in order to estimate the PDF structure as an XML tree. The resulting output contains
the recognized structures, which are title, doi, keywords, abstract, authors, author data, emails, tables,
figures, captions, formulas, dates, sections/subsections, acknowledgments, bibliographic entries and raw text
blocks. Positional information includes the page number and is present for most classes. Some structures
have a deeper characterization; for instance, author consolidation is made through the integration with
the CrossRef APIs.</p>
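        <p>The conversion step can be reproduced with a simple REST call. The following is a minimal sketch that assumes the public Grobid endpoint referenced in footnote 2 and an illustrative selection of teiCoordinates elements; it is not the exact configuration used by the system.</p>
        <preformat>
# Minimal sketch of the PDF-to-TEI conversion step (endpoint and parameters are assumptions).
import requests

GROBID_URL = "https://kermitt2-grobid.hf.space/api/processFulltextDocument"

def pdf_to_tei(pdf_path):
    """Send a multi-page PDF to Grobid and return the TEI-XML string."""
    with open(pdf_path, "rb") as pdf_file:
        response = requests.post(
            GROBID_URL,
            files={"input": pdf_file},
            # Ask Grobid to attach coordinates to selected elements (illustrative list).
            data={"teiCoordinates": ["figure", "biblStruct", "formula", "head", "s"]},
            timeout=120,
        )
    response.raise_for_status()
    return response.text
        </preformat>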
          <p>The output of Grobid processing is parsed through Beautiful Soup to extract the TEI tags and serialize
the information into key-value pairs (Figure 3). The coordinates of semantic elements are then used to
draw bounding boxes on the original document and to associate a label to each semantic region. The
hierarchy of the document is also extracted. The recognized layout elements that are provided with
geometric information are highlighted in the user interface through bounding boxes (Figure 4) and the
information that is not provided with spatial data is associated with them as linked pop-ups.</p>
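        <p>A minimal sketch of this serialization step is given below; it assumes Grobid coords attributes in the page,x,y,width,height format shown in Figure 3, while the actual key-value schema used by the system may differ.</p>
        <preformat>
# Sketch: serialize TEI elements carrying Grobid "coords" attributes into key-value pairs.
from bs4 import BeautifulSoup  # the "xml" parser below requires lxml to be installed

def tei_to_key_values(tei_xml):
    soup = BeautifulSoup(tei_xml, "xml")
    records = []
    for tag in soup.find_all(attrs={"coords": True}):
        # One element may carry several boxes, separated by ';'.
        for box in tag["coords"].split(";"):
            page, x, y, width, height = (float(v) for v in box.split(","))
            records.append({
                "label": tag.name,                      # e.g. figure, head, s
                "text": tag.get_text(" ", strip=True),
                "page": int(page),
                "bbox": (x, y, width, height),
            })
    return records
        </preformat>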
          <p>The features in development that are related to the document segmentation module are represented
in Figure 2 as dashed elements and bracketed labels.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Semantic linking module</title>
        <p>
          The semantic characterization assigned to the extracted information is derived from the Document
Components Ontology [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], a specialized ontology designed for modeling the layout elements of
scholarly and research documents. The semantic network provided by DoCO is imported into the
system, enabling a structured and standardized representation of document components. To ensure
compatibility, a detailed mapping process is performed between the Grobid XML tags used for document
parsing and the corresponding DoCO classes. This mapping is carried out by aligning the typical
organization and content structure of a research article, ensuring semantic coherence and consistency
across the extracted data. As detailed in Section 5 the semantic characterization of layout elements is
leveraged to enhance visual recognition, exploiting the relations defined among the ontology classes to
detect unfounded overlaps.
1https://chatdoc.com/
2https://kermitt2-grobid.hf.space/</p>
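        <p>As an illustration, such a mapping can be expressed as a simple lookup table; the entries below are examples using DoCO classes mentioned in this paper and do not reproduce the complete mapping used by the system.</p>
        <preformat>
# Illustrative Grobid TEI tag -&gt; DoCO class mapping (partial, example entries only).
GROBID_TO_DOCO = {
    "abstract": "doco:Abstract",
    "figure":   "doco:Figure",
    "table":    "doco:Table",
    "formula":  "doco:Formula",
    "head":     "doco:SectionTitle",
    "note":     "doco:Footnote",
    "p":        "doco:Paragraph",
}

def doco_class(tei_tag):
    """Return the DoCO class mapped to a Grobid TEI tag, or None if it is unmapped."""
    return GROBID_TO_DOCO.get(tei_tag)
        </preformat>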
        <p>The features in development that are related to the semantic linking module are represented in
Figure 2 as dashed elements and bracketed labels.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Question answering module</title>
        <p>The question answering module is managed exploiting LLMs, specifically the Llama3 model through the
Ollama Python API. This model has been chosen because of its ease of installation and integration with
custom scripts and external resources. The question answering module is designed to enable the LLM
to access the serialized output of Grobid, which is stored in a shared memory (Figure 3). The results
obtained with Llama3 are excellent, ensuring an adequate understanding of the questions given the
resources provided, and of cases where such resources are missing. The response is fairly fast, even though the local installation
does not have access to significant computational resources.</p>
        <p>The user question is augmented and proposed to the LLM in the form:</p>
        <p>"Given that: log_data, question"
where log_data represents the output of the modules described in Sections 3.1 and 3.2 and question
is the query input by the user through the user interface. The system context is given to the LLM as:
"The questions will be about a scholarly article from which some data has been extracted in structured
form and given as context."</p>
        <p>Figure 3 shows an example TEI-XML excerpt for an acknowledgements section, with Grobid coordinates, together with the key-value pairs extracted from it.</p>
        <p>The LLM context length is restrictive both for the output of the Grobid and DoCO modules and for the extracted key-value pairs.
To address the LLM context length issues and to limit the context to the part that is most pertinent to
the question, the user interface described in Section 4 allows the user to provide a classification of the
questions choosing any number of labels among: Article_title, Author, Abstract, Caption, Caption_Figure,
Figure, Table, Formula, Section, Link, Note, Acknowledgments, and Reference. These classes correspond to
Grobid-extracted TEI-XML tags, which are mapped to the DoCO ontology entities. The labels provided
by the user are exploited to split the context to be given to the LLM, retaining only the portions that
constitute the object of the classification.</p>
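        <p>A minimal sketch of this step is shown below, assuming the ollama Python package and a key-value dictionary keyed by the labels listed above; the exact prompt assembly used by the system may differ.</p>
        <preformat>
# Sketch: label-filtered context + augmented question sent to Llama3 via Ollama (assumed API usage).
import ollama

SYSTEM_CONTEXT = ("The questions will be about a scholarly article from which some data "
                  "has been extracted in structured form and given as context.")

def answer(question, key_values, selected_labels):
    # Retain only the portions of the extracted data that match the user-chosen labels.
    log_data = {k: v for k, v in key_values.items() if k in selected_labels}
    prompt = f"Given that: {log_data}, {question}"
    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "system", "content": SYSTEM_CONTEXT},
                  {"role": "user", "content": prompt}],
    )
    return reply["message"]["content"]
        </preformat>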
        <p>The features in development that are related to the LLM module are represented in Figure 2 as dashed
elements and bracketed labels.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. User interface</title>
      <p>The user interface (Figure 4) is composed of five web pages interacting with the Python server that
routes the user choices. Through this interface the user is able to upload a PDF document (Upload
page) and to process it (PDF processor), extracting information which is exploited by the LLM for
accessing information. Upon PDF processing, the output, namely the document augmented with bounding
boxes for the classes described in Section 3.1, is shown to the user. The process of drawing
bounding boxes is managed by generating a separate PDF layer for each class, each layer being assigned
distinct colors, and then overlapping these layers onto the original input PDF to visually represent
the annotations. The whole process is completed in a variable amount of time, mainly depending on
the input PDF length and network capabilities, since Grobid is used as a network service. Processing
a 10- to 20-page PDF takes a few seconds, while longer papers may require more than 10 seconds. The
serialized information is used as context for the LLM, which is included in the interface to facilitate
user interaction and exploration of the system’s functionalities. The LLM computation time for each
question varies based on local GPU capabilities, generally taking a few seconds.</p>
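      <p>The overlay step can be sketched as follows. PyMuPDF and the per-class colors are assumptions made for illustration (the system generates one layer per class and overlaps the layers, while the sketch simply draws colored rectangles in place); the records follow the key-value format of Section 3.1.</p>
      <preformat>
# Sketch: draw colored bounding boxes over the original PDF (library and palette are assumptions).
import fitz  # PyMuPDF

CLASS_COLORS = {"figure": (1, 0, 0), "head": (0, 0, 1), "s": (0, 0.6, 0)}  # illustrative palette

def annotate_pdf(pdf_path, records, out_path):
    doc = fitz.open(pdf_path)
    for rec in records:
        x, y, width, height = rec["bbox"]
        page = doc[rec["page"] - 1]          # Grobid page numbers are 1-based
        rect = fitz.Rect(x, y, x + width, y + height)
        page.draw_rect(rect, color=CLASS_COLORS.get(rec["label"], (0.5, 0.5, 0.5)), width=0.8)
    doc.save(out_path)
      </preformat>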
      <p>Since LLMs have context length issues, the context of the question is chosen by the user by filtering
on the question topic, which can be any number of layout classes, as detailed in Section 3.3. The interface
includes a graphical preview of data, which consists of PDF images with bounding boxes overlaying the
layout elements associated with coordinates in the Grobid output, each provided with a layout element
label. In addition, informative pop-ups containing all data retrieved by data processing are present.
The user interface also includes a specific perspective (Overlap violations) that is designed to outline
the semantic integration described in Section 5. The purpose of this view is to highlight the layout
elements whose overlaps violate the imported ontology constraints.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Constraint violations analysis</title>
      <p>To understand how the relations defined in an ontology can be applied to visually extracted classes
and how they can improve the classification of layout elements, we analyze the interactions among
the DoCO ontology classes3. The idea is to exploit the ontology relations that occur between layout
elements to check the admissibility of geometric overlaps.</p>
      <p>Since the assertions defined in the ontology can involve objects lacking spatial characterization in the
Grobid-extracted counterpart, it is essential to identify the relations among objects with coordinates.
Then, a geometric notion for each relation between class instances in the format (page, x, y,
width, height) has to be defined. Afterwards, ontology constraints can be applied to detect visual
recognition errors like overlap errors (Figure 5).</p>
      <p>We distinguish overlaps as geometric overlaps and semantic overlaps, since the former can be admissible,
while the latter are most likely recognition errors. We need to determine whether an overlap is admissible
(e.g., a Name object overlapping with an Author object is admissible, whereas a Title object overlapping
with a Figure object is not) based on constraints defined in a formal model. To allow constraint
verification the DoCO ontology relations are exploited; to express inadmissible layout element overlaps
the owl:disjointWith relation is considered.
3https://sparontologies.github.io/doco/current/doco.html#d4e145</p>
      <p>To formally determine which two-dimensional elements overlap in a context where we have
coordinates defining their position on a page, we can treat the elements as rectangles defined by the following
properties:
• Page: the page number (if two elements are on different pages, they cannot overlap)
• x, y: the coordinates of the top-left corner of the rectangle
• width, height: the width and height of the rectangle
Overlap Criterion</p>
      <p>Two elements overlap if and only if their rectangles intersect in a two-dimensional space. Formally,
given two rectangles defined by:
1. Rectangle A:
a) x_A, y_A (coordinates of the top-left corner)
b) width_A, height_A
2. Rectangle B:
a) x_B, y_B (coordinates of the top-left corner)
b) width_B, height_B</p>
      <p>To check for overlap, we need to verify whether there is no separation between the two rectangles in
both the horizontal and vertical dimensions.</p>
      <p>Conditions for Non-Overlap
1. The rectangles do not overlap if one is entirely to the right of the other:
x_A + width_A ≤ x_B or x_B + width_B ≤ x_A
2. The rectangles do not overlap if one is entirely below the other:
y_A + height_A ≤ y_B or y_B + height_B ≤ y_A</p>
      <p>Then, two rectangles A and B overlap if none of the above conditions are true. Formally, they overlap
if all of the following conditions hold:
x_A &lt; x_B + width_B,
y_A &lt; y_B + height_B,
x_B &lt; x_A + width_A,
y_B &lt; y_A + height_A</p>
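      <p>The criterion above translates directly into code; the sketch below assumes elements represented with the (page, x, y, width, height) properties listed earlier.</p>
      <preformat>
# Overlap test for two layout elements given as dicts with page, x, y, width, height.
def rectangles_overlap(a, b):
    if a["page"] != b["page"]:          # elements on different pages never overlap
        return False
    return (a["x"] &lt; b["x"] + b["width"] and b["x"] &lt; a["x"] + a["width"] and
            a["y"] &lt; b["y"] + b["height"] and b["y"] &lt; a["y"] + a["height"])
      </preformat>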
      <p>To identify the overlapping errors we focus on the rectangles which overlap and classify each overlap
as admissible or not admissible, marking as not admissible an overlap generated by classes which are
disjoint in the ontology general axioms4 and that are reported in Table 1. The Overlap violations
view of the user interface described in Section 4 helps to highlight the bounding boxes which are
associated with classes that are not allowed to overlap (Figure 6).
The groups of disjoint classes reported in Table 1 are:
• back matter, body matter, captioned box, chapter, complex run-in quotation, footnote, formula, formula box, front matter, list, part, section, table
• abstract, afterword, appendix, colophon, foreword, glossary, index, list of figures, list of tables, preface, table of contents
• label, paragraph, subtitle, title
• list of authors, list of contributors, list of organizations
• sentence, simple run-in quotation, text chunk</p>
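      <p>A possible way to collect these disjointness constraints is sketched below, assuming a local copy of the DoCO ontology loaded with rdflib; both pairwise owl:disjointWith assertions and owl:AllDisjointClasses general axioms are considered.</p>
      <preformat>
# Sketch: gather pairs of DoCO classes that must not overlap (assumes a local file doco.xml).
from itertools import combinations
from rdflib import Graph, RDF, OWL
from rdflib.collection import Collection

g = Graph()
g.parse("doco.xml", format="xml")        # local copy of the DoCO ontology (assumption)

disjoint_pairs = set()

# Pairwise owl:disjointWith assertions.
for a, b in g.subject_objects(OWL.disjointWith):
    disjoint_pairs.add(frozenset((a, b)))

# General axioms: owl:AllDisjointClasses nodes holding an owl:members RDF list.
for axiom in g.subjects(RDF.type, OWL.AllDisjointClasses):
    members = Collection(g, g.value(axiom, OWL.members))
    for a, b in combinations(members, 2):
        disjoint_pairs.add(frozenset((a, b)))

def inadmissible_overlap(class_a, class_b):
    """True if two layout classes are declared disjoint, i.e. their overlap is a violation."""
    return frozenset((class_a, class_b)) in disjoint_pairs
      </preformat>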
    </sec>
    <sec id="sec-6">
      <title>6. Future work</title>
      <p>Future directions involve leveraging LaTeX source attributes, the analysis of more relations and
ontologies, and a broader employment of the LLM for context length optimization.</p>
      <p>Currently, the system allows the exploration of visual data which is extracted through the
segmentation module. More modules can be linked for extending the knowledge associated with explorable
elements, such as the LaTeX representation of the document [27]. Associating different
representations of the document elements would also enable automatic construction of class-specific datasets,
e.g. formulas and chemical structures datasets. Moreover, the custom-made user interface described in
Section 4 allows code instrumentation, enabling comparison with state-of-the-art systems with similar
purposes such as ChatDOC and Amazon Textract.
4https://sparontologies.github.io/doco/current/doco.html#generalaxioms</p>
      <p>The use of ontology relations can be extended to identify more than overlap
errors. To this end, parent relations can be exploited to detect misclassifications and elements missing
paired classes (e.g., figures and captions). In addition, the presence of the ontology layer enables the
possibility of extending the present structure with broader context ontologies [28] and exploiting
reasoning capabilities to expand actual relations with the inferable ones.</p>
      <p>The main limitation of the LLM module lies in its reliance on user classification of the query, aimed
at reducing context length. The same result is achievable through unsupervised classification of the
user query, which can be delegated to a dedicated LLM module. It is also worth noting that the LLM
performance would improve by employing language models with more parameters.</p>
      <p>Current objectives include an assessment on the Docbank dataset, which contains numerous layout
classes and overlaps such as caption over figure, list over equation, and section over author, among others,
an excerpt of which is presented in Figure 7. Note that the present analysis on the Docbank
dataset does not take into account semantic characterization, thus including some overlapping layout
elements that are admissible (e.g., equation over list).</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions</title>
      <p>This paper extends the scholarly document understanding and document question answering research
fields, pursuing the association of semantics and context to visual features and integrating them in a
comprehensive interface which allows multi-layer exploration via LLM and interactive visualization.
Linking semantic information to documents is challenging from a research perspective: most of the
solutions reviewed in the state-of-the-art exhibit limited awareness of the described domain, considering
only basic relations between text chunks. We leverage the Document Components Ontology focusing
on semantic relations among layout elements to detect a specific kind of visual recognition error,
namely overlap errors, paving the way for more sophisticated analysis of layout element interactions.
The proposed approach is based on the use of disjointness relations that may exist between overlapping
layout elements. This relation is analyzed and interpreted as an indicator of potential recognition errors,
providing a systematic way to identify and address inconsistencies in the detected layout structure. By
exploiting this property, our method improves the accuracy and reliability of the recognition process.
In addition, we exploit LLMs in our framework to enhance the accessibility of diverse information
which is not directly available from data, enabling navigation of different kinds of information from an
integrated interface.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This research has been partially funded by CAI4DSA5 actions (Collaborative Explainable neuro-symbolic
AI for Decision Support Assistant), of the FAIR national project on artificial intelligence, PE 1 PNRR
(https://fondazione-fair.it/).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gemelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vivoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marinai</surname>
          </string-name>
          ,
          <article-title>Graph neural networks and representation embedding for table extraction in pdf documents</article-title>
          ,
          <source>in: 2022 26th International Conference on Pattern Recognition (ICPR)</source>
          , IEEE,
          <year>2022</year>
          . URL: http://dx.doi.org/10.1109/ICPR56361.
          <year>2022</year>
          .
          <volume>9956590</volume>
          . doi:
          <volume>10</volume>
          .1109/icpr56361.
          <year>2022</year>
          .
          <volume>9956590</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Do</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Ngo</surname>
          </string-name>
          ,
          <article-title>A robust framework for mathematical formula detection</article-title>
          ,
          <source>in: International Conference on Multimedia Analysis and Pattern Recognition</source>
          ,
          <string-name>
            <surname>MAPR</surname>
          </string-name>
          <year>2021</year>
          , Hanoi, Vietnam,
          <source>October 15-16</source>
          ,
          <year>2021</year>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . URL: https://doi.org/10.1109/ MAPR53640.
          <year>2021</year>
          .
          <volume>9585197</volume>
          . doi:
          <volume>10</volume>
          .1109/MAPR53640.
          <year>2021</year>
          .
          <volume>9585197</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lopez</surname>
          </string-name>
          , Grobid, https://github.com/kermitt2/grobid, 2008-
          <fpage>2024</fpage>
          . swh:1:dir:dab86b296e3c3216e2241968f0d63b68e8209d3c.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ma</surname>
          </string-name>
          , J. Du,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , J. Zhang,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , C. Liu,
          <article-title>Hrdoc: Dataset and baseline method toward hierarchical reconstruction of document structures</article-title>
          , in: B.
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Chen</surname>
          </string-name>
          , J. Neville (Eds.),
          <source>Thirty-Seventh AAAI Conference on Artificial Intelligence</source>
          ,
          <source>AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI</source>
          <year>2023</year>
          , Washington, DC, USA, February 7-
          <issue>14</issue>
          ,
          <year>2023</year>
          , AAAI Press,
          <year>2023</year>
          , pp.
          <fpage>1870</fpage>
          -
          <lpage>1877</lpage>
          . doi:
          <volume>10</volume>
          .1609/AAAI.V37I2.25277.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Tkaczyk</surname>
          </string-name>
          , A. Collins,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sheridan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Beel</surname>
          </string-name>
          ,
          <article-title>Evaluation and comparison of open source bibliographic reference parsers: A business use case</article-title>
          ,
          <source>arXiv preprint arXiv:1802</source>
          .
          <volume>01168</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arif Demirtaş</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Oral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Yasin</given-names>
            <surname>Akpınar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Deniz</surname>
          </string-name>
          ,
          <article-title>Semantic parsing of interpage relations</article-title>
          ,
          <source>in: 2022 26th International Conference on Pattern Recognition (ICPR)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1579</fpage>
          -
          <lpage>1585</lpage>
          . doi:
          <volume>10</volume>
          . 1109/ICPR56361.
          <year>2022</year>
          .
          <volume>9956546</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jimeno-Yepes</surname>
          </string-name>
          ,
          <article-title>Publaynet: Largest dataset ever for document layout analysis</article-title>
          ,
          <source>in: 2019 International Conference on Document Analysis and Recognition</source>
          ,
          <string-name>
            <surname>ICDAR</surname>
          </string-name>
          <year>2019</year>
          , Sydney, Australia,
          <source>September 20-25</source>
          ,
          <year>2019</year>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1015</fpage>
          -
          <lpage>1022</lpage>
          . URL: https://doi.org/10.1109/ICDAR.
          <year>2019</year>
          .
          <volume>00166</volume>
          . doi:
          <volume>10</volume>
          .1109/ICDAR.
          <year>2019</year>
          .
          <volume>00166</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Docbank: A benchmark dataset for document layout analysis</article-title>
          ,
          <source>arXiv preprint arXiv:2006</source>
          .
          <volume>01038</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Peroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shotton</surname>
          </string-name>
          ,
          <article-title>The spar ontologies</article-title>
          ,
          <source>in: The Semantic Web-ISWC</source>
          <year>2018</year>
          : 17th International Semantic Web Conference, Monterey, CA, USA, October 8-
          <issue>12</issue>
          ,
          <year>2018</year>
          , Proceedings,
          <source>Part II 17</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>119</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Persiani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Daquino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Peroni</surname>
          </string-name>
          ,
          <article-title>A programming interface for creating data according to the spar ontologies and the opencitations data model</article-title>
          ,
          <source>in: European Semantic Web Conference</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>322</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Peroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pettifer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Shotton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vitali</surname>
          </string-name>
          ,
          <article-title>The document components ontology (doco</article-title>
          ),
          <source>Semantic Web</source>
          <volume>7</volume>
          (
          <year>2016</year>
          )
          <fpage>167</fpage>
          -
          <lpage>181</lpage>
          . URL: https://doi.org/10.3233/SW-150177. doi:
          <volume>10</volume>
          .3233/ SW-150177.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Peroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shotton</surname>
          </string-name>
          ,
          <article-title>Fabio and cito: ontologies for describing bibliographic resources and citations</article-title>
          ,
          <source>Journal of Web Semantics</source>
          <volume>17</volume>
          (
          <year>2012</year>
          )
          <fpage>33</fpage>
          -
          <lpage>43</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Swathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Akshay</surname>
          </string-name>
          ,
          <article-title>Natural language query formalization to sparql for querying knowledge bases using rasa</article-title>
          ,
          <source>Progress in Artificial Intelligence</source>
          <volume>11</volume>
          (
          <year>2022</year>
          )
          <fpage>193</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Massai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nesi</surname>
          </string-name>
          , G. Pantaleo,
          <article-title>Paval: A location-aware virtual personal assistant for retrieving geolocated points of interest and location-based services</article-title>
          , Engineering Applications of Artificial Intelligence.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>