<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>2BFAIR: Framework for Automated FAIRness Assessment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Leonardo G. Azevedo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eduardo Caroli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Beatriz S. Corrêa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viviane T. da Silva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IBM Research</institution>
          ,
          <addr-line>Av. República do Chile, 330, 20031-170, Rio de Janeiro, RJ</addr-line>
          ,
          <country country="BR">Brazil</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>PUC-Rio, Rua Marquês de São Vicente</institution>
          ,
          <addr-line>225, 22451-900, Rio de Janeiro, RJ</addr-line>
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>0</volume>
      <fpage>1</fpage>
      <lpage>02</lpage>
      <abstract>
        <p>The FAIR Principles are guidelines to enhance the Findability, Accessibility, Interoperability, and Reusability of digital objects. Identifying how closely a digital object abides by the FAIR principles (i.e., computing its FAIRness) is a challenge tackled by several automated tools. However, no such tool fully supports the main requirements for automated assessment, primarily the customization of the FAIRness evaluation according to community needs. This work presents 2BFAIR, a framework for automated FAIRness assessment. As a framework, 2BFAIR provides points of flexibility that its users can customize to fit a community's specific needs for FAIRness evaluation. At the same time, 2BFAIR encapsulates complex logic common to a family of related issues required by any FAIRness evaluation into pieces of code that the user does not have to change to create a tool based on 2BFAIR. We provide 2BFAIR with a default implementation, i.e., a tool implemented using the 2BFAIR framework. We analyzed this tool against the state-of-the-practice tools to demonstrate the usefulness of 2BFAIR. 2BFAIR supports 87% of the requirements that automated tools for FAIRness assessment should meet, while other tools reach at most 74%. This makes 2BFAIR a good choice for implementing tools tailored to community needs.</p>
      </abstract>
      <kwd-group>
        <kwd>FAIR Principles</kwd>
        <kwd>FAIRness assessment</kwd>
        <kwd>FAIRness evaluation</kwd>
        <kwd>FAIRness</kwd>
        <kwd>Automated tools for FAIRness assessment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Enhancing knowledge discovery for human and computational agents is a challenge for data-intensive
sciences, involving accessing, integrating, and analyzing task-appropriate data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Wilkinson et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
proposed the FAIR principles, a set of 15 recommendations for improving the Findability, Accessibility,
Interoperability, and Reusability of digital resources [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The FAIR principles are intended to be
domain-independent, aim to facilitate the reuse of data by both humans and machines [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and are related to
several Semantic Web standards [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Research involving digital objects benefits from applying the
FAIR principles to ensure goals like transparency, reproducibility, and reusability.
      </p>
      <p>
        Several mechanisms aim to evaluate a digital object’s so-called ‘FAIRness level’ [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. FAIRness
corresponds to a value (e.g., a percentage) indicating how close a digital object is to abiding by the
FAIR principles. Such mechanisms include methods, processes, data maturity models, questionnaires,
and (semi-)automated tools. Manual mechanisms are essential to improve overall understanding and
appreciation of the research life cycle [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]; nevertheless, assessing FAIRness with them is time-consuming,
requires experience, carries difficulties when inspection is needed, and does not scale when considering
several digital objects [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. An automated tool is more appropriate to handle those issues.
      </p>
      <p>
        This work presents 2BFAIR, a framework for automated FAIRness assessment aiming to support
full customization of the evaluation according to community characteristics. Framework development
promotes the reuse of design and source code through its ability to support the generation of applications
directly related to a specific domain (i.e., a family of related problems) by reusing the framework's
code [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In this work, we propose the 2BFAIR framework to tackle the challenge of customizing
FAIRness assessments. 2BFAIR provides points of flexibility (i.e., hot spots) that its users can customize
to fit a community's specific needs for FAIRness evaluation. On the other hand, 2BFAIR encapsulates
complex logic common to a family of related issues required by any FAIRness evaluation into pieces of
code that the user does not have to change (i.e., frozen spots) to create a tool based on 2BFAIR [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>We provide 2BFAIR with a default implementation, i.e., the 2BFAIR code is available along with a
FAIRness evaluation tool developed using the 2BFAIR framework. When users clone the 2BFAIR GitHub
repository, they get the framework and the default implementation code base. This implementation
consists of a set of instantiated evaluators, a structured configuration, and an implementation of a
(meta)data collector. It aims to be as broad as possible so that users can customize community-specific
details or use it as an example to create their tool using the framework.</p>
      <p>Examples of 2BFAIR users are software developers and end users. Developers may use the framework
to create FAIRness evaluation tools. End users can use the default implementation to perform FAIRness
assessment of their digital objects. While developers have plenty of customization options by
working directly on the code, end users can still customize the assessment by adjusting configuration
parameters, which does not require much technical skill.</p>
      <p>We analyze this default implementation of 2BFAIR against the state-of-the-practice tools based
on a set of requirements elicited from the literature. Our goal is not to prove it is much better than
existing tools, although it slightly outperforms them, but to demonstrate 2BFAIR's capabilities and coverage
of the requirements. The results show that 2BFAIR is a good choice for executing FAIRness assessments
customized to user needs and as a base for creating new tools using its framework.</p>
      <p>The remainder of this work is divided as follows. Section 2 presents the related work. Section 3
presents the 2BFAIR framework. Section 4 presents the analysis of 2BFAIR, and compares it against
state-of-the-practice tools. Finally, Section 5 presents the conclusion and proposals for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>There are several automated tools for FAIRness evaluation. Automated tools automatically inspect
the digital object’s metadata and data representations to compute its FAIRness level. Examples of
inspections executed by some tools are: assessing whether the digital object's identifier is globally unique
by checking if the value of the identifier matches known GUID (Globally Unique Identifier) schemas
like URLs, IRIs, and DOIs; evaluating whether the identifier is persistent by checking if it matches a
persistence schema (e.g., PURL, W3ID, DOI, and ARK); and assessing the richness of metadata by searching
for metadata terms like creator, title, publication date, publisher, summary, and keywords.</p>
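      <p>The identifier and metadata inspections described above can be sketched as follows. This is an illustrative sketch, not code from any of the analyzed tools; the persistence prefixes and citation terms are example lists.</p>

```python
import re

# Example persistence schemas (PURL, W3ID, DOI, ARK); real tools may use
# different or larger lists.
PERSISTENCE_PATTERNS = [
    r"^https?://purl\.org/",        # PURL
    r"^https?://w3id\.org/",        # W3ID
    r"^(https?://doi\.org/|doi:)",  # DOI
    r"^ark:/",                      # ARK
]

# Example citation terms to search for in the metadata.
CITATION_TERMS = {"creator", "title", "publication_date",
                  "publisher", "summary", "keywords"}

def is_persistent_identifier(identifier: str) -> bool:
    """Check whether the identifier matches a known persistence schema."""
    return any(re.match(p, identifier) for p in PERSISTENCE_PATTERNS)

def metadata_richness(metadata: dict) -> float:
    """Fraction of the expected citation terms present in the metadata."""
    found = CITATION_TERMS & set(metadata)
    return len(found) / len(CITATION_TERMS)
```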
      <p>
        In a previous work [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], we conducted a systematic literature review (SLR) to discover automated
tools for FAIRness assessment in the literature. An SLR is a research method with steps to organize
the review methodically [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. We searched for tools in three digital libraries (Scopus1, IEEE Digital
Library2, and ACM Digital Library3). As a result, we found the following automated tools: F-UJI [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], FAIR
Evaluator [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and FAIR-Checker [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        In this work, we expand the analyzed tools by including the ones listed by FAIRassist.org.
FAIRassist.org [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is a catalog that collects and describes resources aimed at helping developers and
stakeholders make their digital objects FAIR. FAIR developers manually register their tools in FAIRassist.org.
Afterward, a FAIRassist.org curator reviews the registration and adds it to FAIRassist if accepted. So,
besides the automated tools we found in our literature review, FAIRassist.org lists4: FAIR Enough [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ],
FAIR EVA (Evaluator, Validator &amp; Advisor) [
        <xref ref-type="bibr" rid="ref18 ref8">18, 8</xref>
        ], O’FAIRe [
        <xref ref-type="bibr" rid="ref19 ref6">6, 19</xref>
        ], HowFAIRis [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], FOOPS! [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ],
FAIROs [23], CLARIN Metadata Curation Dashboard [24, 25], OpenAIRE Validator [26, 27], PresQT
(Preservation Quality Tool) [28, 29], and SciScore [30, 31].
      </p>
      <sec id="sec-2-1">
        <title>Notes</title>
        <p>1: https://www.scopus.com 2: https://ieeexplore.ieee.org 3: https://dl.acm.org/dl.cfm 4: Tools listed on March 17th, 2025.</p>
        <p>Since the tools have different characteristics and goals, we define inclusion and exclusion criteria to
analyze similar tools.</p>
        <p>As an inclusion criterion, we consider only the tools that execute the following activities: (i) Receive
a data digital object identifier (e.g., a URL or DOI); (ii) Resolve this identifier to an object (like a landing
page); (iii) Evaluate its FAIRness; and (iv) Return results, e.g., including FAIRness levels, metric and
test execution descriptions, and improvement recommendations. This general process fits different use
cases, and the user only passes the digital object identifier to execute the assessment.</p>
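        <p>The four activities of this inclusion criterion can be summarized in a minimal sketch. The helper functions resolve and evaluate_fairness are hypothetical stand-ins, not part of any of the analyzed tools.</p>

```python
# Hedged sketch of the four-step assessment process from the inclusion
# criterion; resolve() and evaluate_fairness() are placeholder implementations.
def resolve(identifier: str) -> dict:
    """(ii) Resolve the identifier to an object, e.g., a landing page."""
    return {"url": identifier, "metadata": {"title": "example dataset"}}

def evaluate_fairness(obj: dict) -> list:
    """(iii) Evaluate FAIRness; here, a single toy metadata check."""
    return [{"test": "has_title", "passed": "title" in obj["metadata"]}]

def assess(identifier: str) -> dict:
    """(i) Receive an identifier and (iv) return the evaluation results."""
    obj = resolve(identifier)
    return {"identifier": identifier, "results": evaluate_fairness(obj)}
```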
        <p>As exclusion criteria, we do not consider tools that meet at least one of the following characteristics:
(a) Do not provide free access to run; (b) Do not target data digital objects; (c) Use other tools to perform
the FAIRness assessment that we consider in our analysis or are discarded by any exclusion criteria.</p>
        <p>
          Considering these criteria, we do not analyze the following tools. (a) OpenAIRE Validator is not free
to use [27]. (b) HowFAIRis targets FAIR software. It evaluates software code available on GitHub
or GitLab [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. It does not evaluate data digital objects. (c) SciScore evaluates scientific articles
concerning transparency and reproducibility [30]. It does not evaluate the FAIRness of data digital
objects. (d) PresQT uses FAIR Evaluator to perform automated assessment [32]. FAIR Evaluator is
already considered in our analysis. (e) O’FAIRe evaluates ontologies available in the AgroPortal. It
considers the ontology representation and the portal characteristics to perform the assessment [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The
service is tied to the portal and cannot be used independently, i.e., the tool cannot be used following
the process defined in our inclusion criteria. (f) FOOPS! targets ontologies. It is a Web service that
receives as input an OWL ontology or SKOS thesaurus and returns its level of FAIRness [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. It does
not evaluate data digital objects. (g) FAIROs uses F-UJI and FOOPS! to perform the assessment. F-UJI is
already considered in our work, and FOOPS! is discarded.
        </p>
        <p>
          The application of the inclusion and exclusion criteria results in the following tools: F-UJI, FAIR Evaluator,
FAIR-Checker, FAIR Enough, and FAIR EVA. We analyze these tools considering the requirements
we elicited in the SLR we conducted in a previous work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The requirements
were elicited by reading the papers resulting from the SLR and the papers describing the tools. We
created a requirement for any mention of a desirable feature for an automated tool. The purpose of the
requirements was to guide the appraisal and development of tools that effectively and automatically
evaluate FAIRness. In the following list, we present the requirements, highlighting in bold the words
we use to reference them, along with the papers from which the requirements were identified.
• R1: The tool should be fully automated [33].
• R2: The tool should compute digital object FAIRness grade [
          <xref ref-type="bibr" rid="ref4">4, 34, 35, 36</xref>
          ].
• R3: The tool should be autonomous, i.e., it should work independently from a specific domain,
digital objects, or framework [36].
• R4: The tool should present which principles are evaluated [34].
• R5: The tool should specify the types of digital object it assesses (e.g., data objects, data
repositories, workflows, software) [
          <xref ref-type="bibr" rid="ref15 ref7">7, 15</xref>
          ].
• R6: The tool should be exposed as APIs (e.g., as RESTful services [37]) [33].
• R7: The tool should be available on the internet as a Web application [33, 35].
• R8: The tool should indicate its usage license [34].
        </p>
        <p>
          • R9: The tool should state its development stage [34].
• R10: The tool should be customizable according to the type of digital object and community [
          <xref ref-type="bibr" rid="ref4">4,
35</xref>
          ].
• R11: The tool should allow the user to supply authentication credentials [35].
• R12: The tool should provide a visual representation (e.g., a badge) of the FAIR assessment
results [
          <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
          ].
• R13: The tool should present the data lifecycle phases it supports [
          <xref ref-type="bibr" rid="ref15">15, 38</xref>
          ].
• R14: The tool should rely on FAIR-enabling services (e.g., FAIRsharing.org, identifiers.org) to
perform the assessment [
          <xref ref-type="bibr" rid="ref15 ref7">7, 15</xref>
          ].
• R15: The tool should offer guidance on how it is used (e.g., providing a user manual, help,
publications, or explanatory tips) [34].
• R16: The tool should require little expertise to use [34].
• R17: The tool should export machine-actionable results (e.g., JSON or RDF) [33, 35].
• R18: The tool should disclose its rating system [36].
• R19: The tool should be informative, i.e., it should teach the user about the FAIR principles [36].
• R20: The tool should give recommendations on how to improve the FAIRness of the evaluated
resource [34, 36].
• R21: The tool should export results readable in natural language [36].
• R22: The tool should store evaluation results in a searchable resource [35].
        </p>
        <p>
          • R23: The tool should support versioning of FAIRness assessments [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>
          In that previous work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], we analyzed F-UJI, FAIR Evaluator, FAIR-Checker, and FAIR Enough
according to these requirements by reading their documentation, papers, and available code. In this
work, we add FAIR EVA and 2BFAIR and evaluate all the tools by executing them in practice. The
new results are presented in Table 1. We do not detail the evaluation due to lack of space, but we
summarize it. We compare the tools against 2BFAIR in Section 4.
        </p>
        <p>
          F-UJI [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] automatically evaluates the FAIRness of research data objects according to FAIRsFAIR
metrics8. The tool’s primary entities are Principles, Metrics, and Tests. Each FAIRness test is executed
using a pass-or-fail approach and returns a score and a maturity level to represent an overview of the
digital object’s fitness to each FAIR principle. F-UJI is available as a RESTful Web service and has a Web
Client that renders a FAIRness badge. The service9 and the Web Client10 source codes are available on
GitHub under the MIT License. The tool is only customizable by implementing new tests or deactivating
some of them by altering the source code. The evaluation results are returned in JSON with scores,
practical tests, inputs and outputs, and the evaluation context for each metric.
        </p>
        <p>
          FAIR Evaluator [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] is a framework not tied to a specific domain and can be adapted to fit community
needs, i.e., the stakeholders may participate in the creation of community-specific Maturity Indicators,
develop their own compliance tests, and define which tests to use in an evaluation. The FAIR Evaluator
is primarily designed for mechanized interaction through its RESTful Web Service API and provides
a demonstrative user interface [40] for form-based access. The client-side interface is implemented
in JavaScript using the AngularJS framework. The backend service is available as a Ruby on Rails application.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Notes</title>
        <p>8: Version V0.5 of the metrics is available on Zenodo [39]. 9: https://github.com/pangaea-data-publisher/fuji 10: https://github.com/MaastrichtU-IDS/fairificator</p>
        <p>The source code of the framework11 and the front-end12 is available on GitHub under the MIT
license. The FAIR Evaluator has 15 defined Maturity Indicators, evaluated by 22 Compliance Tests, which
the user can group into a collection. This feature lets the user indicate which metrics should be considered
for an assessment. For deeper customization, however, the user needs software development
skills to create new tests or change existing ones by refactoring the source files.</p>
        <p>
          FAIR Enough [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] is based on the FAIR Evaluator and F-UJI tools. It is available as a
RESTful Web service implemented in Python using FastAPI13 and provides a frontend Web interface
implemented using React14. The code is available on GitHub15 under the MIT license. The tool does
not present a visual representation summarizing the evaluation results. The tool and the tests are
customizable but require technical skills. The test implementation uses FAIRsharing services. There is
little documentation available on GitHub. The Web Client is as easy to use as the FAIR
Evaluator's. However, Web service development skills are required to use the RESTful Web API.
        </p>
        <p>
          FAIR-Checker [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] can evaluate any digital object as long as it is described by metadata available
on a landing page. The tool's execution process starts with the user supplying a URL as input. Then,
it extracts semantic annotations from the Web page to create the first version of a Knowledge Graph
(KG)16. Public KGs are queried using SPARQL17 during the FAIRness evaluation against metrics tailored
to the evaluation of computational ontologies according to the FAIR principles. FAIR-Checker is a Web
application developed in Python on the Flask Web framework, and it provides a RESTful API.
The tool is available on GitHub18 under the MIT license and can be used on the Web [42]. Tool
customization requires software development skills.
11: https://github.com/FAIRMetrics/Metrics 12: https://github.com/FAIRsharing/FAIR-Evaluator-FrontEnd 13: https://fastapi.tiangolo.com/ 14: https://react.dev/ 15: https://github.com/MaastrichtU-IDS/fair-enough 16: "A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge" [41]. 17: https://www.w3.org/TR/sparql11-query/
        </p>
        <p>
          FAIR EVA [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] is available on GitHub19 under the Apache 2.0 license. It can be deployed as a RESTful
Web service API or as a Web client, neither of which is readily available, as the user must obtain the code
from the project’s repository and deploy both the RESTful Web service and the Web client. The tool
proposes to achieve flexibility via a plugin architecture to support customization according to particular
technical features of data repositories. This customization requires adaptations of the source code.
Another possibility is to customize by altering configuration files specific to each plugin; however, this
modality offers limited possibilities. The evaluation results contain explanations of what is evaluated
by each indicator, which aids the user in learning about the FAIR concepts related to the metric. They
detail the evaluation’s technical implementation and give technical feedback regarding the evaluation
result. In some cases, the evaluation also returns recommendations (listed as “tips”) on how to improve
the evaluated object’s score on the particular metric being assessed.
        </p>
        <p>Tools analysis: Assigning values of 1, 0.5, and 0 to complete (✓), partial (◗), and no (✗) fulfillment,
respectively, the tools cover the following percentages of the requirements: F-UJI (74%), FAIR
EVA (72%), FAIR-Checker (70%), FAIR-Evaluator (63%), and FAIR Enough (63%). Customizability is a major
issue because a user needs strong software development skills to customize them.</p>
        <p>
          Since no tool meets all the requirements and stands out as state-of-the-art, and since customizability is
fulfilled by none of them, we decided to develop the 2BFAIR framework, inspired by F-UJI, to fill the
gaps. We present 2BFAIR in Section 3 and analyze it against the other tools in Section 4.
        </p>
        <p>
          2BFAIR is developed as an application framework implemented in Python. As a framework [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ],
it consists of ready-to-use building blocks (frozen spots) and semi-finished building blocks (hot spots). The overall
architecture is predefined; to produce specific applications, the user adjusts the hot spot
building blocks to particular needs by overriding methods in subclasses. Its goal is to enable
data research stakeholders with varying software development expertise to implement automated
FAIRness evaluators that suit their community's needs.
        </p>
        <p>
          We developed 2BFAIR considering the requirements presented in Section 2, which were elicited in a
previous work [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. We chose these requirements because they were defined through an SLR to guide the
appraisal and development of tools that effectively and automatically evaluate FAIRness. They were
used to evaluate tools found in the literature, and this evaluation identified several gaps that should be
filled.
        </p>
        <p>The 2BFAIR core components are the following.</p>
        <p>• Digital Object Information Collector obtains the information used to evaluate the
digital object’s level of FAIRness from a provided digital object identifier. The retrieved information
includes, e.g., the digital object’s landing page with its data and metadata.
• Core Evaluator performs individual FAIRness tests using the collected information and
computes numeric scores for the metrics that represent each FAIR principle.
• FAIRness Result Aggregator formats and groups individual test results into coherent sets,
e.g., results of metrics, principles, and dimensions20. It enriches results with information that
makes the evaluation more transparent and clarifies the FAIR concepts used.
18: https://github.com/IFB-ElixirFr/FAIR-checker 19: https://github.com/EOSC-synergy/FAIR_eva 20: We define F, A, I, and R as the FAIR dimensions.</p>
        <p>Typically, implementing an automated FAIRness evaluator entails developing such
components from the ground up. 2BFAIR aims to reduce the complexity and cost of this task by implementing
them as framework modules. The evaluation is performed according to a configuration defined for
each test, reflecting the community's preferences. The evaluation comprises Dimensions (F, A, I, or R),
Principles (F1, F2, A1, etc.), Metrics, and Tests.</p>
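        <p>For illustration only, the Dimension/Principle/Metric/Test hierarchy could be modeled as the following containment of plain data classes; these are not 2BFAIR's actual classes.</p>

```python
from dataclasses import dataclass, field
from typing import List

# Toy model of the evaluation hierarchy: Dimensions contain Principles,
# Principles contain Metrics, and Metrics contain Tests.
@dataclass
class Test:
    name: str

@dataclass
class Metric:
    rule: str
    tests: List[Test] = field(default_factory=list)

@dataclass
class Principle:
    principle_id: str                                # e.g., "F1"
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Dimension:
    name: str                                        # "F", "A", "I", or "R"
    principles: List[Principle] = field(default_factory=list)
```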
        <p>A Metric defines a rule to evaluate a digital object according to a principle. Examples of such rules
are the metrics defined by FAIRsFAIR [39] or the indicators of the RDA FAIR Data Maturity Model [43].
A Metric includes a set of Tests. A Test corresponds to the algorithm that evaluates the metric’s rule.
The evaluation of the test may result in pass, fail, or not executed. For example, for Principle
F1 (“(meta)data are assigned a globally unique and persistent identifier”), we may have the metric “A
globally unique identifier should be assigned to the data.” An example of a test for this metric would
be the algorithm to check if the value of the identifier follows the syntax of a GUID (Globally Unique
Identifier), i.e., an identifier guaranteed to uniquely identify a particular Resource, irrespective of the
context, like a URL, IRI, or DOI.</p>
        <p>The Core Evaluator module performs individual tests to attribute numeric scores to the
metrics of each FAIRness principle. Each test must be tailored to the targeted research community, e.g.,
the standards against which attributes of the Digital Object must be compared, such as identifier types,
vocabularies, metadata schemas, and the assessment process itself. The framework was designed with
the evaluation of FAIRness principles as a hot spot structured as the PrincipleEvaluator abstract
class (Figure 1).</p>
        <p>Figure 1: Class diagram of the principle evaluation hot spot. Evaluator (with method evaluate(DigitalObjectInfo)) aggregates the abstract class PrincipleEvaluator (also with evaluate(DigitalObjectInfo)), which is associated with one PrincipleID and one or more TestConfigurations, and is specialized by UniqueIdentifierEvaluator, StandardizedProtocolEvaluator, FormalMetadataEvaluator, and DataContentMetadataEvaluator.</p>
        <p>To tailor the evaluation, the user must define a class inheriting from PrincipleEvaluator,
decorate21 the class with @principle_evaluator, set the assessed PrincipleID, specify the
configuration of the FAIRness tests the class executes, and implement the evaluate abstract method.
This method receives a DigitalObject and returns a list of TestResult. The decorator
@principle_evaluator registers each PrincipleEvaluator in the EvaluatorRegistry and allows
them to be referenced by the evaluate method of the Evaluator.</p>
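        <p>A minimal, self-contained sketch of this customization step follows. The framework pieces (EvaluatorRegistry, the @principle_evaluator decorator, the PrincipleEvaluator abstract class) are re-created here in toy form, so the real 2BFAIR signatures may differ.</p>

```python
from abc import ABC, abstractmethod

class EvaluatorRegistry:
    """Toy registry of PrincipleEvaluator classes."""
    _evaluators = []

    @classmethod
    def register(cls, evaluator_cls):
        cls._evaluators.append(evaluator_cls)

    @classmethod
    def get_evaluators(cls):
        return list(cls._evaluators)

def principle_evaluator(principle_id):
    """Decorator: set the assessed PrincipleID and register the class."""
    def wrap(evaluator_cls):
        evaluator_cls.principle_id = principle_id
        EvaluatorRegistry.register(evaluator_cls)
        return evaluator_cls
    return wrap

class PrincipleEvaluator(ABC):
    @abstractmethod
    def evaluate(self, digital_object) -> list:
        """Return a list of TestResult-like dicts."""

@principle_evaluator("F1")
class UniqueIdentifierEvaluator(PrincipleEvaluator):
    def evaluate(self, digital_object) -> list:
        # Toy F1 check: the identifier should look like a resolvable URL.
        passed = str(digital_object.get("identifier", "")).startswith("https://")
        return [{"test": "guid_syntax", "status": "passed" if passed else "failed"}]
```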
        <p>The process of a FAIRness evaluation is presented in Figure 2. The evaluation starts when the
service client calls the method evaluate of the class Evaluator passing the DigitalObject as a
parameter. The Evaluator calls EvaluatorRegistry.get_evaluators() to retrieve all the
PrincipleEvaluators that are registered. Then, it calls all PrincipleEvaluator.evaluate(), which
returns a list of TestResult corresponding to the FAIRness tests it executes. Finally, the Evaluator
creates a FAIRness evaluation response to return to the service client.</p>
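        <p>This evaluation loop can be sketched as follows; the classes are simplified stand-ins for the ones in the process description, with a plain list standing in for the EvaluatorRegistry.</p>

```python
class Evaluator:
    """Toy Evaluator: loops over the registered principle evaluators and
    aggregates their TestResults into a single response."""

    def __init__(self, registry):
        self.registry = registry   # stand-in for EvaluatorRegistry.get_evaluators()

    def evaluate(self, digital_object) -> dict:
        test_results = []
        for evaluator_cls in self.registry:            # one per FAIR principle
            test_results.extend(evaluator_cls().evaluate(digital_object))
        # create the FAIRness evaluation response for the service client
        return {"identifier": digital_object.get("identifier"),
                "results": test_results}

class F1Evaluator:
    """Toy principle evaluator returning a single TestResult-like dict."""
    def evaluate(self, digital_object):
        return [{"test": "guid_syntax", "status": "passed"}]
```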
        <p>2BFAIR offers a set of utilities, like the fairness_test decorator and constructors for
TestResult. The fairness_test decorator aims to reduce boilerplate code by allowing the registration of the
function that evaluates a FAIRness test. It also controls, via the configuration, which tests should be executed
or skipped. The constructors allow results to be generated from the configuration received by
the method. FAIRness tests are implemented as functions. Each test takes a DigitalObject instance
as a parameter and returns an instance of the TestResult class. A PrincipleEvaluator executes
all the tests corresponding to the metrics to evaluate the FAIRness of a digital object according to the
guidelines of a FAIR principle.
21: A decorator allows the behavior of functions or methods to be modified or extended without changing their actual code (https://book.pythontips.com/en/latest/decorators.html).</p>
        <p>Figure 2: Sequence diagram of a FAIRness evaluation. The client calls evaluate(digital_object: DigitalObjectInfo) on the Evaluator; the Evaluator calls get_evaluators(), which returns principle_evaluators: List[PrincipleEvaluator]; in a loop, for each principle_evaluator, evaluate(digital_object) returns test_results: List[TestResult]; finally, create_FAIRness_response() builds the results: FAIRResults returned to the client.</p>
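        <p>A fairness_test-style decorator could be sketched as below; the configuration keys and the TestResult shape are assumptions for illustration, not 2BFAIR's actual API.</p>

```python
def fairness_test(config):
    """Toy decorator: when the configuration disables a test, return a
    skipped TestResult-like dict without calling the test function."""
    def wrap(test_fn):
        def run(digital_object):
            if config.get("skip", False):
                return {"test": test_fn.__name__, "status": "skipped", "score": 0}
            return test_fn(digital_object)
        return run
    return wrap

@fairness_test({"skip": False, "max_score": 1.0})
def guid_syntax_test(digital_object):
    passed = str(digital_object.get("identifier", "")).startswith("https://")
    return {"test": "guid_syntax_test",
            "status": "passed" if passed else "failed",
            "score": 1.0 if passed else 0}

@fairness_test({"skip": True})
def checksum_test(digital_object):   # disabled via configuration; never called
    return {"test": "checksum_test", "status": "passed", "score": 1.0}
```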
        <p>The configuration classes stand for FAIR Dimensions, Principles, Metrics, and Tests, depicted in
Figure 3 and described as follows.</p>
        <p>• DimensionConfiguration and PrincipleConfiguration: The user may add or remove
FAIR dimensions or principles to be evaluated.
• MetricConfiguration: The metric configuration attributes are:
– priority: The user may define priorities to be considered for each metric. The default
levels are Essential, Important, and Useful [43]. An essential metric is crucial to achieve
FAIRness under most circumstances. A metric is important when it might not be of the
utmost importance under specific circumstances. A useful metric corresponds to a
nice-to-have aspect, which is not indispensable.
– score_mechanism: A metric's score can be computed as Alternative or Cumulative.
When a metric is evaluated in the alternative fashion, even if multiple FAIRness tests pass, only
one of the tests is exposed as having passed: the one with the highest score and maturity.
In the cumulative computation, the metric's score corresponds to the sum of the scores of the
tests that passed.
– alternate_test_behavior: It specifies the status and score of tests that do not pass. It
may be set to either skip or fail. In both cases, the TestResult's score is set to 0, and
its status is set to skipped or failed.
• TestConfiguration: The test configuration attributes are:
– descriptions_for_passed, failed, missing_information: FAIRness tests can
pass, fail, or not be executed, and each attribute provides a detailed description of that status.
Missing information is used when a test is not executed due to a lack of necessary information.</p>
        <p>
          These messages are exposed in the evaluation result along with the result_description.
– agnostic_requirements: It defines characteristics to be tested. An example would be a
list of citation elements (like title, creator, and publication date) that should be found in the
metadata when executing the test that evaluates if the metadata includes citation attributes.
– community_requirements: It defines characteristics to be tested that are specific to a
community. For example, for the Chemistry community, the SMILES22 element should be
present as a core element of a chemical digital object’s metadata.
22Simplified Molecular Input Line Entry System
– recommendation: It offers short, objective advice on how to improve the digital object’s
FAIRness. Recommendations are useful for teaching FAIR. They have an associated priority,
which reflects the gain in the degree of FAIRness achieved when they are followed.
– recommendation_details: It details the recommendation on how the user can improve
the evaluated digital object’s degree of FAIRness.
– max_score: It is a value between 0 and 1 corresponding to the maximum score for a
test that passed. For a cumulative metric, the sum of the maximum scores of all of the metric’s
tests should be equal to or less than 1. For an alternative metric, its tests may have distinct
maximum scores between 0 and 1 since only one of the passing tests is considered for
the metric’s score.
– skip: When defined as “true”, the test is not executed.
– losses: It details how a not-passed test negatively impacts the digital object’s degree of FAIRness. E.g., in
the case of a test that evaluates globally unique identifiers, losses will detail how the lack of
such an identifier may make it more difficult or even impossible to find a digital object.
– maturity: It defines the maturity to be applied to a metric if the test passes. The maturity
may be [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]: (a) Incomplete: The Digital Object does not address the FAIR Principles.
(b) Initial: Some initial characteristics are handled toward FAIR, although they do not
meet the Principles’ definitions. There is intent to comply with the FAIR principles and
awareness of existing issues. (c) Moderate: Limited FAIR characteristics are handled, and
there is a focus on complete alignment. (d) Advanced: Complete coverage of the
FAIR Principles’ characteristics aligned with organizational standards and practice.
</p>
        <p>[Figure 3: UML class diagram of the configuration structure, relating PrincipleConfiguration to MetricConfiguration (with a description attribute and a MetricID) and to the enumerations MetricPriority (Essential, Important, Useful), AlternateTestBehavior (skip, fail), and ScoringMechanism.]</p>
        <p>The data encapsulated by each class of the configuration structure is, for the most part, research
community-specific. It was designed to enable non-expert stakeholders to alter the behavior of the
evaluator. The EvaluationConfiguration class parses the configuration, creates instances of the
configuration classes, and makes them available throughout the evaluation process.</p>
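<p>To make the alternative and cumulative scoring mechanisms of the configuration concrete, the following minimal sketch derives a metric's score from its test results under each mechanism (the function signature and the (passed, score) pair representation are our illustrative assumptions, not the framework's actual code):</p>

```python
from enum import Enum
from typing import List, Tuple


class ScoringMechanism(Enum):
    ALTERNATIVE = "alternative"
    CUMULATIVE = "cumulative"


def metric_score(mechanism: ScoringMechanism,
                 results: List[Tuple[bool, float]]) -> float:
    """Compute a metric's score from (passed, score) pairs, one per test.

    Alternative: only the best passed test counts (the framework also
    uses maturity to pick among passed tests, which is omitted here).
    Cumulative: the scores of all passed tests are summed; since the
    max_score values of a cumulative metric's tests sum to at most 1,
    the total stays within [0, 1].
    """
    passed_scores = [score for passed, score in results if passed]
    if not passed_scores:
        return 0.0  # no test passed: the metric scores 0
    if mechanism is ScoringMechanism.ALTERNATIVE:
        return max(passed_scores)
    return sum(passed_scores)
```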
        <p>The FAIRness Result Aggregator module implements functionalities for formatting
test results, grouping them into coherent sets, and enriching them with relevant information. This
information makes the evaluation process more transparent, indicates how the evaluation reflects the
digital object’s level of FAIRness, and clarifies the FAIR concepts related to the evaluation.</p>
        <p>The result structure (Figure 4) is composed of the TestResult, MetricResult, PrincipleResult
and DimensionResult classes. Instances of these classes expose the information defined in the
configuration to users of the FAIRness evaluator generated by the framework. The constructor methods build
TestResult instances by copying from TestConfiguration all relevant information (e.g., max_score,
recommendation_details) to make it available to the user.</p>
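<p>The copy performed by the constructor can be sketched as follows (the attribute names max_score, recommendation, and recommendation_details come from the configuration described above; the class shapes and the constructor signature are assumptions for illustration):</p>

```python
from dataclasses import dataclass


@dataclass
class TestConfiguration:
    test_id: str
    max_score: float
    recommendation: str
    recommendation_details: str


@dataclass
class TestResult:
    test_id: str
    status: str  # "passed", "failed", or "skipped"
    score: float
    max_score: float
    recommendation: str
    recommendation_details: str

    @classmethod
    def from_configuration(cls, config, status, score):
        # Copy from the configuration every attribute that must be exposed
        # to the user alongside the execution outcome.
        return cls(
            test_id=config.test_id,
            status=status,
            score=score,
            max_score=config.max_score,
            recommendation=config.recommendation,
            recommendation_details=config.recommendation_details,
        )
```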
        <p>The framework implementation and configuration allow flexibility to create FAIRness evaluation
services tailored to a community’s needs. To provide an out-of-the-box solution, 2BFAIR is also
offered with a default implementation, i.e., we provide the 2BFAIR framework code along with a
FAIRness evaluation tool developed using the framework itself. This implementation consists of a set of
instantiated evaluators, a structured configuration, and an implementation of a (meta)data collector. It
automatically evaluates the FAIRness of digital objects according to the FAIRsFAIR metrics [39] with
a numeric score in the range of 0 to 100. It is available as a RESTful Web service (backend) built with the
proposed framework, implemented in Python using the FastAPI Web framework, and described using the
OpenAPI23 specification.</p>
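<p>A minimal sketch of what the backend's evaluation handler might look like is shown below. The handler name and the placeholder per-dimension scores are our assumptions for illustration; only the 0 to 100 score range, the FastAPI stack, and the JSON output come from the description above:</p>

```python
import json


def evaluate_handler(identifier: str) -> dict:
    """Sketch of the service handler: evaluate the digital object referred
    to by `identifier` and return a JSON-serializable result with an
    overall score in the 0-100 range.

    In the real service this function would be exposed as a FastAPI route
    (e.g. decorated with @app.post("/evaluate")), and the scores would come
    from the framework's instantiated evaluators rather than placeholders.
    """
    # Placeholder per-dimension scores; the default implementation computes
    # them from the FAIRsFAIR metrics over the collected (meta)data.
    dimension_scores = {"F": 80.0, "A": 75.0, "I": 60.0, "R": 50.0}
    overall = sum(dimension_scores.values()) / len(dimension_scores)
    return {
        "identifier": identifier,
        "dimensions": dimension_scores,
        "overall_score": overall,
    }


# Because the result is plain JSON, it is machine-actionable:
body = json.dumps(evaluate_handler("https://doi.org/10.5281/zenodo.6461229"))
```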
        <p>We also offer a 2BFAIR Web application (frontend) that consumes the default implementation (the
backend). This frontend is divided into eight pages. We present some screenshots of the tool, highlighting
its main Web pages.</p>
        <p>• Evaluate: the start page where the user supplies a digital object identifier (e.g., URI, URL, DOI)
and runs the FAIRness evaluation (Figure 5).</p>
        <p>• Result: a visual representation of the overall FAIRness result represented mainly by a badge.
• Explorer: an overview of the evaluation for each FAIR dimension (Figure 6). It is composed of
four elements: (i) A header presenting information about the evaluated object, a link to download
the evaluation result, and the FAIRness badge. (ii) A graphic presenting the score in percentage
achieved for each dimension, with boxes representing the result achieved for each evaluated
metric for the dimension. (iii) A legend (below the graphic) that explains each metric’s
importance. (iv) A list of priority recommendations, i.e., the most
important recommendations for improving the FAIRness of the digital object.
• Details: details of the evaluation for principles, metrics, and tests (Figure 7). The elements have
the same definitions as presented for the configuration elements. For each principle, it presents:
name: the definition of the principle; score: a value between 0 and 1 corresponding to the
average of the scores of all the principle’s metrics. For each metric, it presents: name, maturity, score,
and scoring mechanism. For each test, it presents: status: whether the test passed, failed, or was not
executed; score: a value between 0 and the maximum test score, which varies from test to test;
results: a summary of the test’s result; result details: details about the test execution;
recommendations: what should be done to achieve a higher FAIRness
score; recommendation details: details about the recommendation, e.g., steps teaching
how to execute the improvement.
• FAIR Glossary: presents the definitions of the FAIR concepts used in the tool development (Figure 8).</p>
        <p>We included links in all terms that appear on 2BFAIR’s Web pages so that the user can navigate
to the Tool Glossary by clicking on the word, access its definition, and learn about FAIR.
• Tool Glossary: presents the explanation of concepts used specifically in the 2BFAIR-frontend tool.
• User Guide: explanations about each page and functionality available in the 2BFAIR-frontend.
• Full Report: presents a compilation of the Result, Explorer, and Details pages in a single page so
that the user has a single view of the evaluation.</p>
        <p>2BFAIR offers four axes of customization: (i) definition of new tests by altering the source code;
(ii) definition of community-specific test parameters via test-specific configuration files; (iii)
deactivation of tests via the configuration file; and (iv) adaptation of test, metric, principle, and dimension attributes
and weights according to the community’s needs via the configuration file. The first kind of customization
requires software development expertise; however, the architectural pattern enforced by the 2BFAIR
framework makes this process easier and less work-intensive. A non-expert user may perform the other
customizations, except setting community-specific parameters, which may require Semantic Web skills.</p>
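<p>For instance, the second, third, and fourth axes could be exercised through a configuration fragment such as the one below (the file layout and values are a hypothetical illustration; the attribute names priority, score_mechanism, skip, max_score, and agnostic_requirements are the configuration attributes described in this section):</p>

```json
{
  "metrics": {
    "F1-01D": {
      "priority": "Essential",
      "score_mechanism": "Alternative",
      "tests": {
        "F1-01D-1": {
          "skip": false,
          "max_score": 1.0,
          "agnostic_requirements": ["title", "creator", "publication date"]
        },
        "F1-01D-2": { "skip": true }
      }
    }
  }
}
```

Editing such a file requires no programming: setting "skip": true deactivates a test, while changing max_score adjusts its weight in the metric's score.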
        <p>The web service’s evaluation results are returned in JSON format, thus being processable by computing
agents. The code will be available on GitHub24. In Section 4, we present the evaluation of 2BFAIR,
comparing it against existing tools.
24The frontend and backend code are in the process of becoming open-source at IBM. The process was not concluded before the paper
submission. This URL (https://github.com/leogazevedo/2BFAIR) will include the links when the process is finished.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Tools analysis</title>
      <p>In this section, we present, in Section 4.1, how 2BFAIR caters to requirements elicited from the literature
(Section 2), and, in Section 4.2, we compare 2BFAIR against the tools of the state-of-the-practice.
4.1. 2BFAIR
2BFAIR automatically (R1:✓) evaluates the FAIRness of data digital objects (R5:✓) according to the
FAIRsFAIR metrics with a numeric score in the range of 0 to 100 (R2:✓). The evaluation can be applied
to digital objects belonging to any domain since 2BFAIR is domain agnostic, but domain-specific
evaluation criteria may be defined by the user (R3:✓). 2BFAIR’s primary entities are Dimensions,
Principles, Metrics, and Tests. Each Dimension (F, A, I, or R) has one or more principles (e.g., F1, F2),
each of which has one or more metrics (e.g., F1-01D, which stands for metric 01 for assessing Data according
to Principle F1). Metrics are automatically evaluated by tests, e.g., F1-01D-1, test 1 of metric
F1-01D (R4:✓).</p>
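<p>The identifier convention can be illustrated with a short sketch that decomposes a test identifier into the entities above (an illustration of the naming scheme, not code from the tool; treating the trailing letter as a flag for the assessed object type, e.g., D for Data, is our assumption):</p>

```python
import re


def parse_test_id(test_id: str) -> dict:
    """Decompose an identifier such as 'F1-01D-1' into its parts:
    dimension 'F', principle 'F1', metric 'F1-01D', and test number 1."""
    match = re.fullmatch(r"([FAIR])(\d+)-(\d+)([A-Z]?)-(\d+)", test_id)
    if match is None:
        raise ValueError(f"not a valid test identifier: {test_id}")
    dim, principle_no, metric_no, obj, test_no = match.groups()
    return {
        "dimension": dim,                                 # e.g. 'F'
        "principle": f"{dim}{principle_no}",              # e.g. 'F1'
        "metric": f"{dim}{principle_no}-{metric_no}{obj}",  # e.g. 'F1-01D'
        "test": int(test_no),                             # e.g. 1
    }
```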
      <p>2BFAIR is available as a RESTful Web service, implemented in Python using the FastAPI Web
framework, and its API is described using the OpenAPI specification (R6:✓). The service accepts the
identifier of the data object to be evaluated as input. 2BFAIR is also available as a standalone web
application (R7:✓), which provides a FAIRness badge offering an overview of the evaluation results
for each dimension and a chart presenting details about metrics evaluation results (R12:✓). Currently,
2BFAIR’s service and web client’s code are publicly available under Apache-2.0 license (R8:✓). It is
under development, although it can already be used to assess data digital objects (R9:✓). 2BFAIR
supports the following stages of the data lifecycle [44]: processing and analyzing data, publishing and
sharing data, preserving data, and re-using data (R13:✓).</p>
      <p>2BFAIR offers customization (R10:✓) by altering the source code and adjusting configuration
parameters. The former requires software development expertise but is supported by the architectural pattern
we used to develop the framework, making this process easier and less work-intensive. The latter
can be executed by non-expert users. The exception is the setting of community-specific parameters,
which may require Semantic Web skills. 2BFAIR employs FAIR-enabling services (R14:✓). 2BFAIR does
not yet support supplying authentication credentials to access (meta)data available in private
repositories for proper evaluation (R11:✗).</p>
      <p>2BFAIR has a glossary that explains the main concepts of FAIR; it has a user manual that describes how
to use the tool and details specific concepts of its user interface; the framework code has documentation
(R15:✓).</p>
      <p>2BFAIR’s frontend requires little expertise. The user provides only the digital object identifier to
perform an evaluation. The pages that present the results are straightforward to follow. On the other
hand, Web service development skills are required to use the RESTful Web API, though this is expected
(R16:✓).</p>
      <p>The web service’s evaluation results are returned in JSON format, thus processable by computing
agents (R17:✓). Each test result includes recommendations, losses, and a description of the results. The
result description details the criteria that caused the test to have passed, failed, or not to have been
executed (R18:✓). The loss information presents what is lost if the digital object does not pass the test. The
recommendations instruct the user on how to improve the evaluated digital object so that it can pass
the test (R20:✓). Thus, each test result helps the user learn about the FAIR concepts related to the test.
Besides, the words related to FAIR used in the tool’s web interface are linked to their definitions in a
glossary of terms (R19:✓).</p>
      <p>The results are available in readable natural language in the Web application (R21:✓). However,
they are not stored in a searchable engine (R22:✗), and the versioning of FAIRness assessments is not
supported (R23:✗).
4.2. Comparison of 2BFAIR against other tools
This section presents an analysis of the tools of the state-of-the-practice (F-UJI, FAIR Evaluator, FAIR
Enough, FAIR Checker, and FAIR EVA) compared to 2BFAIR.</p>
      <p>Almost all the tools fulfill the requirements R1 to R8. They are automated (R1), present a FAIRness
grade (except FAIR-Evaluator) (R2), and execute independently from any specific domain (R3). Their
metrics reference the principles they evaluate (R4). They point out the types of digital objects they
evaluate (R5). They are available as RESTful Web services APIs (R6), provide a Web Application (R7),
and exhibit their licenses (R8).</p>
      <p>However, they differ regarding the other requirements, which are detailed below.</p>
      <p>Development stage (R9): F-UJI and FAIR-Checker present evidence of wide usage; FAIR Evaluator and
FAIR Enough do not have such proof, but none of these four tools explicitly explain their development
stage. FAIR EVA is in the beta stage, and 2BFAIR is still under development.</p>
      <p>Customization (R10): FAIR Evaluator and FAIR Enough are customizable by non-expert users when
choosing the Maturity Indicators collections to run in an evaluation. Still, they do not support
customization of the test parameters. FAIR EVA achieves customization via a plug-in architecture of specific
FAIR metrics or maturity indicators. 2BFAIR allows non-expert stakeholders to alter the behavior of the
evaluator via its configuration file, e.g., definition of community-specific test parameters, deactivation of
tests, and adaptation of test, metric, principle, and dimension attributes and weights. All tools can have
their code customized by skilled software developers with knowledge of the FAIR principles, Semantic
Web technologies, and standards. However, the architectural pattern enforced by the 2BFAIR framework
makes this process easier and less work-intensive.</p>
      <p>Authentication (R11): It is supported only by F-UJI.</p>
      <p>Badge (R12): FAIR Evaluator and FAIR Enough do not provide a visual representation that summarizes
the evaluation. F-UJI, FAIR-Checker, and FAIR EVA present visual representations such as multi-level pie
charts and radar charts. 2BFAIR provides an overview of the evaluation results for each
dimension and a chart presenting details about metrics results.</p>
      <p>Data lifecycle (R13): Only 2BFAIR explicitly presents the data lifecycle phases to which it can be
applied.</p>
      <p>FAIR-enabling services (R14): F-UJI, FAIR Evaluator, FAIR Enough, FAIR EVA, and 2BFAIR employ
FAIR-enabling services. FAIR-Checker focuses on using Semantic Web technologies. All of them use
libraries and standards.</p>
      <p>Guidance (R15): The tools provide user guidance, except FAIR EVA, which has no documentation
available regarding how to use the tool, such as a user manual. The other tools have plenty of documentation,
including papers and GitHub pages, with examples of using the tools. The code is open to be cloned,
which allows software engineering experts to inspect it and deepen their knowledge.</p>
      <p>Little expertise (R16): The tools’ Web interfaces are easy to use, requiring one to supply only the digital
object identifier and a few other pieces of data. However, using their APIs requires technical skills,
which is expected.</p>
      <p>Machine-actionable results (R17): Only FAIR-Checker does not export machine-actionable results.
Rating system (R18): The tools’ reports present evidence and reasoning about the evaluation.</p>
      <p>Teach (R19): The reports are very technical for F-UJI, FAIR Enough, and FAIR Checker, i.e., they are
tied to the implementation and do not teach users about FAIR. 2BFAIR, FAIR Evaluator, and FAIR EVA
teach about FAIR when the user executes them and goes through the assessment results.</p>
      <p>Recommendations (R20): FAIR-Checker and 2BFAIR are the only ones that give explicit
recommendations for FAIRness improvements.</p>
      <p>Natural language (R21): The reports generated by 2BFAIR, FAIR Checker, and FAIR EVA are fully
readable by non-experts.</p>
      <p>Searchable (R22): None of the tools store the results, even in a searchable engine.</p>
      <p>Versioning (R23): The tools do not support results versioning, which would help to understand how
a digital object’s FAIRness evolves.</p>
      <p>Overall, considering values of 1, 0.5, and 0 for complete (✓), partial (◗), and no (✗) fulfillment,
respectively, to compute the coverage of the requirements by the tools, they are very similar regarding
requirement fulfillment: 2BFAIR (87%), F-UJI (74%), FAIR EVA (72%), FAIR-Checker (70%), FAIR-Evaluator
(63%), and FAIR Enough (63%). They all follow good software development practices and use
state-of-the-art technologies in Software Engineering and the Semantic Web. The reporting features should
be improved, considering user-experience techniques and user evaluations. The storage of results
and versioning are complex features to implement, and no appraised tool attempts to do so.
Although 2BFAIR has the highest percentage, the other tools can be evolved to improve their coverage
of the requirements. However, doing so requires technical expertise since their implementations do not
follow a framework architecture, one of the main characteristics of 2BFAIR, which aims to allow for
customization.</p>
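<p>The coverage figures can be reproduced from the marks reported in this section. For example, per Section 4.1, 2BFAIR fully meets 20 of the 23 requirements and leaves three unmet (R11, R22, R23), which the scheme above turns into 87%:</p>

```python
def coverage(marks):
    """Percentage of requirement fulfillment: a complete mark counts 1,
    a partial mark counts 0.5, and an unmet mark counts 0."""
    value = {"complete": 1.0, "partial": 0.5, "no": 0.0}
    return round(100 * sum(value[m] for m in marks) / len(marks))


# 2BFAIR per Section 4.1: R1-R23 met, except R11, R22, and R23.
marks_2bfair = ["complete"] * 20 + ["no"] * 3
```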
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>This work presented 2BFAIR, a framework for automated FAIRness assessment. 2BFAIR was developed
as a framework to support customization of the FAIRness evaluation according to community
characteristics. It provides ready-to-use pieces of code (frozen spots) that encapsulate complex logic common
to any FAIRness assessment process, which the user does not have to change. On the other hand, 2BFAIR
provides semi-finished building blocks (hot spots) that users may adjust to their particular needs. The
framework architecture allows creating tools based on 2BFAIR tailored to specific domains.</p>
      <p>We also provided a default implementation of 2BFAIR, i.e., an automated tool for FAIRness assessment
created with the 2BFAIR framework code. This implementation consists of evaluators for all the
principles, a structured configuration, and an implementation of a (meta)data collector. 2BFAIR users
can use this implementation directly to start executing FAIRness assessments. Technical users can use
it as an example of 2BFAIR instantiation to develop their tool based on our framework.</p>
      <p>We analyzed this default implementation of 2BFAIR against the state-of-the-practice tools based on
a set of desirable requirements that tools for automated FAIRness evaluation should fulfill. 2BFAIR
reached the highest percentage of fulfillment (87%), while other tools reached at most 74%. No tool
meets all the requirements, so none stands out as the state-of-the-art. 2BFAIR is a good starting point
since it was implemented as a framework and provides a default implementation for direct use. However,
we emphasize that to make a tool choice, one should consider not only the percentage of fulfillment but
also understand the more critical requirements for their specific scenario, the characteristics of each
tool, and the difficulties in improving each tool’s implementation in case it becomes vital to support
requirements not adequately addressed.</p>
      <p>As future work, we will evolve the 2BFAIR framework to support the three missing requirements, i.e.,
implement functionalities to handle authentication (R11), store the evaluation results in a searchable
resource (R22), and support FAIRness assessment versioning (R23). We will evaluate the 2BFAIR
framework with users to generate applications for real scenarios from different domains. Another
proposed line of work is to evaluate how we can automatically generate code for the hot spots, e.g., based on
the configuration file, templates, or code assistants (like the IBM Watsonx Code Assistant25).</p>
      <p>In this work, we evaluated how the tools comply with requirements that automated tools for FAIRness
assessment of digital objects should meet. From another perspective, we could analyze how the tools
themselves abide by the FAIR principles, e.g., by examining how they meet the FAIR for research
software (FAIR4RS) [45].</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <sec id="sec-5-1">
        <title>The author(s) have not employed any Generative AI tools.</title>
        <p>25https://www.ibm.com/products/watsonx-code-assistant
Principles, in: International Semantic Web Conference (ISWC) 2021: Posters, Demos, and Industry
Tracks, volume 2980 of CEUR Workshop Proceedings, CEUR-WS.org, 2021, pp. 1–4.
[23] E. González, A. Benítez, D. Garijo, FAIROs: Towards FAIR assessment in research objects,
in: Linking Theory and Practice of Digital Libraries, Springer, 2022, pp. 68–80. doi:10.1007/
978-3-031-16802-4_6.
[24] CLARIN, Clarin curation dashboard, https://github.com/clarin-eric/curation-dashboard, 2025.
[25] CLARIN, Curation dashboard 7.0.0, https://curation.clarin.eu/, Accessed on 2025-04-10.
[26] A. Czerniak, Lightweight FAIR assessment in the OpenAIRE Validator, 2021. doi:10.5281/zenodo.</p>
        <p>5541133.
[27] OpenAIRE, OpenAIRE’s Repository Manager, https://provide.openaire.eu/home, Accessed on
2025-04-10.
[28] J. Wang, S. Gesing, R. Johnson, N. Meyers, D. Minor, PresQT, https://github.com/</p>
        <p>Lucy-Family-Institute/presqt, Accessed on 2025–04-10.
[29] J. Wang, S. Gesing, R. Johnson, N. Meyers, D. Minor, PresQT Data and Software Preservation</p>
        <p>Quality Tool Project, https://osf.io/d3jx7/, 2022. doi:10.17605/OSF.IO/D3JX7.
[30] J. Menke, P. Eckmann, I. B. Ozyurt, M. Roelandse, N. Anderson, J. Grethe, A. Gamst, A. Bandrowski,
Establishing Institutional Scores With the Rigor and Transparency Index: Large-scale Analysis of
Scientific Reporting Quality, Journal of Medical Internet Research 24 (2022). doi:10.2196/37324.
[31] Sciscore, SciScore: Enhance rigor and reproducibility in scientific research, https://sciscore.com,</p>
        <p>Accessed on 2025–04-10.
[32] J. Wang, S. Gesing, R. Johnson, N. Meyers, D. Minor, Presqt, https://presqt.readthedocs.io/en/latest/
services.html, Accessed on 2025-04-10.
[33] C. Sun, V. Emonet, M. Dumontier, A comprehensive comparison of automated FAIRness evaluation
tools, in: CEUR Workshop Proceedings, volume 3127, 2022, pp. 1–10.
[34] N. Krans, A. Ammar, P. Nymark, E. Willighagen, M. Bakker, J. Quik, FAIR assessment tools:
evaluating use and performance, NanoImpact 27 (2022). doi:10.1016/j.impact.2022.100402.
[35] K. Peters-Von Gehlen, H. Höck, A. Fast, D. Heydebreck, A. Lammert, H. Thiemann,
Recommendations for Discipline-Specific FAIRness Evaluation Derived from Applying an Ensemble of
Evaluation Tools, Data Science Journal 21 (2022). doi:10.5334/dsj-2022-007.
[36] D. Slamkov, V. Stojanov, B. Koteska, A. Mishev, A comparison of data FAIRness evaluation tools,
in: CEUR Workshop Proceedings, volume 3237, CEUR-WS, 2022, pp. 1–12.
[37] L. Richardson, M. Amundsen, S. Ruby, RESTful web APIs: services for a changing world, O’Reilly</p>
        <p>Media Inc.„ 2013.
[38] J. M. Aronsen, O. Beyan, N. Harrower, A. Holl, et al., Recommendations on FAIR metrics for EOSC,</p>
        <p>Publications Ofice of the European Union, LU, 2021. doi: 10.2777/70791.
[39] A. Devaraju, R. Huber, M. Mokrane, P. Herterich, L. Cepinskas, J. de Vries, H. L’Hours, J. Davidson,
A. White, FAIRsFAIR Data Object Assessment Metrics, Technical Report, Zenodo, 2022. doi:10.
5281/zenodo.6461229.
[40] M. Wilkinson, et al., The FAIR Maturity Evaluation Service, https://fairsharing.github.io/</p>
        <p>FAIR-Evaluator-FrontEnd, Accessed on 2025–04-10.
[41] L. Ehrlinger, W. Wöß, Towards a definition of knowledge graph, SEMANTICS 2016: Posters and</p>
        <p>Demos Track 48 (2016) 1–2.
[42] T. Rosnet, A. Gaignard, M.-D. Devignes, FAIR-checker, https://fair-checker.france-bioinformatique.</p>
        <p>fr/, Accessed on 2025–04-10.
[43] RDA, FAIR Data Maturity Model: specification and guidelines, 2020. doi:10.15497/rda00050.
[44] G. Mosconi, Q. Li, D. Randall, H. Karasti, P. Tolmie, J. Barutzky, M. Korn, V. Pipek, Three
Gaps in Opening Science, Computer Supported Cooperative Work (CSCW) 28 (2019) 749–789.
doi:10.1007/s10606-019-09354-z.
[45] M. Barker, N. P. Chue Hong, D. S. Katz, A.-L. Lamprecht, C. Martinez-Ortiz, F. Psomopoulos,
J. Harrow, L. J. Castro, M. Gruenpeter, P. A. Martinez, T. Honeyman, Introducing the FAIR Principles
for research software, Scientific Data 9 (2022) 622. doi:10.1038/s41597-022-01710-x.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumontier</surname>
          </string-name>
          , et al.,
          <article-title>The fair guiding principles for scientific data management and stewardship</article-title>
          ,
          <source>Scientific data 3</source>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . doi:
          <volume>10</volume>
          .1038/sdata.
          <year>2016</year>
          .
          <volume>18</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jacobsen</surname>
          </string-name>
          , R. de Miranda Azevedo,
          <string-name>
            <given-names>N.</given-names>
            <surname>Juty</surname>
          </string-name>
          ,
          <string-name>
            <surname>B.</surname>
          </string-name>
          et al.,
          <source>FAIR Principles: Interpretations and Implementation Considerations, Data Intelligence</source>
          <volume>2</volume>
          (
          <year>2020</year>
          )
          <fpage>10</fpage>
          -
          <lpage>29</lpage>
          . doi:
          <volume>10</volume>
          .1162/dint_r_
          <fpage>00024</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Trojahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kamel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Annane</surname>
          </string-name>
          , et al.,
          <article-title>A FAIR core semantic metadata model for FAIR multidimensional tabular datasets</article-title>
          ,
          <source>in: Knowledge Engineering and Knowledge Management</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>174</fpage>
          -
          <lpage>181</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>031</fpage>
          -17105-5_
          <fpage>13</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Amdouni</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Jonquet, FAIR or FAIRer? an integrated quantitative fairness assessment grid for semantic resources and ontologies</article-title>
          ,
          <source>in: Communications in Computer and Information Science</source>
          , volume
          <volume>1537</volume>
          CCIS,
          <year>2022</year>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>80</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -98876-
          <issue>0</issue>
          _
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>J. van Soest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choudhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gaikwad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sloep</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dekker</surname>
          </string-name>
          ,
          <article-title>Annotation of existing databases using Semantic Web technologies: making data more FAIR</article-title>
          ,
          <source>in: 12th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>94</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Amdouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bouazzouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jonquet</surname>
          </string-name>
          ,
          <string-name>
            <surname>O'</surname>
          </string-name>
          <article-title>FAIRe: Ontology fairness evaluator</article-title>
          , https://github.com/ agroportal/fairness, Accessed on 2025-
          <volume>04</volume>
          -10.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumontier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-A.</given-names>
            <surname>Sansone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. O.</given-names>
            <surname>Bonino da Silva Santos</surname>
          </string-name>
          , et al.,
          <article-title>Evaluating FAIR maturity through a scalable, automated, community-governed framework</article-title>
          ,
          <source>Scientific Data</source>
          <volume>6</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi:10.1038/s41597-019-0184-5.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Aguilar Gómez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bernal</surname>
          </string-name>
          ,
          <article-title>FAIR EVA: Bringing institutional multidisciplinary repositories into the FAIR picture</article-title>
          ,
          <source>Scientific Data</source>
          <volume>10</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          . doi:10.1038/s41597-023-02652-8.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gaignard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rosnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>De Lamotte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lefort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-D.</given-names>
            <surname>Devignes</surname>
          </string-name>
          ,
          <article-title>FAIR-Checker: supporting digital resource findability and reuse with Knowledge Graphs and Semantic Web standards</article-title>
          ,
          <source>Journal of Biomedical Semantics</source>
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . doi:10.1186/s13326-023-00289-5.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Markiewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J. P.</given-names>
            <surname>de Lucena</surname>
          </string-name>
          ,
          <article-title>Object oriented framework development</article-title>
          ,
          <source>XRDS</source>
          <volume>7</volume>
          (
          <year>2001</year>
          )
          <fpage>3</fpage>
          -
          <lpage>9</lpage>
          . URL: https://doi.org/10.1145/372765.372771. doi:10.1145/372765.372771.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W.</given-names>
            <surname>Pree</surname>
          </string-name>
          ,
          <article-title>Meta patterns - A means for capturing the essentials of reusable object-oriented design</article-title>
          , in:
          <string-name>
            <given-names>M.</given-names>
            <surname>Tokoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pareschi</surname>
          </string-name>
          (Eds.),
          <source>Object-Oriented Programming</source>
          , Springer, Berlin, Heidelberg,
          <year>1994</year>
          , pp.
          <fpage>150</fpage>
          -
          <lpage>162</lpage>
          . doi:10.1007/BFb0052181.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Azevedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Banaggia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tesolin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cerqueira</surname>
          </string-name>
          ,
          <article-title>Analysis of automated tools for FAIRness evaluation: A literature perspective</article-title>
          , in:
          <source>The Semantic Web: ESWC 2024 Satellite Events</source>
          , Springer Nature Switzerland,
          <year>2025</year>
          , pp.
          <fpage>149</fpage>
          -
          <lpage>166</lpage>
          . doi:10.1007/978-3-031-78955-7_15.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Carrera-Rivera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ochoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Larrinaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lasa</surname>
          </string-name>
          ,
          <article-title>How-to conduct a systematic literature review: A quick guide for computer science research</article-title>
          ,
          <source>MethodsX</source>
          <volume>9</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . doi:10.1016/j.mex.2022.101895.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Devaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <article-title>F-UJI - An Automated FAIR Data Assessment Tool</article-title>
          , Zenodo,
          <year>2020</year>
          . doi:10.5281/zenodo.4063720.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Devaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <article-title>An automated solution for measuring the progress toward FAIR research data</article-title>
          ,
          <source>Patterns</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1016/j.patter.2021.100370.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] FAIRsharing, FAIRassist.org, https://fairassist.org/, Accessed on 2025-04-10.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <institution>Institute of Data Science</institution>
          ,
          <article-title>FAIR Enough</article-title>
          , https://fair-enough.semanticscience.org/, Accessed on 2025-04-10.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>F.</given-names>
            <surname>Aguilar Gómez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bernal</surname>
          </string-name>
          ,
          <article-title>FAIR EVA (Evaluator, Validator &amp; Advisor)</article-title>
          , https://github.com/EOSC-synergy/FAIR_eva, Accessed on 2024-10-29.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>E.</given-names>
            <surname>Amdouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bouazzouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jonquet</surname>
          </string-name>
          ,
          <article-title>FAIRe: Ontology FAIRness evaluator in the AgroPortal semantic resource repository</article-title>
          , in:
          <source>The Semantic Web: ESWC 2022 Satellite Events</source>
          , Springer, Cham,
          <year>2022</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>94</lpage>
          . doi:10.1007/978-3-031-11609-4_17.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] HowFAIRis, https://github.com/fair-software/howfairis, Accessed on 2025-04-10.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>D.</given-names>
            <surname>Garijo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Corcho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poveda-Villalón</surname>
          </string-name>
          ,
          <article-title>OEG FAIR Ontologies Assessment (FOOPS!)</article-title>
          , https://github.com/oeg-upm/fair_ontologies, Accessed on 2025-04-10.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>D.</given-names>
            <surname>Garijo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Corcho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poveda-Villalón</surname>
          </string-name>
          ,
          <article-title>FOOPS!: An Ontology Pitfall Scanner for the FAIR</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>