<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gwendal Jouneaux</string-name>
          <email>gwendal.jouneaux@list.lu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jordi Cabot</string-name>
          <email>jordi.cabot@list.lu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AI Models</institution>
          ,
          <addr-line>Model Cards, Sustainability, Energy, Quality model, Domain-Specific Language</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Luxembourg Institute of Science and Technology</institution>
          ,
          <addr-line>Esch-sur-Alzette</addr-line>
          ,
          <country country="LU">Luxembourg</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Luxembourg</institution>
          ,
          <addr-line>Esch-sur-Alzette</addr-line>
          ,
          <country country="LU">Luxembourg</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>25</fpage>
      <lpage>30</lpage>
      <abstract>
<p>The growth of machine learning (ML) models and associated datasets has triggered a dramatic increase in the energy cost of training and using these models. In the current context of environmental awareness and global sustainability concerns involving ICT, Green AI is becoming an important research topic, as exemplified by initiatives like the AI Energy Score Ratings. Nevertheless, these benchmarking attempts have yet to be integrated with existing work on Quality Models and Service-Level Agreements, common in other, more mature ICT subfields. This limits the (automatic) analysis of these model energy descriptions and their use in (semi)automatic model comparison, selection, and certification processes.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The wide adoption of AI technologies, together with the increase in model complexity and dataset size, has led to a
dramatic rise in the computational power required to train and run models, and in their overall energy
cost and sustainability impact. In recent years, the carbon footprint of AI models
has become a priority for the research community. The seminal paper by Strubell et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] analyzed
the carbon impact of training four state-of-the-art NLP models, concluding that the
carbon footprint of training and using AI models should be reduced.
      </p>
      <p>
        While efforts to benchmark, monitor, and fine-tune AI models in the context of Green AI have been
made [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], this information is usually not readily available for model users. Approaches such as Model
Cards [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] provide detailed information to model users, such as performance metrics or ethical concerns.
Yet, few approaches bridge the gap and present sustainability information. The most notable is the recent
AI Energy Score [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposed by Hugging Face. However, this energy score only provides information
on inference energy consumption, disregarding carbon emissions, water usage, and training impact.
      </p>
      <p>
        To address the aforementioned issue, this paper proposes the definition of Sustainability Model
Cards. Sustainability Model Cards complement the already existing Model Cards [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] with the additional
concern of sustainability of AI models. We propose a domain-specific language (DSL) to precisely define
these cards and pave the way for future automated use of this information.
      </p>
      <p>The rest of the paper is organized as follows. Section 2 reviews the state of the art on quality models
for AI, formalisms to describe AI models and related artifacts, and existing approaches
to make model users aware of the sustainability aspects of models. Sections 3 and 4 detail the
Sustainability Model Cards and the associated DSL, respectively. Finally, Section 5 discusses future work
for Sustainability Model Cards in the form of a research roadmap, and Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the art</title>
      <p>
        Quality assurance (QA) and related quality models have been identified as a challenge for current AI
software research [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In recent years, the research community has proposed multiple quality models [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
However, most of those models do not present a full picture of AI software quality. Among the
twenty-nine papers studied by Gezici et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], only three discuss sustainability as a relevant quality
aspect [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ].
      </p>
      <p>
        In addition to quality models, other formalisms to describe AI models or related artifacts have been
developed. For datasets, Dataset Cards [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] allow defining standard information such as provenance,
authorship, license, and tasks for which the dataset is suitable. DescribeML [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] additionally describes
social concerns potentially leading to bias and provides a dedicated language and tool support for its
specification. Finally, Croissant [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] uses a JSON notation that is both human-readable and compatible
with existing tools and frameworks, providing additional interoperability, portability, and discoverability
to datasets. For AI models, Mitchell et al. proposed Model Cards [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] used for model reporting. Model
Cards describe the AI model in terms of intended use, data, performance, and ethical considerations,
among other things. However, none of those approaches allows describing the sustainability
concern.
      </p>
      <p>
        On the other hand, approaches such as the ones from Hugging Face try to specifically assess and
describe sustainability aspects of machine learning models. They extended the Model Cards approach
with sustainability data [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] such as cloud provider location, training time, hardware, and estimated
carbon emissions. They also created the AI Energy Score Ratings [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to evaluate the energy cost of
using a model. This method computes the average inference cost (in Wh) over one thousand requests.
      </p>
      <p>
        However, these approaches neither provide a formal description that could be used to check the
syntactic and semantic correctness of the card information, nor an easy way to automatically process
such information as part of an MLOps pipeline that should take energy concerns into account [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
Furthermore, while the carbon emissions of the training phase and the energy consumption of the inference
task are important metrics for model selection, our DSL covers a larger variety of energy-related
information, such as water consumption or the countermeasures taken by platforms (and platform
providers) to mitigate their energy impact. Finally, the use of DSL technology facilitates reusing many
other existing tools from the DSL realm to simplify the creation of new functionality (e.g., verbalization of
the card information) around this new Sustainability Model Card ecosystem.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Sustainability Model Cards</title>
      <p>
        To address the aforementioned problem, we propose Sustainability Model Cards, inspired by the
well-known Model Cards [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] concept. In what follows, we describe the dimensions that are part of our
Sustainability definition based on an analysis of the relevant literature in this field. More specifically, we
group Sustainability information of a ML model into four main sections: Metadata, Training, Inference,
and Platform.
      </p>
      <sec id="sec-3-2">
        <title>Metadata</title>
        <p>
          The Metadata section contains information about the model itself. This includes the identification
of the model through a name and version, the type of the model (e.g., decision tree, CNN, regression),
identification of the provider and license of the model. This information allows establishing the link
between a Sustainability Model Card and the already existing Model Card for the same model, enabling later
combined analyses. The model type is particularly important, as studies such as that of Yu et al. show a
potential correlation between model type and inference energy consumption [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          The Training section groups information concerning the environmental impact of the training
phase of the model. The most important aspects are the energy consumption, carbon emissions, and water
consumption resulting from this training phase. While carbon emissions and energy consumption are
already studied in the context of Green AI [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], water consumption is often overlooked, yet it is still
regarded as an important sustainability aspect for datacenters [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. To complement this information,
this section also includes a reference (represented in italics) to the definition of the platform used, and
the time spent training the model (which could be used to infer some energy consumption a posteriori,
with a certain degree of approximation).
        </p>
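        <p>As a minimal illustration of such an a posteriori estimation (the function name and the power and carbon-intensity figures below are hypothetical placeholders, not values taken from any card):

```python
# Sketch: inferring training energy and carbon from the reported training
# time, as discussed above. All numbers are illustrative placeholders.

def estimate_training_impact(hours, avg_power_kw, kg_co2_per_kwh):
    """Approximate energy (kWh) and carbon (kgCO2eq) from training time."""
    energy_kwh = hours * avg_power_kw        # energy = time x average power draw
    carbon_kg = energy_kwh * kg_co2_per_kwh  # carbon = energy x grid intensity
    return {"energy_kwh": energy_kwh, "carbon_kgco2eq": carbon_kg}

# e.g., 240 h on hardware drawing 2.5 kW with a 0.5 kgCO2eq/kWh energy mix
print(estimate_training_impact(240, 2.5, 0.5))
# {'energy_kwh': 600.0, 'carbon_kgco2eq': 300.0}
```

Such an estimate only approximates the real impact, since the average power draw varies across a training run.</p>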
        <p>The Inference section allows defining the impact of the different inference tasks a model can be
used for. For each task supported by the model (e.g., text generation, text summarization), this section
reports the inference task type, the average energy consumption, the average carbon emissions, the
average water consumption, and a reference (represented in italics) to the platform used to compute
these estimations.</p>
        <p>Finally, the Platform section describes the platforms used to train or execute the model. This
description includes details about the hardware used, the region (e.g., Azure EU-west), and the
energy mix used for the training (the ratio of renewable to fossil energy used), as these aspects are
useful when choosing a deployment infrastructure for the model. In addition, carbon offsetting has become a
common practice for platform providers to reduce their environmental impact. Carbon offset credits
represent the quantity of greenhouse gas emissions avoided or countered by other means (e.g., financing
renewable energy projects). In the Sustainability Model Cards, carbon offset credits can be described
either as a quantity (in CO2 equivalent) offsetting the platform emissions, or as a percentage of the emissions
offset afterward, as typically done by cloud platform providers such as Amazon, Google, or Microsoft.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Description of a DSL for ML Sustainability reporting</title>
      <p>This section describes the proposed domain-specific language (DSL) to support the definition of
Sustainability Model Cards, covering all dimensions mentioned in the previous section. A DSL is a language
specially designed for modeling in a certain domain (health, finance, or, as in this case, AI).</p>
      <p>Once described with our DSL, the sustainability information can be automatically processed as
part of an ML workflow or MLOps approach (e.g., to search for sustainable alternatives, compare the
sustainability of equivalent models, or analyze a set of models from an energy perspective).</p>
      <p>Each DSL has two main components:
• Abstract syntax: describes the structure of the language and the way different language
primitives can be combined, independently of any particular representation.
• Concrete syntax: describes one or more specific notations for the language, covering the encoding
and/or the visual/textual appearance of the elements in the abstract syntax.</p>
      <p>As we will see, our DSL uses a YAML-based concrete syntax to facilitate the adoption of the language
and its integration with the existing Model Cards of the Hugging Face repository.</p>
      <sec id="sec-4-1">
        <title>4.1. Abstract Syntax of the Language</title>
        <p>Figure 1 presents the abstract syntax of the proposed DSL in the form of a metamodel (i.e., the schema
or grammar of the language, expressed from an object-oriented perspective) structuring the language
elements, the properties of every element, and the possible relationships among them.</p>
        <p>Let us describe in more detail the different elements of the language. The SustainabilityModelCard
class is the root concept representing the whole Sustainability Model Card. This card is composed of
three subcomponents: MetaData, Training and Inference.</p>
        <p>The MetaData class represents the metadata section of the card defining the name, version, model
type, provider, and license as strings.</p>
        <p>
          The Training class models the training section of the card and defines the duration of the training.
The Inference class encompasses all the inference tasks (represented by the Task class) addressed
by the model. Each Task defines the inference type, based on the list used in the AI Energy Score [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
Both the Training and Task classes represent computations that have an environmental impact. This
is materialized through their inheritance from the Computation abstract class. This class models the
impact of the computation when executed on a given platform. The impact is materialized through
associations to water consumption, carbon emission, and energy consumption metrics. These three
metrics are reified in their own classes and are represented through a value and its associated unit. Finally,
the Computation class carries a timestamp denoting the moment of the measurements.
        </p>
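        <p>The classes described so far can be sketched in Python as follows (a simplified illustration of the metamodel, not the actual generated implementation; attribute names are assumptions based on the text):

```python
# Simplified sketch of the metamodel classes described above.
from dataclasses import dataclass

@dataclass
class Metric:          # reified metric: a value and its associated unit
    value: float
    unit: str

@dataclass
class Computation:     # common impact of an executed computation
    energy: Metric
    carbon: Metric
    water: Metric
    timestamp: str     # moment of the measurements

@dataclass
class Training(Computation):
    duration_hours: float = 0.0

@dataclass
class Task(Computation):
    inference_type: str = "text_generation"

# placeholder figures, not measured data
training = Training(Metric(600.0, "kWh"), Metric(300.0, "kgCO2eq"),
                    Metric(1200.0, "L"), "2025-01-01", 240.0)
```
</p>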
        <p>In addition, a Computation is associated with a Platform. When reporting the Training impact, the
Platform represents the infrastructure used to train the model, while for Task impact, the Platform
represents the infrastructure used when computing the metrics. The Platform class is characterized
by a name, hardware description, provider and compute region. The name is used by computations
to reference their platform in the textual notation. The hardware description, provider and compute
region are used to describe the hardware used and its provider (i.e., local or cloud provider). For
a more fine-grained representation of the platform carbon impact, the Platform is associated to a
CarbonOffsetCredit and a set of EnergySource. The CarbonOffsetCredit represents either the
quantity of greenhouse gas emissions mitigated as a number of credits (one credit corresponding to
1000 kg of CO2 equivalent), or the percentage of the emissions mitigated afterward. The energy
sources used by the platform are represented in the form of an energy mix, denoting the ratio of
energy provided by each source. The EnergySource is characterized by the type of energy and its
carbon efficiency.</p>
        <p>While the Training and Task classes are mostly identical, we explicitly distinguish them based on
their semantic difference and our intent to provide a more fine-grained description of the training phase, as
presented in Section 5.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Concrete Syntax of the Language</title>
        <p>The concrete syntax used to specify the Sustainability Model Card is based on YAML. A YAML structure
is represented as a set of key-value pairs, where the value can be of three types: Scalar, Sequence, and
Mapping. Scalar values are represented as a series of zero or more characters. Sequence values are
represented as a list of values. Finally, mapping values are represented as a set of key-value pairs.</p>
        <p>To encode Sustainability Model Cards in this format, we defined a set of rules:
1. Class instances are represented using the class name in snake case as key and a mapping as value.
2. Attributes are part of the instance mapping and are represented using the attribute name in snake case
as key and a scalar as value.
3. Compositions are defined by nesting the composed class instance in the containing class
mapping.
4. Multiplicities higher than one are managed using YAML sequences.
5. Simple associations are defined using the associated class name in snake case as key and the
object name attribute as scalar value.</p>
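        <p>Rule 1 relies on a deterministic class-name-to-key conversion; a small helper sketching this convention (illustrative, not the actual parser code):

```python
# Sketch: converting metamodel class names to snake-case YAML keys (rule 1).

def to_snake_case(class_name):
    """e.g., SustainabilityModelCard becomes sustainability_model_card."""
    parts = []
    for i, ch in enumerate(class_name):
        if ch.isupper() and i > 0:
            parts.append("_")       # underscore before each inner capital
        parts.append(ch.lower())
    return "".join(parts)

print(to_snake_case("SustainabilityModelCard"))  # sustainability_model_card
print(to_snake_case("CarbonOffsetCredit"))       # carbon_offset_credit
```
</p>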
        <p>In addition, there are three special cases to these rules: Platform and EnergySource, Inference,
and EnergyMix. Even if platforms and energy sources are not direct components of the Sustainability
Model Card, they are defined as lists contained in the card (see the platforms and energy_sources
lists in Listing 1). As Inference only contains the list of tasks, the intermediate “task” attribute containing the
list is bypassed, making the YAML inference section a sequence. Finally, the EnergyMix association
class is represented as a sequence of EnergyMix class instances containing the ratio as attribute and the
EnergySource class instance.
sustainability_model_card:
  meta_data:
    name: GPT-3 175B
    model_type: LLM
    provider: OpenAI
  platforms:
    - platform:
        name: Infrastructure
        hardware: Multiple V100
        provider: Microsoft Azure
        region: US
        carbon_offset_credit:
          value: 100.0
          unit: PERCENTAGE
        energy_mix:
          - energy_mix:
              ratio: 100.0
              energy_source: Azure US
  energy_sources:
    - energy_source:
        name: Azure US
        type: Fossil
        co2_per_kWh: 0.3496
        unit: kgCO2eq
Listing 1: Sustainability Model Card of a GPT-3 175B model trained and used on Microsoft Azure
infrastructure based in the United States.</p>
        <p>To demonstrate Sustainability Model Cards, we provide Listing 1 as a concrete example using our
DSL syntax to report the sustainability aspects of the GPT-3 model with 175 billion parameters.</p>
        <p>
          The infrastructure described uses Microsoft Azure and multiple V100 GPUs, as described in the
related paper [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Data on water and energy consumption of the model comes from the study conducted
by Li et al. [19] on the water consumption of GPT-3 training. These metrics were based on the average
consumption across Azure datacenters in the US. Consequently, we used the average carbon efficiency
in the US reported in the eGRID 2023 data provided by the US Environmental Protection Agency. Based
on the consumption and carbon efficiency, we were able to compute the carbon emissions for both the
training phase and the text generation inference.
        </p>
        <p>By processing this card, we observe that the inference carbon emissions exceed the training emissions
after roughly 325 million inferences. In a context where the number of expected inferences across the
model lifecycle is vastly superior to this number, this model could be compared to others based on the
inference emissions, as their impact on the overall emissions is greater.</p>
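        <p>This break-even point can be reproduced with simple arithmetic; in the sketch below, the two emission figures are hypothetical placeholders chosen only so the result lands in the reported order of magnitude, not the actual card values:

```python
import math

# Sketch: number of inferences after which cumulative inference emissions
# exceed the one-off training emissions. Placeholder values, not card data.

def break_even_inferences(training_kgco2eq, per_inference_kgco2eq):
    """Smallest inference count whose cumulative emissions exceed training."""
    return math.ceil(training_kgco2eq / per_inference_kgco2eq)

# e.g., 500 tCO2eq of training vs. about 1.54 gCO2eq per inference
print(break_even_inferences(500_000, 0.00154))  # roughly 325 million
```
</p>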
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Tool support</title>
        <p>To support the formal definition of Sustainability Model Cards, we provide a Python implementation of
our DSL. This implementation is composed of a validating parser and a set of classes implementing the
metamodel, and is available as open source on GitHub1.</p>
        <p>To implement the metamodel, we relied on the BESSER Low-Code platform [20] to create the DSL
metamodel and generate a Python implementation of it. The Python classes generated can then be used
to instantiate any model conforming to the specified metamodel, allowing its manipulation or creation
by other tools. For instance, BESSER offers both a language to specify and generate implementations
of neural networks [21], and a deployment language [22]. In the future, BESSER could use this
infrastructure to automatically benchmark specified neural networks and generate their Sustainability
Model Card as part of the design and execution process of the network.</p>
        <p>
          In addition, we have implemented a parser validating and transforming the YAML description to
model instances. First, we use an existing YAML parser implementation, transforming the textual
description into a manipulable Python object. Then, we traverse the object structure to assess the
conformance of the structure to the metamodel. These validation checks ensure: (1) the presence of
units when required, (2) the correspondence of these units to the ones defined in the metamodel, (3) the
correspondence of the inference and energy types to the ones defined in the metamodel, and (4) that
the values representing percentages are bound to the [0, 1] interval. Finally, the validated structure is
transformed to a corresponding model instance using the metamodel classes.
        </p>
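        <p>A condensed sketch of checks (1), (2), and (4) on a parsed metric follows; the allowed-unit set and function name are illustrative (the real parser derives them from the metamodel, and check (3) is analogous for inference and energy types):

```python
# Sketch: validating a metric mapping parsed from the YAML card.
ALLOWED_UNITS = {"kWh", "kgCO2eq", "L", "PERCENTAGE"}

def validate_metric(metric):
    errors = []
    if "unit" not in metric:                      # check (1): unit is present
        errors.append("missing unit")
    elif metric["unit"] not in ALLOWED_UNITS:     # check (2): unit is known
        errors.append("unknown unit: " + metric["unit"])
    if metric.get("unit") == "PERCENTAGE":        # check (4): value in [0, 1]
        value = metric.get("value", -1.0)
        if value > 1.0 or 0.0 > value:
            errors.append("percentage out of [0, 1]")
    return errors

print(validate_metric({"value": 0.35, "unit": "PERCENTAGE"}))  # []
print(validate_metric({"value": 12.5}))                        # ['missing unit']
```
</p>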
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Research Roadmap</title>
      <p>Our DSL is a first step towards a more ambitious goal: the adoption, formalization, analysis,
and improvement of AI sustainability. In this section, we discuss a research roadmap and possible next
steps for the evolution of the Sustainability Model Cards initiative, including a number of application
scenarios. We hope to extend and prioritize this list based on discussions with the community.</p>
      <p>
        Extending the coverage and granularity of the Sustainability DSL. When creating the
Sustainability Model Cards and the associated DSL, we aimed to be as complete as possible by taking the
union of the sustainability concepts mentioned in the surveyed papers [
        <xref ref-type="bibr" rid="ref1 ref14 ref16 ref17 ref4">1, 23, 14, 4, 24, 16, 25, 26, 17</xref>
        ]. Yet,
more details could be added, such as a more granular description of the training phase, diving into the
pre-training and fine-tuning parts of the training, the hyperparameter values, or the dataset used.
Furthermore, the Sustainable AI research field is highly active and proposes new frameworks and new
metrics that should be included in Sustainability Model Cards along the way. As expressed in the recent
article of Cruz et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], there is a need for standardized metrics in the evaluation of AI sustainability,
and our framework should evolve to match (and possibly, influence) these upcoming metrics.
      </p>
      <sec id="sec-5-1">
        <p>1. Implementation: https://www.gwendal-jouneaux.fr/SustainabilityModelCards-Parser</p>
        <p>Graphical notation. When using a language, different users prefer different syntaxes, depending
also on their technical profile. For instance, less technical users tend to prefer more graphical
notations. For this reason, we plan to extend the language with other concrete syntaxes, such as a
graphical notation, or even a conversational one, for input, and a verbalization in Controlled
English as output, to help all types of users write and read sustainability cards.</p>
        <p>Tighter integration with Model Cards. Sustainability Model Cards focus only on the sustainability
aspect of AI models. One of the ways to have a complete view of the model data would be through
the integration of this card with other existing cards. A first step in this direction has already been
made, as our DSL concrete syntax has been built on YAML to integrate seamlessly with the existing
Hugging Face model cards. The next step in this direction would be to propose a formal description
language integrating both cards, allowing automatic processing using all the available information.
Another direction would be to extend the sustainability concern with ethical concerns to allow more
transparency regarding the ethical and environmental sustainability impact of AI models [27].</p>
        <p>Analyzing the impact on model users. An additional aspect of these cards is the social one. The
final choice of using the model with the highest accuracy, the least carbon emissions, or something
in between belongs to the model user. Conducting a user study on how users decide on a model,
especially among models that offer similar features and performance, would allow assessing the impact
of providing more sustainability data on the choices of model users.</p>
        <p>Application to different scenarios. The precise models of the Sustainability Model Cards resulting
from the use of our DSL allow for the automatic processing of the information reported in the cards.
This can be useful for a range of different scenarios, including many MLOps ones. For instance, a
first scenario envisioned is to perform automatic model selection based on the models' environmental
impact. Another scenario would be to optimize model deployment based on an impact analysis using
information on locations' energy providers and carbon efficiency, inference energy consumption,
and hardware. Finally, the provided information could be used and monitored at runtime to enforce
sustainability-aware Service Level Agreements (SLAs) that could be established between users and
model providers, as typically done for other types of IT services following existing quality models.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we have presented Sustainability Model Cards, a new DSL allowing the description of AI
models' sustainability aspects, including energy consumption, carbon emissions, and water consumption.
This DSL enables the formal definition of this information and facilitates the automatic processing of
sustainability information as part of an MLOps pipeline, while still exporting it as an extension of the
Model Cards formalism for better readability and integration with similar initiatives.</p>
      <p>As further work, we plan to address the aspects discussed in the research roadmap for this
initiative.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work has been funded by the European Union under the Grant Agreement No 101189664 (MOSAICO
project). Views and opinions expressed are those of the author(s) only and do not necessarily reflect
those of the European Union or the European Health and Digital Executive Agency (HADEA). Neither
the European Union nor the granting authority can be held responsible for them.</p>
      <p>Jordi Cabot is supported by the Luxembourg National Research Fund (FNR) PEARL program, grant
agreement 16544475.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <sec id="sec-8-1">
        <p>The author(s) have not employed any Generative AI tools.</p>
        <p>[18] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh,
D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark,
C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language models are few-shot
learners, 2020. URL: https://arxiv.org/abs/2005.14165. arXiv:2005.14165.
[19] P. Li, J. Yang, M. A. Islam, S. Ren, Making ai less ”thirsty”: Uncovering and addressing the secret
water footprint of ai models, 2025. URL: https://arxiv.org/abs/2304.03271. arXiv:2304.03271.
[20] I. Alfonso, A. Conrardy, A. Sulejmani, A. Nirumand, F. Ul Haq, M. Gomez-Vazquez, J.-S. Sottet,
J. Cabot, Building BESSER: an open-source low-code platform, in: International Conference on
Business Process Modeling, Development and Support, Springer, 2024, pp. 203–212.
[21] N. Daoudi, I. Alfonso, J. Cabot, Modelling neural network models, in: International Conference
on Research Challenges in Information Science, Springer, 2025, pp. 130–139.
[22] F. Ul Haq, I. Alfonso, A. Sulejmani, J. Cabot, Extending a low-code tool with multi-cloud deployment
capabilities, in: European Conference on Software Architecture, Springer, 2024, pp. 39–46.
[23] E. Strubell, A. Ganesh, A. McCallum, Energy and policy considerations for modern deep learning
research, in: Proceedings of the AAAI conference on artificial intelligence, volume 34, 2020, pp.
13693–13696.
[24] A. Guldner, S. Kreten, S. Naumann, Exploration and systematic assessment of the resource
efficiency of machine learning, in: INFORMATIK 2021, Gesellschaft für Informatik, Bonn, 2021,
pp. 287–299.
[25] L. F. W. Anthony, B. Kanding, R. Selvan, Carbontracker: Tracking and predicting the carbon
footprint of training deep learning models, arXiv preprint arXiv:2007.03051 (2020).
[26] E. García-Martín, C. F. Rodrigues, G. Riley, H. Grahn, Estimation of energy consumption in
machine learning, Journal of Parallel and Distributed Computing 134 (2019) 75–88.
[27] A. S. Luccioni, G. Pistilli, R. Sefala, N. Moorosi, Bridging the gap: Integrating ethics and
environmental sustainability in ai research and practice, 2025. URL: https://arxiv.org/abs/2504.00797.
arXiv:2504.00797.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Strubell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ganesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McCallum</surname>
          </string-name>
          ,
          <article-title>Energy and policy considerations for deep learning in NLP</article-title>
          , in: A. Korhonen, D. Traum, L. Màrquez (Eds.),
          <article-title>Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</article-title>
          , Florence, Italy,
          <year>2019</year>
          , pp.
          <fpage>3645</fpage>
          -
          <lpage>3650</lpage>
          . doi:10.18653/v1/P19-1355.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Verdecchia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sallou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          <article-title>A systematic review of Green AI</article-title>
          ,
          <source>Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>e1507</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaldivar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vasserman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hutchinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Spitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Raji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <article-title>Model cards for model reporting</article-title>
          ,
          in:
          <source>Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>220</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] HuggingFace,
          <source>AI Energy Score</source>
          ,
          <year>2025</year>
          . URL: https://huggingface.github.io/AIEnergyScore, [Online; accessed 26 May 2025].
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramler</surname>
          </string-name>
          ,
          <article-title>Quality assurance for AI-based systems: Overview and challenges</article-title>
          , in:
          <source>Software Quality: Future Perspectives on Software Engineering Quality: 13th International Conference, SWQD 2021</source>
          , Vienna, Austria, January 19-21, 2021, Proceedings 13, Springer,
          <year>2021</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gezici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Tarhan</surname>
          </string-name>
          ,
          <article-title>Systematic literature review on software quality for AI-based software</article-title>
          ,
          <source>Empirical Software Engineering</source>
          <volume>27</volume>
          (
          <year>2022</year>
          )
          <fpage>66</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Yap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A. A.</given-names>
            <surname>Ghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zulzalil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. I.</given-names>
            <surname>Admodisastro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Najafabadi</surname>
          </string-name>
          ,
          <article-title>A systematic mapping of quality models for AI systems, software and components</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>8700</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ozkaya</surname>
          </string-name>
          ,
          <article-title>Priority quality attributes for engineering AI-enabled systems</article-title>
          , arXiv preprint arXiv:1911.02912 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Siebert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Joeckel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nakamichi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ohashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Namba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yamamoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aoyama</surname>
          </string-name>
          ,
          <article-title>Towards guidelines for assessing qualities of machine learning systems</article-title>
          ,
          in:
          <source>Quality of Information and Communications Technology: 13th International Conference, QUATIC 2020</source>
          , Faro, Portugal, September 9-11, 2020, Proceedings 13, Springer,
          <year>2020</year>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Horkoff</surname>
          </string-name>
          ,
          <article-title>Non-functional requirements for machine learning: Challenges and new directions</article-title>
          , in:
          <source>2019 IEEE 27th International Requirements Engineering Conference (RE)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>386</fpage>
          -
          <lpage>391</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          HuggingFace,
          <source>Dataset Cards</source>
          ,
          <year>2025</year>
          . URL: https://huggingface.co/docs/hub/en/datasets-cards, [Online; accessed 26 May 2025].
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Giner-Miguelez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gómez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cabot</surname>
          </string-name>
          ,
          <article-title>A domain-specific language for describing machine learning datasets</article-title>
          ,
          <source>Journal of Computer Languages</source>
          <volume>76</volume>
          (
          <year>2023</year>
          )
          <fpage>101209</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Akhtar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Benjelloun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Conforti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Foschini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Giner-Miguelez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gijsbers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Karamousadakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuchnik</surname>
          </string-name>
          , et al.,
          <article-title>Croissant: A metadata format for ML-ready datasets</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>37</volume>
          (
          <year>2024</year>
          )
          <fpage>82133</fpage>
          -
          <lpage>82148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          HuggingFace,
          <source>CO2 Emissions and the Hugging Face Hub: Leading the Charge</source>
          ,
          <year>2022</year>
          . URL: https://huggingface.co/blog/carbon-emissions-on-the-hub, [Online; accessed 26 May 2025].
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Fernandes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Kirkeby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Martínez-Fernández</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sallou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Anwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. B.</given-names>
            <surname>Roque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bogner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Castaño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Castor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chasmawala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cunha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Feitosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jedlitschka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lago</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Muccini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Oprescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Saraiva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sarro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Selvan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vaidhyanathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verdecchia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. P.</given-names>
            <surname>Yamshchikov</surname>
          </string-name>
          ,
          <article-title>Greening AI-enabled systems with software engineering: A research agenda for environmentally sustainable AI practices</article-title>
          ,
          <year>2025</year>
          . URL: https://arxiv.org/abs/2506.01774. arXiv:2506.01774.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.-R.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-J.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-R.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-W.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-H.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Tseng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Energy efficiency of inference algorithms for clinical laboratory data sets: Green artificial intelligence study</article-title>
          ,
          <source>Journal of Medical Internet Research</source>
          <volume>24</volume>
          (
          <year>2022</year>
          )
          <fpage>e28036</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ristic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Madani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Makuch</surname>
          </string-name>
          ,
          <article-title>The water footprint of data centers</article-title>
          ,
          <source>Sustainability</source>
          <volume>7</volume>
          (
          <year>2015</year>
          )
          <fpage>11260</fpage>
          -
          <lpage>11284</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herbert-Voss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Krueger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henighan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Child</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>