<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Techno/Ecofeminism in Action: Fair and Responsible Resource Allocation for Sustainable Data Science Pipelines</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Genoveva Vargas-Solar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José-Luis Zechinelli-Martini</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio A. Ardagna</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicola Bena</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadia Bennani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Barbara Catania</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Javier A. Espinosa-Oviedo</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chirine Ghedira Guégan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Mauri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CNRS</institution>
          ,
          <addr-line>Univ Lyon, INSA Lyon, UCBL, LIRIS, UMR5205, F-69221</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, Università degli Studi di Milano</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Fundación Universidad de las Américas-Puebla</institution>
          ,
          <addr-line>Exhacienda Sta. Catarina Mártir s/n 72820 San Andrés Cholula</addr-line>
          ,
          <country country="MX">Mexico</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Univ. Lyon</institution>
          ,
          <addr-line>Univ. Lyon 1, UR ERIC, EA 3083, Villeurbanne</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Università degli Studi di Genova</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper introduces a dispatching approach to allocating computing resources for executing various activities within data science pipelines. The allocation strategy incorporates quantitative metrics (such as workload, performance in time, and memory consumption) and qualitative metrics emphasising fairness, responsibility, and sustainability. These qualitative considerations include the geographic location of servers, their CO2 footprint, the frugality of data processing and analytics models, the conditions under which the data are produced, and the expected collective benefit of the processing outcomes. By integrating these qualitative metrics into resource-dispatching strategies and decision-making processes, the proposed algorithm aims to transform the execution of data science pipelines into a more ethical and equitable practice. This approach aligns with the principles of techno- and ecofeminism, advocating for technological solutions that prioritize collective social and environmental progress over purely capitalist gains. In this context, techno/ecofeminism provides a critical lens, emphasizing the importance of inclusivity, sustainability, and shared benefits in developing and deploying data-driven technologies. This work challenges extractive and inequitable models by grounding the dispatching strategy in these principles, proposing an alternative framework that leverages technology for holistic and equitable advancement.</p>
      </abstract>
      <kwd-group>
<kwd>Data science pipelines</kwd>
        <kwd>Fair dispatching</kwd>
        <kwd>Fairness index</kwd>
        <kwd>Negotiation</kwd>
        <kwd>Techno/ecofeminism</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>This paper introduces a dispatching approach to allocate
computing resources for executing various activities within
data science pipelines. The allocation strategy
incorporates quantitative metrics —such as workload, performance
in time, and memory consumption— and qualitative
metrics emphasising fairness, responsibility, and
sustainability. These qualitative considerations include the geographic
location of computing resources, their CO2 footprint, the
frugality of data processing and analytics models, the
conditions under which the data are produced, and the expected
collective benefit of the processing outcomes. By integrating
these qualitative metrics into resource-dispatching
strategies and decision-making processes, the proposed approach
aims to transform the execution of data science pipelines
into a fairer and more responsible practice. This approach
aligns with the principles of techno- and ecofeminism,
advocating for technological solutions prioritising collective
social and environmental progress over purely capitalist
gains. In this context, techno/ecofeminism provides a critical
lens, emphasizing the importance of inclusivity,
sustainability, and shared benefits in developing and deploying
data-driven technologies. This work challenges extractive
and inequitable models by grounding the dispatching
strategy in these principles, proposing an alternative framework
that leverages technology for holistic and equitable
advancement.</p>
      <p>Accordingly, the remainder of the paper is organised as
follows. Section 2 gives a general overview of fair computing
resource dispatching strategies for executing data science
workloads. Section 3 introduces our dispatching approach.
Section 4 reports an experimental validation to test our
approach in different scenarios. Finally, Section 5 concludes
the paper and discusses future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Resource dispatching involves allocating computing
resources such as CPU, GPU, memory, and storage to specific
tasks in an environment, often under constraints like time,
budget, and quality of service (QoS). Traditional cluster
schedulers such as Kubernetes and Apache Mesos allocate
resources in containerized environments, emphasizing
scalability and fault tolerance. Public cloud platforms like AWS,
Azure, and Google Cloud offer elasticity and on-demand
provisioning but require algorithms to manage dynamic pricing
and preemption risks. Emerging paradigms prioritize
resource dispatching across distributed and geographically
dispersed devices, focusing on latency and energy efficiency.</p>
      <p>The efficient dispatching of computing resources is a
critical concern in modern computing environments,
particularly for data science workloads characterized by diverse and
resource-intensive operations. Data science tasks
composing data science pipelines, ranging from data preprocessing
to machine learning model training, require dynamic
resource allocation to optimize execution time, ensure cost
efficiency, and meet fairness criteria. This section explores
key strategies, algorithms, and fairness considerations in
resource dispatching for data science workloads.</p>
      <sec id="sec-2-1">
        <title>2.1. Algorithms and Strategies for Resource</title>
      </sec>
      <sec id="sec-2-2">
        <title>Dispatching</title>
        <p>Numerous algorithms and strategies have been proposed
to optimize resource dispatching. These algorithms can
broadly be categorized into heuristic, optimization-based,
and learning-based approaches.</p>
        <sec id="sec-2-2-1">
          <title>Heuristic-Based Algorithms</title>
          <p>
            Heuristic-based algorithms provide computationally
efficient, rule-based strategies for dispatching resources.
Common heuristics include:
• First-Come-First-Served (FCFS): Tasks are executed
in the order of arrival. While simple, FCFS often
leads to resource starvation and inefficient
utilization.
• Round-Robin (RR): Resources are distributed evenly
among tasks in cyclic order, which avoids starvation
but does not necessarily optimize resource usage.
• Min-Min and Max-Min: Min-Min first selects tasks
with the lowest resource demands, while Max-Min
prioritizes those with the highest. These methods
aim to balance workloads but may overlook fairness.
Optimization-Based Algorithms model resource
dispatching as mathematical problems seeking to minimize or
maximize an objective function. Examples include:
• Linear Programming (LP): LP has been applied to
model resource allocation problems in environments
like high-performance computing clusters [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
• Integer Programming (IP): Tasks with discrete
resource requirements can be addressed using IP,
which is computationally intensive but offers
precision [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ].
• Game Theory: Models like Nash Equilibrium provide
frameworks for multi-agent systems where tasks
compete for shared resources [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].
          </p>
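          <p>As a concrete illustration, the Min-Min heuristic described above can be sketched as follows. This is a minimal Python sketch with hypothetical task runtimes; it is not taken from any cited system:</p>

```python
# Minimal Min-Min sketch: repeatedly assign the task whose best-case
# completion time is smallest, then update that machine's ready time.
# Task runtimes per machine are hypothetical illustrative numbers.
runtimes = {                      # runtimes[task][machine] = execution time
    "t1": {"m1": 4, "m2": 6},
    "t2": {"m1": 8, "m2": 5},
    "t3": {"m1": 3, "m2": 9},
}

def min_min_schedule(runtimes):
    # machine "ready times" grow as tasks are placed on them
    ready = {m: 0 for t in runtimes for m in runtimes[t]}
    unscheduled = set(runtimes)
    assignment = {}
    while unscheduled:
        # candidate completion times as (finish_time, task, machine)
        candidates = [
            (ready[m] + rt, t, m)
            for t in unscheduled
            for m, rt in runtimes[t].items()
        ]
        finish, task, machine = min(candidates)  # smallest completion first
        assignment[task] = machine
        ready[machine] = finish
        unscheduled.discard(task)
    return assignment
```

Max-Min follows the same loop but picks the task whose best completion time is largest instead of smallest.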
        </sec>
        <sec id="sec-2-2-2">
          <title>Learning-Based Algorithms</title>
          <p>
            The rise of machine learning
(ML) and deep reinforcement learning (DRL) has enabled
adaptive and predictive resource dispatching:
• Reinforcement Learning (RL): RL models learn
optimal policies by interacting with the environment.
Algorithms like Q-Learning and Proximal Policy
Optimization (PPO) have allocated resources in cloud
computing scenarios [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ].
• Neural Networks: Deep learning models predict
resource demands based on historical workload
patterns, improving dispatching decisions over time
[
            <xref ref-type="bibr" rid="ref7">7</xref>
            ].
• Federated Learning (FL): FL trains models across
decentralized devices while addressing privacy
concerns, requiring careful resource dispatching to
balance computation and communication costs [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
Data science workloads are uniquely challenging due to
their heterogeneity, high computational demands, and often
unpredictable resource needs. Resource dispatching in this
context has focused on:
• Workflow-Aware Scheduling: Platforms like Apache
Airflow and DAG-based systems optimize resource
allocation for multi-stage workflows.
• GPU Optimization: GPU-based workloads like deep
learning training require specialized schedulers to
minimize idle time and maximize utilization [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ].
• Cost Efficiency: Cloud platforms use spot pricing
and preemptible instances to reduce costs. Strategies
must account for potential disruptions and ensure
workload continuity [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ].
• Energy Efficiency: Techniques like Dynamic
Voltage and Frequency Scaling (DVFS) minimize energy
consumption in large-scale data centres [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Fairness in Resource Dispatching</title>
        <p>
          Fairness is an increasingly critical consideration in resource
dispatching, particularly for data science workloads where
multiple users and tasks compete for limited resources. Key
approaches to achieving fairness include:
• Weighted Fair Queuing (WFQ): Assigns weights to
tasks based on priority levels, ensuring proportional
resource allocation [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
• Dominant Resource Fairness (DRF): Proposed by
Ghodsi et al. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], DRF extends traditional fairness
models to multi-resource environments, ensuring
that no single resource type becomes a bottleneck.
• Max-Min Fairness: Ensures that the minimum
allocation among tasks is maximized, balancing fairness
with efficiency [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
• Incentive Mechanisms: Game-theoretic approaches
incentivize users to truthfully report resource
demands, minimizing strategic manipulation [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
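        <p>For instance, the core idea of DRF can be sketched as follows. This simplified Python sketch follows the classic two-user example discussed by Ghodsi et al.; the capacities and demands are illustrative:</p>

```python
# Minimal Dominant Resource Fairness (DRF) sketch: repeatedly grant one
# task's worth of resources to the user with the lowest dominant share.
# Capacities and per-task demands are illustrative numbers.
capacity = {"cpu": 9.0, "mem": 18.0}
demands = {                       # per-task demand of each user
    "userA": {"cpu": 1.0, "mem": 4.0},
    "userB": {"cpu": 3.0, "mem": 1.0},
}

def drf_allocate(capacity, demands, rounds):
    allocated = {u: {r: 0.0 for r in capacity} for u in demands}
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    for _ in range(rounds):
        # dominant share = largest fraction of any resource a user holds
        def dominant(u):
            return max(allocated[u][r] / capacity[r] for r in capacity)
        user = min(demands, key=dominant)   # lowest dominant share goes first
        if any(used[r] + demands[user][r] > capacity[r] for r in capacity):
            break                            # next task no longer fits
        for r in capacity:
            allocated[user][r] += demands[user][r]
            used[r] += demands[user][r]
        tasks[user] += 1
    return tasks
```

With these numbers, userA (memory-dominant) ends up with more tasks than userB (CPU-dominant), equalizing their dominant shares.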
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Discussion</title>
        <p>While significant progress has been made in resource
dispatching, several critical challenges remain. Scalability is
a primary concern, as algorithms must handle increasing
workloads and heterogeneous environments without
introducing significant overhead. Real-time adaptation poses
another challenge, requiring dispatching decisions to
dynamically adjust to changes in workloads, resource
availability, and user demands. Ethical considerations further
complicate resource dispatching, as ensuring fairness in
multi-tenant systems while balancing efficiency and cost
remains a complex issue. Lastly, sustainability has emerged
as a pressing priority, with data centres consuming
increasing energy, necessitating resource dispatching strategies
prioritising green computing initiatives to reduce
environmental impact. Addressing these challenges is essential
to advancing the field and ensuring the effectiveness of
resource-dispatching systems in diverse computing
environments.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Fair Dispatching with Qualitative</title>
    </sec>
    <sec id="sec-4">
      <title>Metrics</title>
      <p>Our dispatching algorithm is built on three key hypotheses.
• Hypothesis-1: The first hypothesis assumes that a
data science pipeline to be executed includes fairness
requirements for each task explicitly specified by
humans (e.g., data scientists, domain experts, data
owners, or communities potentially impacted by the
analysis). Each task within the pipeline requires
multiple resources to execute successfully, and these
fairness requirements guide the resource allocation
process.</p>
      <p>
        A data science pipeline is then guided by a global
fairness objective and an approximation threshold,
which defines the extent to which local and global
fairness requirements can be met. The global
fairness objective is an overarching guideline for
ensuring ethical and equitable outcomes across the
pipeline. At the same time, the approximation
threshold specifies the allowable deviation or
trade-offs in achieving fairness at both local (task-specific)
and global (pipeline-wide) levels. This balance
ensures that fairness is systematically incorporated
while accounting for practical constraints.
• Hypothesis-2: The second hypothesis posits that data
science tasks are executed using a pool of available
resources, each characterized by distinct
quantitative and qualitative properties. These properties can
be validated and certified through a pre-established
certification process, ensuring that the resources
meet the necessary standards for the tasks they
support [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ].
• Hypothesis-3: The third hypothesis assumes that the
provision of computing resources follows
just-in-time strategies, utilizing a dynamic pool of resources
managed within virtualized environments. In this
setup, given a set of tasks to be executed, a dispatcher
dynamically allocates resources to tasks based on
their alignment with both technical and qualitative
requirements at both global and local levels. This
approach ensures efficient and adaptive resource
allocation while meeting the pipeline’s broader
fairness and performance objectives.
      </p>
      <p>By integrating these three hypotheses, the algorithm
ensures resource allocation meets technical demands and
adheres to fairness principles, fostering ethical and responsible
data science practices.</p>
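      <p>The structures assumed by these hypotheses can be captured in a small data type, sketched below. Field names such as resource_needs and approximation_threshold follow the terminology of this section; the encoding itself is an illustrative assumption, not a fixed schema:</p>

```python
# Minimal sketch of the structures assumed by the three hypotheses:
# per-task fairness requirements (Hypothesis-1), a global fairness
# objective, and an approximation threshold bounding allowed deviation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    resource_needs: dict          # e.g. {"gpu_cores": 16, "ram_gb": 128}
    fairness_requirements: dict   # e.g. {"sovereignty": 0.7}

@dataclass
class Pipeline:
    tasks: list
    global_fairness_objective: float   # target global Fairness Index
    approximation_threshold: float     # allowed deviation from the objective

    def acceptable(self, achieved_fi):
        # a global FI is acceptable if its deviation from the objective
        # stays within the approximation threshold
        deviation = abs(achieved_fi - self.global_fairness_objective)
        return not deviation > self.approximation_threshold
```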
      <p>Our dispatching approach consists of 3 steps: 1)
preparation of the execution environment consisting of available
computing resources with the fairness properties that can
potentially be acceptable to the data science pipeline
requirements (Section 3.1); 2) fairness calculation for every
computing resource in the pool (Section 3.2); 3) task
dispatching, which identifies the resources eligible for a given
task and selects the best one according to local and global
FI (Section 3.3).</p>
      <sec id="sec-4-1">
        <title>3.1. Preparation of the Execution</title>
      </sec>
      <sec id="sec-4-2">
        <title>Environment</title>
        <p>
          To prepare the execution environment for a given data
science pipeline, consisting of tasks with input data, estimated
computing resource requirements, and a fairness objective,
we elaborate on our work in Trust Negotiation [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The
first step is to build an execution environment with a pool of
available resources tagged with qualitative metrics that
totally or partially align with the fairness and technical
requirements, using the following negotiation algorithm.
        </p>
        <p>
          The trust negotiation algorithm [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] is designed to
dynamically establish and manage trust (defined with respect to
fairness metrics) among resources in virtual environment
pools. It evaluates trust values for each resource based on its
profiles, including functional and non-functional attributes
and their alignment with the trust requirements of other
resources and tasks. The algorithm supports partial trust
negotiation, allowing resources with suboptimal trust
levels to participate under restricted conditions. Trust values
are updated continuously as the execution state of the data
science pipeline tasks evolves, ensuring adaptability and
fairness. A centralized trust proxy coordinates the
negotiation process, collecting profiles, enforcing trust policies, and
resolving conflicts to maximize participation while
maintaining data science tasks’ reliability.
        </p>
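        <p>A much-simplified sketch of this negotiation loop is given below. The trust threshold and the partial-admission rule are illustrative assumptions; the full protocol in [18] also covers policy enforcement and conflict resolution by the trust proxy:</p>

```python
# Simplified sketch of the trust-proxy negotiation: collect resource
# profiles, admit resources whose trust value fully meets the pipeline
# requirement, and admit others under restricted conditions when partial
# trust negotiation is allowed. Thresholds are illustrative assumptions.
def negotiate_pool(profiles, required_trust, partial_floor):
    """profiles: {resource_name: trust_value in [0, 1]}."""
    admitted, restricted = [], []
    for resource, trust in profiles.items():
        if trust >= required_trust:
            admitted.append(resource)        # full participation
        elif trust >= partial_floor:
            restricted.append(resource)      # partial trust: restricted role
        # resources below partial_floor are excluded from the pool
    return admitted, restricted
```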
      </sec>
      <sec id="sec-4-3">
        <title>3.2. Fairness Calculation</title>
        <p>
          This dispatching strategy leverages a Fairness Index (FI)
[
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] to allocate tasks to the most suitable computing
resource in a distributed system. FI is computed on the basis
of the following building blocks.
        </p>
        <p>• Computing Resources: Each computing resource
is defined by multiple attributes (e.g., location score,
data provenance, GPU cores) and an initial pool of
available resources.
• Weights: The importance of each factor in the FI
computation is defined in a weights dictionary,
allowing customization based on application needs.
• Tasks have resource requirements (denoted as
resource_needs) that the selected computing
resource must fulfil.</p>
        <p>Function calculate_fi computes the Fairness Index
(FI) for each computing resource by combining several
metrics weighted according to their importance, as follows:
FI(r) = w1 · LS + w2 · DP + w3 · SOV + w4 · MP + w5 · TT + w6 · GPU + w7 · CAL + p1 · CO2 + p2 · EC (1)
where:
• Location Score (LS): Proximity or relevance of the
computing resource’s location to the task.
• Data Provenance Score (DP): Suitability of the
computing resource’s data origins for the task.
• Sovereignty Score (SOV): Compliance with data
sovereignty requirements.
• Model Performance (MP): Performance of the
models deployed on the computing resource.
• Training Time (TT): Estimated time required to
train models on the computing resource.
• GPU Cores (GPU): Availability of GPU resources.
• Calibration Cycles (CAL): Computing resource’s
capacity to handle calibration demands.
• Carbon Footprint (CO2): Environmental impact
of utilizing the computing resource.
• Economic Cost (EC): Financial cost associated
with the computing resource’s operation.</p>
        <p>The coefficients w1, . . . , w7, p1, p2 are the weights
assigned to each metric; they reflect the metrics’ relative
importance within the Fairness Index. These weights can be
adjusted based on the data science task’s specific priorities
or fairness objectives.</p>
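        <p>A minimal sketch of calculate_fi as the weighted sum of equation (1) could look as follows; the metric key names are illustrative assumptions, and the weights dictionary is supplied by the caller:</p>

```python
# Sketch of calculate_fi from equation (1): a weighted sum of the nine
# metrics. The sample values used in the test follow the worked example
# in Section 3.3.
METRICS = ["location", "provenance", "sovereignty", "model_perf",
           "training_time", "gpu_cores", "calibration", "co2", "cost"]

def calculate_fi(resource, weights):
    """Fairness Index of a resource: the sum of weight * metric terms."""
    return sum(weights[m] * resource[m] for m in METRICS)
```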
      </sec>
      <sec id="sec-4-4">
        <title>3.3. Task Dispatching</title>
        <p>The FI in equation (1) guides dispatching by selecting
resources best suited to meet a given task’s technical and
qualitative requirements. The negotiation algorithm allows
one to choose the right resources to participate in
the execution of a task. In particular, the dispatching function
dispatch_task selects the most suitable server for a given
task based on its FI and available resources, using the
following three-step process.</p>
        <p>1. Iterate Over Computing Resources: For each
computing resource, the FI is calculated using
calculate_fi.
2. Eligibility Check: A computing resource is
considered eligible if:
• It has an FI close to the task requirements
among available computing resources.
• It has sufficient resources to meet the task’s
requirements: available_resources ≥
resource_needs.
3. Select the Best Computing Resource: The
computing resource with the maximum FI that
satisfies the resource constraints and can contribute to
achieving the expected global FI is chosen.</p>
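        <p>The three-step process can be sketched as follows. This is a simplified version: the eligibility test here checks only capacity, calculate_fi is injected as a parameter, and the global-FI contribution check is omitted:</p>

```python
# Sketch of dispatch_task: compute the FI of every computing resource,
# keep those with enough capacity, and pick the eligible one with the
# highest FI, deducting the task's needs on assignment. The structures
# and the injected calculate_fi are illustrative.
def dispatch_task(task, resources, weights, calculate_fi):
    eligible = []
    for name, res in resources.items():
        enough = all(
            res["available"].get(kind, 0) >= need
            for kind, need in task["resource_needs"].items()
        )
        if enough:
            eligible.append((calculate_fi(res["metrics"], weights), name))
    if not eligible:
        raise RuntimeError("no suitable computing resource found")
    fi, best = max(eligible)  # highest Fairness Index among eligible resources
    for kind, need in task["resource_needs"].items():
        resources[best]["available"][kind] -= need  # resource deduction
    return best, fi
```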
        <sec id="sec-4-4-1">
          <title>Task Allocation</title>
          <p>Once a suitable computing resource is selected:
• Resource Deduction: The task’s resource
requirements are deducted from the available computing
resources.
• Task Assignment Notification: The task is
marked as assigned to the computing resource, and
a confirmation message is sent.</p>
          <p>If no suitable computing resource is found, an error is raised.</p>
          <p>Example Workflow. Let us assume that two servers (“A”
and “B”) have one available computing resource each to
execute a data science pipeline. First, for each resource we
compute its FI, and then match and allocate the tasks in the
pipeline. Considering (0.1, 0.15, 0.2, 0.25, 0.1, 0.05, 0.05, 0.05,
0.05) as weights for metric values (0.8, 0.9, 0.7, 0.95, 0.8, 8,
3, 0.2, 0.5), listed in the order used in equation (1), the FI is
computed as follows:
1. FI Calculation:
• Server A: FI = 0.1 · 0.8 + 0.15 · 0.9 + 0.2 ·
0.7 + 0.25 · 0.95 + 0.1 · 0.8 + 0.05 · 8 + 0.05 ·
3 + 0.05 · 0.2 + 0.05 · 0.5 = 1.2575.
• Server B: Similar calculation using its
respective attributes.
2. Best Server Selection: Compare FI scores and
resource availability. Assign the task to the server
with the highest eligible FI.
3. Task Allocation: Deduct the task’s resource needs
from the selected server’s resources and confirm the
assignment.</p>
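          <p>As a quick arithmetic check, Server A’s FI under these weights is simply the sum of the nine weighted terms:</p>

```python
# Server A's FI from the worked example: nine weighted terms summed.
weights = [0.1, 0.15, 0.2, 0.25, 0.1, 0.05, 0.05, 0.05, 0.05]
values = [0.8, 0.9, 0.7, 0.95, 0.8, 8, 3, 0.2, 0.5]
fi_a = sum(w * v for w, v in zip(weights, values))  # 1.2575
```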
        </sec>
      </sec>
      <sec id="sec-4-5">
        <title>3.4. Fairness-Aware AI Resource</title>
      </sec>
      <sec id="sec-4-6">
        <title>Dispatching: A Case Study in Global</title>
      </sec>
      <sec id="sec-4-7">
        <title>Health Research</title>
        <p>This section presents a fairness-aware resource
dispatching approach for AI-based model training in global health
research. The study focuses on training a tuberculosis (TB)
diagnostic model using X-ray images from hospitals across
the Global South. The goal is to allocate computational
resources while ensuring fairness, sovereignty, and
environmental sustainability.</p>
        <sec id="sec-4-7-1">
          <title>3.4.1. Available Computing Resources</title>
          <p>Table 1 presents the available computing resources,
including qualitative attributes (e.g., sovereignty, energy type) and
quantitative metrics (e.g., training speed, CO2 emissions).</p>
        </sec>
        <sec id="sec-4-7-2">
          <title>3.4.2. Pipeline Fairness Constraints</title>
          <p>The AI training pipeline consists of two tasks: Data
Exploration and Preprocessing, and Model Training. Each task
has FI requirements ensuring respect for sovereignty,
sustainability, and computational efficiency (see Table 2).</p>
        </sec>
        <sec id="sec-4-7-3">
          <title>3.4.3. Fair Dispatching and Negotiation Rounds</title>
          <p>Round 1: Initial Task Allocation</p>
          <p>Data Exploration:
• Eligible servers: S1 (South Africa, FI = 0.9), S4 (Kenya,
FI = 0.85).
• Best match: S1 (solar-powered, high sovereignty,
low CO2 emissions).
• Initial allocation: S1 OK.</p>
          <p>Model Training:
• Eligible servers: S1, S2, S5.
• Best match: S2 (Brazil, fastest GPU, moderate
sovereignty, moderate CO2 emissions).</p>
          <p>• Initial allocation: S2 OK.</p>
          <p>Round 2: Adjustments for Fairness</p>
          <p>Data Exploration:
• S1 requests workload redistribution due to
underutilization.
• S4 is added as a backup node to balance workload
and redundancy.
• Final allocation: 70% of workload on S1, 30%
on S4.</p>
          <p>Model Training:
• S2 alone does not meet fairness goals.
• S5 (Argentina) is added to improve fairness in
regional distribution.
• Final allocation: S2 (60%) + S5 (40%).</p>
          <p>The final task allocation is shown in Table 3. The table
outlines the optimized distribution of tasks across servers
following a negotiation process. It highlights how tasks such
as Data Exploration and Model Training are allocated to
specific servers, with percentages indicating the workload
distribution. For instance, Data Exploration is split between
Server 1 (70%) and Server 4 (30%), ensuring a balanced
workload and incorporating redundancy for reliability. Similarly,
Model Training is divided between Server 2 (60%) and Server
5 (40%), with adjustments aimed at improving fairness in
the training process. The table reflects a careful
consideration of workload balancing, fairness, and system reliability,
suggesting that the allocation was designed to optimize
resource utilization and prevent overloading any single server.</p>
        </sec>
        <sec id="sec-4-7-4">
          <title>3.4.5. Impact of Fair Dispatching</title>
          <p>• Improved Regional Fairness: Avoids bias by
distributing tasks across multiple Global South regions.
• Energy-Aware Allocation: Prioritizes solar and
wind-powered servers for lower carbon footprint.
• Preserved Data Sovereignty: Ensures data
governance laws are respected in high-sovereignty
regions.
• Optimized Compute Efficiency: Training tasks
leverage high-GPU servers while balancing fairness.</p>
          <p>This case study demonstrates how the Fairness Index
(FI)-based dispatching enables equitable AI model training
for global health research while optimizing environmental
impact, sovereignty, and computational fairness. The
negotiation mechanism ensures balanced allocations, preventing
regional bias and enhancing responsible AI development.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Fair and Responsible Dispatching in Practice</title>
      <p>The experimental setting evaluates the proposed fair
dispatching approach by simulating a representative scenario
with diverse resource pools, fairness metrics, and data
science pipelines. The goal is to demonstrate how the
dispatcher allocates resources to meet the global Fairness Index
(FI) expectations associated with data science pipelines.</p>
      <p>Resource Pools. We created three distinct patterns of
resource pools, each with varying capacity and fairness
metrics:
• High-Capacity, Low-Fairness Pool: Large servers with
extensive computational resources (64 GPU cores,
512 GB RAM). Fairness metrics: Location Score: Low (0.3);
Sovereignty Score: Low (0.4); Carbon Footprint: High (0.8);
Economic Cost: Medium (0.6).
• Balanced-Capacity, Medium-Fairness Pool: Mid-sized
servers with moderate computational resources (32
GPU cores, 256 GB RAM). Fairness metrics: Location Score:
Medium (0.6); Sovereignty Score: Medium (0.7); Carbon
Footprint: Medium (0.5); Economic Cost: Low (0.4).
• Low-Capacity, High-Fairness Pool: Small servers with
minimal computational resources (8 GPU cores, 64
GB RAM). Fairness metrics: Location Score: High (0.9);
Sovereignty Score: High (0.85); Carbon Footprint: Low (0.2);
Economic Cost: Low (0.3).</p>
      <p>Data Science Pipelines. We defined three pipelines with
varying tasks, resource requirements, and global FI
expectations:
• Pipeline A - High Computational Demand.
Prioritizes computational efficiency and cost over fairness,
with higher weights for GPU cores and training time.
– Weights: Location Score: 0.1; Data Provenance
Score: 0.1; Sovereignty Score: 0.1; Model Performance: 0.3;
Training Time: 0.2; GPU Cores: 0.2.
– Tasks: Data preprocessing requires medium
resources (16 GPU cores, 128 GB RAM); model training
requires high resources (48 GPU cores, 256 GB RAM).
– Global FI Expectation: Medium (FI ≥ 0.65).
• Pipeline B - Low Computational Demand, High
Fairness. Prioritizes fairness metrics like location,
sovereignty, and carbon footprint over
computational efficiency.
– Weights: Location Score: 0.3; Data Provenance
Score: 0.2; Sovereignty Score: 0.3; Model Performance: 0.1;
Training Time: 0.05; GPU Cores: 0.05.
– Tasks: Data cleaning requires low resources (8 GPU
cores, 64 GB RAM); model tuning requires medium
resources (16 GPU cores, 128 GB RAM).
– Global FI Expectation: High (FI ≥ 0.8).
• Pipeline C - Balanced Computational and Fairness
Requirements.
– Weights: Location Score: 0.2; Data Provenance
Score: 0.2; Sovereignty Score: 0.2; Model Performance: 0.2;
Training Time: 0.1; GPU Cores: 0.1.
– Tasks: Feature engineering requires medium
resources (16 GPU cores, 128 GB RAM); model training
requires medium resources (32 GPU cores, 256 GB RAM).
– Global FI Expectation: Medium-High (FI ≥ 0.75).</p>
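      <p>The pools and pipeline weights above can be collected into plain dictionaries, for example. This is an illustrative encoding of the setup, not the authors’ actual configuration files:</p>

```python
# Illustrative encoding of the experimental setup: three resource pools
# and the per-pipeline weight profiles and global FI expectations.
pools = {
    "high_capacity_low_fairness": {
        "gpu_cores": 64, "ram_gb": 512,
        "location": 0.3, "sovereignty": 0.4, "co2": 0.8, "cost": 0.6,
    },
    "balanced_capacity_medium_fairness": {
        "gpu_cores": 32, "ram_gb": 256,
        "location": 0.6, "sovereignty": 0.7, "co2": 0.5, "cost": 0.4,
    },
    "low_capacity_high_fairness": {
        "gpu_cores": 8, "ram_gb": 64,
        "location": 0.9, "sovereignty": 0.85, "co2": 0.2, "cost": 0.3,
    },
}
pipelines = {
    "A": {"weights": {"location": 0.1, "provenance": 0.1, "sovereignty": 0.1,
                      "model_perf": 0.3, "training_time": 0.2, "gpu_cores": 0.2},
          "global_fi": 0.65},
    "B": {"weights": {"location": 0.3, "provenance": 0.2, "sovereignty": 0.3,
                      "model_perf": 0.1, "training_time": 0.05, "gpu_cores": 0.05},
          "global_fi": 0.8},
    "C": {"weights": {"location": 0.2, "provenance": 0.2, "sovereignty": 0.2,
                      "model_perf": 0.2, "training_time": 0.1, "gpu_cores": 0.1},
          "global_fi": 0.75},
}
# sanity check: each pipeline's weights sum to 1
weight_sums = {name: sum(p["weights"].values()) for name, p in pipelines.items()}
```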
      <sec id="sec-5-1">
        <title>Experimental Scenarios for Evaluating Fair Dispatching</title>
        <p>We considered three scenarios to evaluate the fair
dispatching approach under diverse conditions. We describe
the setup, objective, and expected outcome for each scenario.
The Baseline Scenario provides a reference for understanding
default behavior. The Increased Capacity Scenario highlights
the impact of resource abundance on fairness outcomes. The
Dynamic Fairness Adjustment Scenario tests the adaptability
of the algorithm to evolving fairness goals and resource
constraints.</p>
        <p>Baseline Scenario: It aims to evaluate the default behavior
of the fair dispatching algorithm without any adjustments
to resource capacities, fairness weights, or pipeline
requirements.</p>
        <p>Setup: A predefined set of resource pools with varying
capacities and fairness properties (e.g., location, carbon footprint,
economic cost). Data science pipelines with diverse tasks
(data preparation, model training) and fixed fairness
expectations.</p>
        <p>Key Focus: Understand how well the algorithm meets
fairness requirements using the existing resources and
configuration without negotiation or dynamic adjustments.
Expected Outcome: A clear baseline to identify pipelines that
succeed or fail to meet fairness expectations and highlight
areas for improvement.</p>
        <p>Increased Capacity Scenario: It aims to evaluate how
increasing resource availability affects the ability of the
dispatcher to meet fairness requirements.</p>
        <p>Setup: Resource pools have their capacities increased by a
fixed factor (e.g., 1.5x or 2x) while retaining the same
fairness properties. Data science pipelines remain unchanged
in terms of tasks and fairness expectations.</p>
        <p>Key Focus: Examine whether additional resource availability
reduces negotiation calls, improves global FI scores, or leads
to better task allocation outcomes.</p>
        <p>Expected Outcome: Insights into the impact of resource
abundance on fairness and efficiency in dispatching,
demonstrating the scalability of the approach.</p>
        <p>Dynamic Fairness Adjustment Scenario: It aims to
evaluate the ability of the dispatching algorithm to adapt to
scenarios where fairness weights or expectations are
dynamically adjusted.</p>
        <p>Setup: Fairness weights for qualitative metrics (e.g.,
sovereignty, carbon footprint) are modified to prioritize
specific fairness dimensions over others. Data science pipelines
have dynamic FI expectations based on task priority or
external conditions. The dispatcher employs negotiation
strategies to adapt to these changes.</p>
        <p>Key Focus: Evaluate the flexibility of the algorithm to
handle shifting priorities and fairness goals while maintaining
efficient resource allocation.</p>
        <p>Expected Outcome: Demonstration of the adaptability of the
approach, with insights into how fairness trade-offs affect
allocation outcomes.</p>
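The dynamic-adjustment setup above amounts to re-weighting the FI metrics on the fly. A minimal sketch, assuming a weight is simply boosted for one dimension and the profile re-normalized to sum to 1 (the function and metric names are illustrative, not from the paper):

```python
# Illustrative dynamic fairness adjustment: boost one qualitative
# dimension's weight (e.g., sovereignty) and re-normalize so the
# profile still sums to 1. Names and values are illustrative.

def boost_weight(weights, metric, factor):
    """Multiply one metric's weight by `factor`, then re-normalize."""
    adjusted = dict(weights)
    adjusted[metric] *= factor
    total = sum(adjusted.values())
    return {m: w / total for m, w in adjusted.items()}

weights = {"location": 0.2, "provenance": 0.2, "sovereignty": 0.2,
           "performance": 0.2, "training_time": 0.1, "gpu_cores": 0.1}

# Prioritize sovereignty 2x, as in the dynamic-adjustment scenario.
new_weights = boost_weight(weights, "sovereignty", 2.0)
print({m: round(w, 3) for m, w in new_weights.items()})
```

Re-normalizing keeps FI scores comparable before and after the adjustment, so the dispatcher can re-evaluate allocations against the same threshold.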
      </sec>
      <sec id="sec-5-2">
        <title>Dispatching Process</title>
        <p>We implemented a dispatching process as follows.</p>
        <p>1. Input: Resource pools, pipelines, and associated
weights.
2. Task Allocation: Calculate the FI for each resource
for every task based on pipeline-specific weights.
Select the resource with the highest FI that satisfies
the task’s resource requirements.
3. Global FI Validation: After all tasks are dispatched,
compute the overall pipeline FI as the weighted
average of the allocated resources’ FIs. Ensure the global
FI meets the pipeline’s expectations.</p>
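The three steps can be sketched as follows; the pool attributes, scores, and weights are illustrative, and a plain average stands in for the weighted average of the allocated resources' FIs:

```python
# Sketch of the three-step dispatching process: (1) score each
# feasible resource per task with the FI, (2) greedily pick the
# highest-FI resource meeting the task's requirements, (3) validate
# the pipeline's global FI against its expectation. All data are
# illustrative; a plain average approximates the weighted average.

def fi(resource, weights):
    # Weighted sum of normalized fairness scores in [0, 1].
    return sum(weights[m] * resource["scores"][m] for m in weights)

def dispatch(pipeline, pools, weights):
    allocation = {}
    for task in pipeline["tasks"]:
        feasible = [r for r in pools
                    if r["gpu_cores"] >= task["gpu_cores"]
                    and r["ram_gb"] >= task["ram_gb"]]
        if not feasible:
            raise RuntimeError("no feasible resource for " + task["name"])
        # Step 2: highest-FI feasible resource.
        allocation[task["name"]] = max(feasible, key=lambda r: fi(r, weights))
    # Step 3: global FI validation (plain average of allocated FIs).
    global_fi = sum(fi(r, weights) for r in allocation.values()) / len(allocation)
    return allocation, global_fi, global_fi >= pipeline["expected_fi"]

pools = [
    {"name": "eu-dc", "gpu_cores": 16, "ram_gb": 128,
     "scores": {"location": 0.9, "sovereignty": 0.8}},
    {"name": "us-dc", "gpu_cores": 32, "ram_gb": 256,
     "scores": {"location": 0.5, "sovereignty": 0.4}},
]
weights = {"location": 0.6, "sovereignty": 0.4}
pipeline = {"expected_fi": 0.75,
            "tasks": [{"name": "cleaning", "gpu_cores": 8, "ram_gb": 64},
                      {"name": "tuning", "gpu_cores": 16, "ram_gb": 128}]}

alloc, gfi, ok = dispatch(pipeline, pools, weights)
print(alloc["cleaning"]["name"], round(gfi, 2), ok)
```

A failed validation (last return value False) is where the negotiation step of the approach would be triggered.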
        <p>Figure 1 shows our results. The plot represents the Global
FI (Fairness Index) scores of three pipelines (Pipeline_A,
Pipeline_B, and Pipeline_C) under different scenarios. The
height of the bars represents the global FI scores achieved
for each pipeline, and annotations indicate the number of
negotiation calls required during the resource allocation
process.</p>
        <p>The dashed line indicates the threshold FI value, a
benchmark for evaluating whether the pipelines meet fairness
expectations. Its example value, 0.75, means that a pipeline
must achieve a global FI score of at least 0.75 for the
resources allocated to its tasks to be considered fair according
to the predefined criteria. Pipelines above the threshold
have a global FI score greater than or equal to 0.75. This
indicates that the resource allocation meets or exceeds
fairness requirements. In the plot, these bars are coloured green.
Pipelines below the threshold have a global FI score of less
than 0.75. This indicates that the allocation of resources
did not meet the fairness criteria.</p>
        <p>The threshold serves as a critical benchmark for
evaluating pipeline performance against fairness expectations. It
enables comparative analysis by distinguishing pipelines
that meet fairness criteria from those that fall short. For
pipelines below the threshold, it provides guidance for
improvement, highlighting the need for adjustments such as
changing fairness metric weights, increasing resource
availability, or using negotiation processes to adjust task
requirements. In addition, the threshold is highly adjustable,
allowing it to be tailored to the specific fairness priorities of the
system. For example, a higher threshold imposes stricter
fairness requirements, ensuring more equitable resource
allocation, while a lower threshold relaxes these
requirements, increasing the likelihood that pipelines will meet
expectations. This flexibility makes the threshold a versatile
tool for evaluating and improving the fairness of resource
allocation strategies.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Interpretation of Initial Experiments</title>
        <p>In the Baseline Scenario (see Figure 1), Pipeline_A achieves the highest
FI score (∼7.8), well above the threshold, with no need
for negotiation. Pipeline_B and Pipeline_C have lower FI
scores (∼2.42 and ∼4.22, respectively) and require two
negotiation calls each. Thus, they did not meet the
fairness expectations. There was no significant difference in
FI scores for the increased capacity scenario compared to
the baseline scenario, indicating that increasing resource
capacity did not directly influence the allocation or fairness
outcomes. Negotiation calls remain the same, implying that
the adjustments in this scenario did not alleviate the need
for negotiations in resource allocation. For the higher
fairness priority case, FI scores for Pipeline_B and Pipeline_C
improve slightly compared to the baseline (e.g., Pipeline_B’s
score increases from 2.42 to 2.60). The number of
negotiation calls remains constant, but the adjustments in fairness
weights reflect a positive impact on pipelines with lower
FI scores. Pipeline_A remains unaffected due to its already
high FI score. For the reduced expectations scenario, FI
scores and negotiation calls remain unchanged compared to
the baseline. This scenario indicates that lowering fairness
expectations (e.g., reducing FI thresholds) does not impact
the allocation process but would allow more pipelines to
"pass" the evaluation if the threshold is considered.</p>
        <p>Pipeline_A consistently performs well, regardless of the
scenario, suggesting it aligns better with the resource pool or
has fewer resource constraints. Pipeline_B and Pipeline_C
struggle to meet fairness expectations across all scenarios,
with relatively low FI scores and the need for negotiation
to adjust resource allocations. Higher fairness priority
improves fairness for pipelines with lower FI scores, making it
the most promising scenario for addressing disparities. The
increased capacity and reduced expectations scenarios do
not significantly change the allocation outcomes,
highlighting that resource availability or relaxed thresholds alone are
insufficient to improve fairness outcomes.</p>
        <p>Advantages
• Customizable Fairness: Weights allow
prioritization of sustainability, cost, or performance.
• Dynamic Allocation: The strategy adapts to server
attributes or task requirements changes.
• Fair Resource Utilization: Ensures resource
allocation considers technical and qualitative factors.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Limitations</title>
        <p>• Complex Weight Tuning: Achieving an optimal
balance among factors requires careful weight
configuration.
• Scalability: Performance may degrade with many
servers and tasks due to computational overhead in
FI calculations.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions and Future Work</title>
      <p>This paper presents a pioneering approach to fair and
responsible resource dispatching for data science pipelines
by incorporating technical and qualitative metrics into the
allocation process. Integrating fairness metrics provides
a foundation for equitable computational resource
management, aligning with technofeminism and ecofeminism
principles. The proposed dispatching mechanism shows
potential for balancing computational efficiency with social
and environmental fairness.</p>
      <p>Fair resource dispatching, guided by qualitative fairness
metrics, aligns deeply with the principles of
technofeminism and ecofeminism by challenging systemic inequities
and promoting inclusivity in allocating computational and
environmental resources. Technofeminism, which seeks
to dismantle the gendered biases embedded in
technological systems, benefits from fairness-driven resource
allocation by ensuring marginalized voices —often excluded
from decision-making processes —have equitable access to
technological infrastructure. Qualitative fairness metrics
provide a framework for identifying and correcting
historical imbalances, such as privileging Global North projects
over Global South initiatives or reinforcing patriarchal
priorities.</p>
      <p>Ecofeminism, focusing on the interconnectedness of
environmental justice and gender equality, similarly intersects
with fair dispatching practices. Prioritizing energy-efficient,
sustainable resource management and fair dispatching
supports ecofeminism’s aim to mitigate environmental harm
caused by unchecked technological expansion. Together,
these frameworks foster a redistribution of resources that
values diverse perspectives, reduces systemic harm, and
integrates sustainability with social justice in technology.</p>
      <p>Open issues and Future work. While the results
demonstrate the feasibility and relevance of the approach, it
represents an initial step toward a broader vision of fair resource
dispatching. Several directions for future work emerge from
this study:
• Scalability and Realism: Experimentation with
large-scale and more realistic resource pools and pipelines,
including heterogeneous and dynamic resource
environments. Deployment in real-world settings such
as federated learning systems or global data science
collaborations.
• Dynamic and Adaptive Weights: Development of
algorithms that dynamically adjust fairness weights
based on pipelines or systems’ evolving priorities
and constraints.
• Stability of resource allocation: Constant reallocation
in response to minor changes can lead to inefficiencies;
incorporating stability mechanisms in future solutions
would help avoid unnecessary reallocations, contributing
to overall cost-reduction efforts.
• Inclusion of feedback loops to learn from past
allocations and refine the weight configuration over
time.
• Negotiation Strategies: Advanced negotiation
algorithms for handling resource shortages or conflicts
while maintaining fairness. Integration of predictive
analytics to proactively anticipate negotiation needs
and optimize the resource allocation process.
• Cross-Domain Applications: Extension of the
framework to interdisciplinary domains such as climate
modelling, medical research, and global
development projects, where fairness and resource
optimization are critical.
• Enhanced Qualitative Metrics: Expansion of the
fairness index to include new dimensions such as
cultural representation, gender inclusivity, and
accessibility. Use of machine learning models to quantify
qualitative metrics more accurately.
• Transparency: Development of visualization tools
for stakeholders to understand and monitor the
allocation process and its fairness outcomes.
• Governance: Design of governance mechanisms to
ensure accountability in fairness-driven dispatching
decisions.</p>
      <p>Addressing these challenges can evolve the proposed
framework into a robust, scalable, and adaptable system for fair
resource dispatching. Future experiments should involve
diverse datasets and scenarios to validate the approach under
varying conditions and demonstrate its utility for advancing
ethical and responsible computational practices.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Acknowledgements</title>
      <p>The work reported in this paper was performed in the
context of the FRIENDLY project (http://vargas-solar.com/friendly/),
funded by the inter-group program of the LIRIS laboratory, Lyon.</p>
    </sec>
  </body>
  <back>
  </back>
</article>