<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>regulatory and technological parameters</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Valerii Sokurenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victoria Vysotska</string-name>
          <email>victoria.a.vysotska@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniil Shmatkov</string-name>
          <email>shmatkov.daniil@univd.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yurii Onishchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Pashniev</string-name>
          <email>dvpashniev@univd.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliya Vnukova</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daria Davydenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maryna Bulakh</string-name>
          <email>m.bulakh@prz.edu.pl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Internal Affairs</institution>
          ,
          <addr-line>L. Landau avenue, 27, Kharkiv, 61080</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Scientific Center «Hon. Prof. M. S. Bokarius Forensic Science Institute» of the Ministry of Justice of Ukraine</institution>
          ,
          <addr-line>L. Zolochivska st., 8A, Kharkiv, 61177</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Rzeszow University of Technology</institution>
          ,
          <addr-line>Kwiatkowskiego Street 4 37-450 Stalowa Wola</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Scientific and Research Institute of Providing Legal Framework for the Innovative Development, NALS of Ukraine</institution>
          ,
          <addr-line>Chernyshevska St., 80, Kharkiv, 61002</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Simon Kuznets Kharkiv National University of Economics</institution>
          ,
          <addr-line>Nauky avenue, 9A, Kharkiv, 61001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>A method for operationalizing the ethical principles of artificial intelligence through analytical modeling of their regulatory and institutional parameters has been developed. A composite index of public welfare is proposed that integrates digital access indicators, innovation potential, and institutional justice, enabling a quantitative assessment of the degree to which ethical guidelines are practically implemented in AI systems. Methods of multivariate data normalization, scenario modeling, and parametric analysis, as well as elements of hierarchical decision making, are applied to transform abstract normative concepts into computable parameters. The empirical base is formed from international indices of digital development, innovation, and governance quality for various jurisdictions. The obtained results demonstrate that neither maximum openness nor strict regulatory protection ensures optimal ethical effects. The greatest public welfare is achieved with a balanced combination of regulatory flexibility, institutional quality, and data governance mechanisms, confirming the need for a parametric approach to AI governance.</p>
      </abstract>
      <kwd-group>
        <kwd>artificial intelligence ethics</kwd>
        <kwd>computational modeling</kwd>
        <kwd>digital governance</kwd>
        <kwd>data-driven regulation</kwd>
        <kwd>algorithmic fairness</kwd>
        <kwd>socio-technical systems</kwd>
        <kwd>composite indices</kwd>
        <kwd>AI governance frameworks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Related Works</title>
      <p>As AI becomes a common tool for creating and managing information, it has brought attention to
ethical and social issues that were less visible before. AI improves access to information and offers
strong analytical tools, but it also creates concerns about fairness, responsibility, and how much
automation is acceptable. In addition, all of this is influenced by different national regulations and
the self-regulation practices of digital platforms. These issues are becoming increasingly important
as data-driven methods spread.</p>
      <p>AI is not inherently positive or harmful; it mainly amplifies cultural and economic patterns that
already exist [1]. Many recent studies now put ethical issues at the center of discussions about AI
governance. Yet scholars consistently identify several structural gaps.</p>
      <p>First, nearly all ethical guidelines repeat similar values [2; 3], while remaining largely declarative unless they are embedded within real mechanisms of accountability and law [4; 5]. At the same time, users rarely evaluate technologies through formal normative categories [6], which means that ethical assessment cannot be reduced to internal indicators alone; taken in isolation, such metrics do not guarantee any improvement in societal welfare [7].</p>
      <p>ORCID: 0000-0001-8923-5639 (V. Sokurenko); 0000-0001-6417-3689 (V. Vysotska); 0000-0003-2952-4070 (D. Shmatkov); 0000-0002-7755-3071 (Y. Onishchenko); 0000-0001-8693-3802 (D. Pashniev); 0000-0002-1354-4838 (N. Vnukova); 0000-0001-9124-9511 (D. Davydenko); 0000-0003-4264-2303 (M. Bulakh)</p>
      <p>Second, most AI-ethics documents are drafted by industrial and academic actors without
meaningful involvement of independent regulators [8]. In practice, AI systems may even reinforce
existing inequalities: when the volume of data expands without being redistributed, the broader
system gradually loses openness and fairness [9; 10].</p>
      <p>In reality, new technologies don’t necessarily make society better unless the conditions around
them also improve [11]. Scientific work today needs to focus on how digital ethics should be
shaped and organized [12]. AI should be understood as a function of its specific social environment
rather than as an abstract, universal entity [13]. But how, then, can we measure the actual
integration of ethical concerns into AI development?</p>
      <p>AI is not a “carrier” of ethics or a subject of rights; failures emerge not from the technology
itself but from the contexts in which it is deployed [13]. Today, the dominant shift is toward
building socio-technical systems within which algorithms operate [14], and such systems require
rigorous justification and continual development.</p>
      <p>In practice, these dynamics show that ethical questions cannot be examined in isolation from
the broader conditions in which AI systems emerge. What may first look like a clear set of ethical
rules turns out, on closer examination, to be a mix of legal, economic, and technical factors shaped
by data access and unequal technological resources. Good intentions alone rarely change anything
if the system does not support them. The final outcomes depend on how rules, institutions, and
practical conditions of using AI work together.</p>
      <p>In this sense, measuring “AI ethics” becomes an analytical challenge in its own right. It requires
identifying the values that are formally declared as well as assessing how these values are
implemented in practice, whether they are supported by enforceable mechanisms, and how they
affect public welfare in environments marked by unequal access to data, computational resources,
and regulatory authority. Recent research shows that the ethical quality of AI depends on how
declared principles interact with real social and technical conditions.</p>
      <p>The task of this study is to develop a methodology for conceptualizing and operationalizing the
ethical parameters of AI through analytical models that make it possible to evaluate the degree to
which ethical principles are integrated into AI development at the level of systems and digital
platforms.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>This study uses a combination of normative analysis and quantitative modelling to examine how
different regulatory settings influence the societal welfare produced by innovation. The central idea
is that the ethical value of technological progress depends on how different legal approaches
interact and on the overall fairness and quality of governance. To operationalize this concept, we constructed the Societal Welfare Index (Wsoc) as an integrated composite indicator covering three functional dimensions: Access (A), Innovation (I), and Fairness (F).</p>
      <p>The first level of the study focuses on three jurisdictions representing distinct regulatory
paradigms of AI and knowledge governance:</p>
      <p>(<xref ref-type="bibr" rid="ref1">1</xref>) a case-law-driven system with flexible fair use principles and high reliance on judicial interpretation of text and data mining (TDM) (USA);</p>
      <p>(<xref ref-type="bibr" rid="ref2">2</xref>) a rule-based system with dual TDM exceptions and explicit opt-out mechanisms for right holders (EU);</p>
      <p>(<xref ref-type="bibr" rid="ref3">3</xref>) a radical model combining a narrow scientific TDM exception with the legal recognition of AI-generated outputs as protectable results (Ukraine).</p>
      <p>These regimes were selected because they represent a wide range of legal approaches, from
more open to more restrictive, which makes it possible to compare how different designs affect the
ethics of innovation.</p>
      <p>Empirical data were compiled from internationally recognized datasets published between 2022 and 2025 to ensure both temporal consistency and cross-country comparability. Each Wsoc component aggregates several publicly available indices:</p>
      <p>Access (A): Open Data Maturity Index [15], OECD Digital Government Index [16], Freedom on the Net [17], AI Readiness Index [18].</p>
      <p>Innovation (I): Global Innovation Index [19], World Digital Competitiveness Ranking [20], R&amp;D Expenditure as a percentage of GDP [21–23], and Business R&amp;D in ICT [24].</p>
      <p>Fairness (F): Rule of Law Index [25], Corruption Perceptions Index [26], Human Development Index [27], and Gini Index [28].</p>
      <p>Not all indicators were available for each jurisdiction. For example, some digital competitiveness
and OECD indices omit non-member states. To maintain comparability without arbitrary
interpolation, subset normalization was applied: each country’s sub-index (A, I, or F) was calculated
as the arithmetic mean of all available metrics within that dimension. This method keeps the
dataset consistent while recognizing that some countries simply have fewer indicators available.</p>
      <p>Missing values were left blank rather than replaced by regional or global averages. This
conservative method avoids inflating the apparent innovation potential in data-scarce
environments. For the Gini coefficient, values from different years (2020–2023) were used without
temporal correction, as short-term inequality shifts are statistically minor relative to cross-country
differences.</p>
      <p>All indicators were transformed to a common 0–1 scale using min–max normalization across
the three jurisdictions for each variable:</p>
      <p>X′ = (X − X_min) / (X_max − X_min)</p>
      <p>where X is the raw indicator and X′ its normalized equivalent. For indices where lower values represent a more favorable outcome, such as corruption or Gini, the direction was inverted before normalization.</p>
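<p>A minimal sketch of this normalization step (in Python, which the paper itself does not use; the Gini values below are illustrative, not the study's data):</p>

```python
def normalize(values, invert=False):
    """Min-max normalize raw indicator values to [0, 1].

    If invert=True (for indices such as corruption or Gini, where lower raw
    values are more favorable), the direction is flipped first so that 1.0
    always denotes the most favorable outcome.
    """
    if invert:
        values = [-v for v in values]
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: all jurisdictions identical
        return [0.5 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Illustrative (not the paper's) Gini values for three jurisdictions:
print(normalize([39.8, 29.6, 25.6], invert=True))  # lowest Gini maps to 1.0
```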
      <p>After normalization, sub-indices were computed as the arithmetic mean of the available normalized indicators within each dimension, and the dimensions were aggregated with equal weights:</p>
      <p>D = (1/n_D) Σ X′_i, D ∈ {A, I, F} (<xref ref-type="bibr" rid="ref1">1</xref>)</p>
      <p>W_soc = (A + I + F) / 3 (<xref ref-type="bibr" rid="ref2">2</xref>)</p>
      <p>The baseline societal welfare potential (Wsoc) was then represented as a normalized composite
indicator combining the three foundational dimensions (access (A), innovation (I), and fairness (F)).
Given the absence of empirically validated weights or a theoretically dominant dimension, the
model adopts an equal-weight aggregation approach. This approach keeps the model neutral,
avoids arbitrary weighting, and offers a clear baseline for later adjustments.</p>
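<p>Under the stated rules (subset normalization within each dimension, equal-weight aggregation), the baseline index can be sketched as follows; the metric values are placeholders, and None marks an index unavailable for a jurisdiction:</p>

```python
def sub_index(normalized_metrics):
    """Arithmetic mean of the available normalized metrics in one dimension,
    skipping missing values (None) per the subset-normalization rule."""
    present = [m for m in normalized_metrics if m is not None]
    return sum(present) / len(present)

def wsoc_baseline(A, I, F):
    """Equal-weight baseline composite, W_soc = (A + I + F) / 3."""
    return (A + I + F) / 3

# Placeholder normalized metrics for one jurisdiction:
A = sub_index([0.9, 0.8, None, 0.7])   # one access index unavailable
I = sub_index([0.95, 0.9, 0.85])
F = sub_index([0.6, 0.5, 0.7, 0.55])
print(wsoc_baseline(A, I, F))
```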
      <p>To connect empirical data with normative evaluation, two regulatory parameters were
introduced:
τ – freedom of text and data mining (TDM), representing openness of research ecosystems;
ρ – rigidity of proprietary rights for AI-generated results, reflecting the degree of legal exclusivity.</p>
      <p>These parameters were assigned through expert calibration based on comparative analysis of
copyright exceptions, AI governance frameworks, and judicial interpretation. The initial
legal-adjusted model integrates both parameters:</p>
      <p>The coefficient (1 − 0.5ρ) introduces a moderate sensitivity of fairness to proprietary rigidity. A
fully restrictive regime (ρ = 1) reduces the fairness component by half rather than eliminating it,
reflecting that even strong intellectual property systems retain some public oversight. Division by
three ensures equal weighting of the three Wsoc dimensions so that the index measures overall
societal balance rather than dominance of any single factor.</p>
      <p>To reflect the ethical dependency between openness and institutional integrity, the Access
dimension was adjusted by Fairness.</p>
      <p>The legal-adjusted Wsoc represents a version of the index in which legal parameters, such as TDM permissions and intellectual property rights, are incorporated without linking them to institutional performance. In this version, the model uses legal rules as they are written and does not consider how well they work in practice: the legal environment is taken at face value, without assessing whether its rules translate into real enforcement or societal outcomes.</p>
      <p>The legal-adjusted value of Wsoc is calculated as:</p>
      <p>W_soc^legal = (A · τ + I + F · (1 − 0.5 ρ)) / 3 (<xref ref-type="bibr" rid="ref3">3</xref>)</p>
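<p>Equation (3) can be expressed directly; this is a sketch in Python (not part of the study's toolchain), with illustrative inputs:</p>

```python
def wsoc_legal(A, I, F, tau, rho):
    """Legal-adjusted welfare, equation (3):
    W_soc^legal = (A*tau + I + F*(1 - 0.5*rho)) / 3.
    tau scales access by TDM freedom; (1 - 0.5*rho) reduces fairness
    by at most half, even under a fully restrictive regime (rho = 1)."""
    return (A * tau + I + F * (1 - 0.5 * rho)) / 3

# A fully restrictive regime keeps half of the fairness component:
print(wsoc_legal(A=0.8, I=0.9, F=0.6, tau=1.0, rho=1.0))
```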
      <p>The institutionally-adjusted Wsoc integrates both legal parameters and the institutional
environment. This version captures the realized effectiveness of legal norms by incorporating the
quality of institutions that mediate their impact. The institutional context determines whether
formal rights, such as TDM exceptions or intellectual property protections, translate into actual
incentives for innovation or remain only declarative.</p>
      <p>Accordingly, the institutionally-adjusted Wsoc reflects a configuration in which the effect of
legal rules on access, innovation, and fairness depends on institutional performance. This allows
the index to approximate the marginal contribution of legal frameworks under real-world
institutional constraints.</p>
      <p>The institutionally-adjusted value is calculated as:</p>
      <p>W_soc^inst = (A · τ · F + I + F · (1 − 0.5 ρ)) / 3 (4)</p>
      <p>Put simply, TDM openness helps only when the institutional framework is strong enough to
support it.</p>
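<p>The conditional effect of openness on institutional quality in equation (4) can be illustrated with a short sketch (Python, placeholder values only):</p>

```python
def wsoc_inst(A, I, F, tau, rho):
    """Institutionally-adjusted welfare, equation (4):
    W_soc^inst = (A*tau*F + I + F*(1 - 0.5*rho)) / 3.
    The extra factor F makes the payoff of TDM openness conditional on
    institutional quality: as F approaches 0, openness contributes little."""
    return (A * tau * F + I + F * (1 - 0.5 * rho)) / 3

# Same legal settings, different institutional quality (placeholder values):
strong = wsoc_inst(A=0.8, I=0.9, F=0.9, tau=1.0, rho=0.5)
weak = wsoc_inst(A=0.8, I=0.9, F=0.3, tau=1.0, rho=0.5)
print(strong, weak)  # openness pays off only under strong institutions
```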
      <p>Given the small sample size of jurisdictions, statistical outlier detection was not applicable. The relative ranking of models remained consistent, confirming that cross-jurisdictional differences are driven by institutional and regulatory structures rather than arbitrary parameterization.</p>
    </sec>
    <sec id="sec-results">
      <title>3. Results</title>
      <p>To enable a comparative assessment of structural differences across countries, the following tables present key indicators of access, innovation capacity, and institutional fairness for the United States, the EU average, and Ukraine (Tables 1–3).</p>
      <p>To assess internal consistency and robustness, two non-parametric validation tests were
conducted. Concordance of rankings across indicators within each dimension was measured using
Kendall’s W, yielding 0.71 for Access, 0.92 for Innovation, and 0.28 for Fairness. These results
indicate strong alignment of innovation metrics and moderate coherence for digital access, while
fairness indicators remain heterogeneous. Robustness was further evaluated through a
leave-one-out procedure: recalculating the overall Wsoc after sequentially excluding each indicator produced
a maximum deviation of 0.07–0.09 across jurisdictions, confirming that no single metric
disproportionately influenced the final results. In subsequent stages of modeling, the legal and
licensing variables (τ, ρ, λ) were introduced precisely in the Access and Fairness dimensions, where
lower coherence revealed structural and normative asymmetries requiring theoretical adjustment.</p>
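<p>Both validation checks are standard and easy to reproduce; a sketch under the same definitions (m indicator rankings of n jurisdictions for Kendall's W; the input rankings below are hypothetical):</p>

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance: W = 12*S / (m^2 * (n^3 - n)),
    for m rankings (one per indicator) of the same n jurisdictions."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def leave_one_out_deviation(metrics):
    """Largest change in a dimension mean when each metric is dropped in turn."""
    base = sum(metrics) / len(metrics)
    n = len(metrics)
    return max(abs((sum(metrics) - m) / (n - 1) - base) for m in metrics)

print(kendalls_w([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # perfect agreement: 1.0
```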
      <p>To illustrate how different legal traditions shape the ethical-institutional performance of data
and AI governance, the table below compares three jurisdictional model types across the core
parameters and the resulting aggregated welfare scores (Table 4).</p>
      <p>Overall, the data point to a clear pattern shaped mainly by institutional and economic factors:
jurisdictions with higher innovation capacity and lower regulatory rigidity generate substantially
greater levels of societal welfare. Introducing legal parameters lowers the Wsoc values because
formal rules create certain limitations. Adding the institutional adjustment reduces them even
more, as it reflects how data and AI governance actually works in practice. Overall, the Wsoc model
confirms that institutional quality and legal flexibility are decisive factors shaping a jurisdiction’s
ability to produce societal value in the digital economy.</p>
    </sec>
    <sec id="sec-3">
      <title>4. Discussion</title>
      <sec id="sec-3-1">
        <title>4.1. Structural Findings and Cross-Parameter Dynamics</title>
        <p>Although many international indices can be used to assess innovation, access, and fairness, the aim
of this study is not to highlight any specific dataset. The goal is to show the usefulness of modeling
based on regulatory parameters. Regardless of which indicators are chosen, the structural
relationships revealed by the Wsoc framework remain consistent and theoretically meaningful.</p>
        <p>The numbers show that the parameters influence each other in ways that aren’t always
straightforward: even small shifts in rigidity or licensing influence generate disproportional
changes in the fairness-adjusted welfare scores, revealing structural sensitivities that would remain
hidden without formal modeling.</p>
        <p>The results highlight several insights. First, innovation outcomes emerge from the interaction
between access and institutions: freedom of TDM alone does not generate societal benefit unless
supported by transparent, fair, and stable institutional conditions that enable equitable
participation in innovation processes.</p>
        <p>Second, regulatory models embody different ethical logics of openness and control – flexible
interpretative systems facilitate experimentation, codified rule-based regimes enhance
predictability while constraining creativity, and hybrid or sui generis frameworks extend
proprietary boundaries in ways that redefine the moral circulation of knowledge. The models
confirm that legal overprotection reduces experimentation and limits the social diffusion of
knowledge [29].</p>
        <p>Third, institutional strength essentially determines whether the rules work as intended: the gap
between legal potential and realized welfare reflects the limiting effect of corruption, inequality, or
low administrative capacity on the ethical materialization of innovation. Even strong legal or
ethical frameworks fail to increase welfare when not backed by real institutions of accountability
[30].</p>
        <p>Fourth, societal welfare depends on the equilibrium among access, innovation mechanisms, and
fair institutions; only when these components align does technological progress translate into
collective ethical value.</p>
        <p>Finally, the link between access and fairness plays an important ethical role: when governance
is strong, even moderate openness brings real benefits, but when governance is weak, more open
rules lose much of their value.</p>
        <p>Building on these insights, the analysis then turns to the redistributive dimension of AI
ecosystems. Societal welfare is shaped by law and institutions and also by the contractual
allocation of rights within platform-based environments.</p>
      </sec>
      <sec id="sec-3-2">
        <title>4.2. Scenario Modeling Under Platform-Mediated Intellectual Property Regimes</title>
        <p>Today we observe the growing importance of contractual governance, which increasingly
functions as a substitute for statutory law [31]. Technological innovation is outpacing the law,
transforming ownership and value into fluid, contract-based constructs [32]. Recent developments
in AI governance show that licensing agreements increasingly substitute or neutralize the effect of
national copyright law. Across major generative AI platforms such as OpenAI, Midjourney, and
Runway, the terms of service often provide the platform with broad rights to use, modify, or
commercially exploit user-generated outputs, irrespective of whether national law treats such
results as protectable works. As a result, platform licensing operates like a separate layer of rules
which can effectively supersede or diminish the statutory rules on authorship and ownership. In
the contemporary digital environment, proprietary control is often expanded beyond the limits of
traditional law [33; 34], in particular, through contractual and technical mechanisms.</p>
        <p>This situation creates ethical and structural imbalances. On one hand, global licensing rules
make access more uniform across countries. On the other hand, they shift control from individual
creators to platform operators. As a result, platforms start to function like separate regulators that
decide how rights and benefits are distributed.</p>
        <p>To represent this interaction in the model, licensing is treated as a factor modifying the effective
rigidity of property rights. Licensing frameworks redistribute control: they determine how strongly
proprietary structures dominate over public-interest principles. The corresponding parameter λ
(licensing influence) interacts with the baseline rigidity coefficient ρ to produce an adjusted value
of rights rigidity:</p>
        <p>ρ_eff = ρ + α · s · λ (5)
where s (sign parameter) ∈ {−1, +1} and α (sensitivity coefficient) ≈ 0.5.</p>
        <p>The value of α was set at 0.5 to represent a balanced sensitivity level, ensuring that the licensing environment influences, but does not dominate, the legal components of the model; this midpoint allows the adjustment to reflect structural differences without overpowering the underlying legal parameters.</p>
        <p>Accordingly, fairness becomes:</p>
        <p>F′ = F · (1 − 0.5 · ρ_eff) (6)</p>
        <p>And the platform-adjusted model of societal welfare is defined as:</p>
        <p>W_soc^plat = (A · τ · F + I + F · (1 − 0.5 · ρ_eff)) / 3 (7)</p>
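<p>The platform-adjusted computation chains the definitions in the text (ρ_eff = ρ + α·s·λ, then fairness corrected through ρ_eff); a sketch with placeholder inputs:</p>

```python
ALPHA = 0.5  # sensitivity coefficient alpha from the text

def rho_eff(rho, lam, s):
    """Effective rights rigidity: rho_eff = rho + alpha * s * lam,
    where s = -1 if licensing counteracts rigidity, +1 if it reinforces it."""
    assert s in (-1, 1)
    return rho + ALPHA * s * lam

def wsoc_plat(A, I, F, tau, rho, lam, s):
    """Platform-adjusted welfare, equation (7):
    W_soc^plat = (A*tau*F + I + F*(1 - 0.5*rho_eff)) / 3."""
    r = rho_eff(rho, lam, s)
    return (A * tau * F + I + F * (1 - 0.5 * r)) / 3

# Identical nominal rigidity, opposite licensing direction (placeholder values):
reinforced = wsoc_plat(0.8, 0.9, 0.7, 1.0, 0.4, 0.6, +1)
counteracted = wsoc_plat(0.8, 0.9, 0.7, 1.0, 0.4, 0.6, -1)
print(reinforced, counteracted)  # reinforcing licensing lowers adjusted welfare
```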
        <p>For this part of the analysis, we consider abstract models grounded in well-known philosophical approaches to intellectual property [35–42] (Table 5).</p>
        <p>Because the variables presented in Table 5 are derived from abstract conceptual approaches
rather than empirical statistical data, scenario-based forecasting is the most appropriate method for
this type of modeling.</p>
        <p>The scenario-based framework makes it possible to evaluate how different configurations of
TDM freedom (τ), rights rigidity (ρ), licensing influence (λ), and directional effect (s) shape societal
welfare. Once the criteria were ranked and weighted, each scenario could be compared within a
unified parameter space. This enables an assessment of how far each model deviates from an
ethically balanced configuration of openness, innovation capacity, and fairness.</p>
        <p>To operationalize these theoretical parameters, the Decision Making Helper program [38; 39] was applied to transform abstract variables into structured pairwise comparisons. The analysis revealed that the most influential factor differentiating the scenarios is the effective rigidity coefficient
ρ_eff, which incorporates the corrective effect of licensing. Because ρ_eff depends on formal rights
as well as on the sign of regulatory direction (s), two scenarios with identical nominal rigidity (ρ)
may produce substantially different fairness scores (F′). This is particularly evident where licensing
either counteracts or reinforces proprietary control.</p>
        <p>Scenarios grounded in openness, such as Epistemic Openness and Regulated Utilitarianism,
initially show strong potential due to high τ and low ρ. However, their resulting welfare values
decrease once the negative directional parameter (s = −1) is applied, which lowers adjusted fairness.
In practical terms, this means that legal openness alone does not guarantee higher societal welfare
if the licensing environment introduces uncertainty or weakens the effectiveness of governance
mechanisms. These models therefore lose part of their theoretical advantage when fairness is
corrected through ρ_eff.</p>
        <p>Scenarios characterized by high rigidity (Data Sovereignty and AI Proprietarianism) display the
lowest welfare values. Their elevated ρ and λ, combined with a positive directional effect (s = +1),
sharply increase ρ_eff, resulting in significant reductions of F′. Although such models may enhance
control over data in the short term, their structural configuration leads to a substantial decline in
adjusted welfare. The drop in F′ across these scenarios illustrates how intensified rights rigidity
disproportionately suppresses overall societal benefit.</p>
        <p>The scenario that produced the most balanced and favorable outcome is Economic Incentivism.
Its medium τ, moderate ρ, and relatively high λ, together with a positive directional parameter,
create a configuration in which proprietary rigidity is partially offset by licensing redistribution,
while openness remains sufficient to support innovation. As a result, this scenario outperforms all
others. This finding demonstrates that neither maximal openness nor maximal control is optimal;
instead, societal welfare is highest under a moderate equilibrium between them.</p>
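<p>The scenario comparison can be reproduced mechanically once parameter values are fixed; in the sketch below the per-scenario values are placeholders (the study's Table 5 values are not reproduced here), so the resulting ranking illustrates the method only:</p>

```python
def welfare(tau, rho, lam, s, A=0.7, I=0.8, F=0.6, alpha=0.5):
    """Platform-adjusted welfare (equation (7)); A, I, F are held fixed
    so that scenarios differ only in their legal parameters."""
    r_eff = rho + alpha * s * lam
    return (A * tau * F + I + F * (1 - 0.5 * r_eff)) / 3

# Placeholder parameterizations for the five scenario types named in the text:
scenarios = {
    "Epistemic Openness": dict(tau=1.0, rho=0.1, lam=0.3, s=-1),
    "Regulated Utilitarianism": dict(tau=0.9, rho=0.3, lam=0.4, s=-1),
    "Economic Incentivism": dict(tau=0.6, rho=0.5, lam=0.7, s=+1),
    "Data Sovereignty": dict(tau=0.3, rho=0.8, lam=0.6, s=+1),
    "AI Proprietarianism": dict(tau=0.2, rho=0.9, lam=0.7, s=+1),
}

ranked = sorted(scenarios, key=lambda name: welfare(**scenarios[name]), reverse=True)
for name in ranked:
    print(name, round(welfare(**scenarios[name]), 3))
```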
        <p>The tabular results and the 3D visualization (Figure 1) further confirm the central position of Economic Incentivism within the parameter landscape.</p>
        <p>The scenario tests show that ethical and legal factors affect each other in complex ways. Small
increases in rigidity or licensing influence may produce disproportionately large shifts in welfare
outcomes once certain parameter boundaries are crossed. This sensitivity highlights the
importance of maintaining balanced regulatory ecosystems in which institutional quality, access
conditions, and proprietary rules remain aligned. It also demonstrates that ethical trade-offs are
embedded within structural design choices: shifts toward either extreme (high-control or
high-openness) destabilize overall welfare more rapidly than incremental changes in moderate
configurations.</p>
        <p>The use of this software tool represents a scientifically grounded approach, as it applies the
analytic hierarchy process developed by Thomas Saaty and relies on systematic pairwise
comparison of criteria [38; 39]. Its computational procedure evaluates weights, checks internal
consistency, and aggregates heterogeneous indicators into a coherent decision model, ensuring that
the scenario outcomes are derived from a transparent and reproducible mathematical method. This
transforms abstract parameters into rigorously processed analytical results, reinforcing the
reliability of the modelling framework.</p>
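<p>The underlying AHP step reduces to deriving priority weights from a pairwise-comparison matrix; a minimal sketch using the row geometric-mean approximation of Saaty's method (the judgment values are hypothetical, not those entered into the tool):</p>

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights via row geometric means:
    M[i][j] states how many times criterion i outweighs criterion j."""
    n = len(M)
    geo_means = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical pairwise judgments over three criteria (tau, rho, lambda):
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
print(ahp_weights(M))  # weights sum to 1; the first criterion dominates here
```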
        <p>The strongest scenario is the one that stays in a balanced middle zone, where no single
parameter dominates and none of the indicators reach extreme values. This geometric stability
across dimensions explains why the scenario maintains its lead even when fairness and rigidity
parameters are adjusted. It also shows that welfare decreases non-linearly when models move
toward maximal restriction or maximal openness, underscoring the value of a balanced regulatory
architecture.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusions</title>
      <p>The study shows that ethical evaluation becomes more informative when ethical categories are
approached as adjustable parameters that can be combined and tested in different ways. By using
relevant contemporary parameters we showed how different ethical configurations behave under
analytical conditions. This approach illustrates that ethical principles can be explored dynamically.</p>
      <p>Through systematic variation of the parameters, the study highlighted the complexity of ethical
integration in AI development. Some configurations that appear normatively attractive in theory
became less effective once fairness adjustments were applied; others performed better only when
licensing counterbalanced rigidity. These findings indicate that the ethical behavior of AI systems
is shaped by how various legal and ethical factors interact within a broader socio-technical context.
The model thus reveals patterns that would remain hidden without parameter-level
experimentation.</p>
      <p>The scenario analysis indicates that moderately balanced approaches deliver the most stable
ethical results. The exploration of contrasting scenarios, ranging from high-openness models to
highly proprietary ones, allowed us to observe how societal welfare reacts when ethical parameters
are pushed to their limits. The fact that the strongest result emerged from a scenario positioned
between extremes shows that ethical AI governance is an exercise in calibration.</p>
      <p>A key limitation of this study lies in the heterogeneous and evolving nature of the underlying
data. Many of the indicators used to construct the Access, Innovation, and Fairness dimensions are
updated annually and may shift considerably over short periods of time, which affects longitudinal
stability. In addition, the number of available indicators differs across jurisdictions. As a result,
each jurisdiction is assessed on a slightly different subset of criteria, which introduces structural
asymmetry. The model mitigates this through subset normalization, but it cannot fully compensate
for the uneven availability and granularity of empirical data. Finally, any scenario-based modeling
inevitably abstracts away contextual nuances; therefore, the results should be interpreted as
indicative patterns rather than fixed or exhaustive representations of real-world institutional
dynamics.</p>
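      <p>The subset-normalization step mentioned above can be illustrated with a minimal sketch
(hypothetical indicator names and a simple mean over available values; not the authors'
implementation), in which each jurisdiction is scored only over the indicators actually available
for it:</p>

```python
# Minimal sketch of subset normalization (illustrative only): a jurisdiction's
# dimension score is the mean over its *available* indicators, so jurisdictions
# assessed on different indicator subsets remain roughly comparable.
def subset_normalize(indicators):
    """indicators: dict of indicator name to a value in [0, 1], or None if missing."""
    available = [v for v in indicators.values() if v is not None]
    if not available:
        return None  # no data at all for this jurisdiction
    return sum(available) / len(available)

# Two jurisdictions with different indicator coverage each get a [0, 1] score.
score_a = subset_normalize({"open_data": 0.8, "ai_readiness": 0.6, "rnd_share": None})
score_b = subset_normalize({"open_data": 0.7, "ai_readiness": None, "rnd_share": None})
```

<p>As the text notes, such rescaling keeps scores comparable across jurisdictions but cannot
compensate for the uneven availability and granularity of the underlying data.</p>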
      <p>Overall, the study’s central contribution lies in demonstrating that ethical parameters can be
operationalized through analytical modeling and stress-tested through controlled variation. By
experimenting with these variables, we created a methodological pathway for evaluating the degree
to which ethical principles are actually embedded in AI development. This approach opens the
door to more empirical, adaptive, and evidence-based research on AI ethics, where ethical concepts
can be measured, compared, and refined through iterative modeling.</p>
      <p>Future research may expand this framework in several directions. First, the parameter set itself
can be refined by incorporating additional ethical and regulatory variables, such as transparency
requirements, model accountability mechanisms, or data provenance standards. Second, the
methodology could be extended to a larger and more diverse set of jurisdictions once more
consistent datasets become available, enabling robust cross-country comparisons. Third,
integrating temporal dynamics (tracking how changes in law, licensing practices, or institutional
conditions modify welfare outcomes over time) would allow the model to function as an early
diagnostic tool for emerging regulatory trends. Finally, empirical validation through case studies,
industry datasets, or real-world regulatory interventions would help determine how accurately the
modeled interactions reflect the practical integration of ethical principles into AI development.</p>
      <p>Beyond its conceptual contribution, the framework can also support practical assessments of
regulatory initiatives, national AI strategies, and the governance models implemented by digital
platforms and ecosystems, offering a structured method for evaluating how ethical principles are
operationalized in real policy environments.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>Generative AI tools (ChatGPT 5.1) were used to assist in verification of statistical information
collected by the authors, language editing, paraphrasing, and stylistic refinement of the manuscript.
All conceptual contributions, data selection, methodological decisions, modeling design,
interpretations, and conclusions were developed entirely by the authors. The authors reviewed and
validated all AI-generated suggestions and take full responsibility for the content of the final text.</p>
    </sec>
  </body>
</article>