<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Cornell International Law Journal 52 (2019). URL: https://ww3.lawschool.cornell.edu/research/ILJ/upload/Shur</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1201/9780429263842</article-id>
      <title-group>
        <article-title>Mathematical model of legal regulation of the spread of information influences in social networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Tkachenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anna Ilyenko</string-name>
          <email>anna.ilienko@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yelyzaveta Meleshko</string-name>
          <email>elismeleshko@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Ulichev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henryk Noga</string-name>
          <email>henryk.noga@up.krakow.pl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Central Ukrainian National Technical University</institution>
          ,
          <addr-line>Universytetskyi Ave., 8, Kropyvnytskyi, 25000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State University "Kyiv Aviation Institute"</institution>
          ,
          <addr-line>Liubomyra Huzara Ave., 1, Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of the National Education Commission</institution>
          ,
          <addr-line>Podchorazych Str., 2, Krakow, 30-084</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>290</volume>
      <fpage>41</fpage>
      <lpage>47</lpage>
      <abstract>
        <p>This paper presents a theoretical mathematical model for the legal regulation of information flows in social networks, aiming to balance freedom of speech with public safety and national security. The model introduces regulation as a variable that affects key societal indicators and enables simulation-based optimization to prevent both excessive censorship and information disorder. It incorporates contextual risk factors such as disinformation levels, user behavior history, geolocation, and event timing to dynamically assess the impact of regulation. Although the model shows promising conceptual potential, it remains primarily theoretical. No empirical data from real social networks have been utilized; all experiments are based on simulated scenarios. This limitation is explicitly acknowledged. Future work should focus on empirical validation using open-source or synthetic datasets and on adapting the model to different socio-cultural and legal contexts. The presented framework lays the foundation for practical tools in digital policymaking to support efforts aimed at maintaining a fair and secure information environment.</p>
      </abstract>
      <kwd-group>
        <kwd>social media</kwd>
        <kwd>information influence</kwd>
        <kwd>legal regulation</kwd>
        <kwd>disinformation</kwd>
        <kwd>cybersecurity</kwd>
        <kwd>data protection</kwd>
        <kwd>freedom of speech</kwd>
        <kwd>platform liability</kwd>
        <kwd>international standards</kwd>
        <kwd>information security</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In today’s information society, social media platforms play a crucial role in shaping public opinion,
political processes, and national security [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. On the one hand, they enable the free exchange of ideas,
foster the democratization of the information space, and facilitate the exercise of freedom of speech.
On the other hand, these platforms serve as a medium for spreading harmful content, disinformation,
and propaganda that can compromise public safety and threaten national interests.
      </p>
      <p>This dual nature creates an urgent need for legal mechanisms that strike a balance between protecting
fundamental human rights and safeguarding society against potential threats. To achieve such a balance,
an integrated approach is required that can quantitatively assess the impact of various levels of legal
regulation on social processes. The proposed mathematical model establishes functional dependencies
that describe how indices such as freedom of speech, public safety, and the protection of national
interests change in response to regulatory measures, while also accounting for the influence of harmful
information flows.</p>
      <p>Accordingly, this study aims to develop a mathematical tool for analyzing and optimizing legal
regulation in the digital era—a task of great relevance today, as the speed of information dissemination
demands new approaches to ensuring national security without undermining freedom of expression.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>Current research on mathematical modeling of legal regulation in the context of social networks
reveals that while no comprehensive models have been developed specifically for legal regulation,
there has been considerable work on related topics, particularly in modeling information influence,
misinformation, and social dynamics on digital platforms. Many of these studies employ epidemiological
models, fractional-order systems, or power-law-based graph theory to examine the diffusion of news,
rumors, and social interactions. These approaches provide valuable foundations for developing future
models that incorporate regulatory mechanisms, especially those seeking to quantify and simulate the
effects of legal interventions.</p>
      <p>
        For instance, O. S. Ulichev investigates information influence dissemination in social networks within
the framework of information confrontation. His work presents a mathematical model that accounts
for diverse behavioral strategies of network nodes, addressing the gap in existing models that often
overlook individual node behavior [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Joydip Dhar, Ankur Jain, and Vijay K. Gupta (2016) propose a mathematical model that applies
epidemiological techniques to describe the spread of news and rumors. They introduce detection criteria
for rumors and incorporate media awareness as a control strategy to limit rumor dissemination [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Michael Muhlmeyer and Shaurya Agarwal (2021), in their book Information Spread in a Social Media
Age, provide a comprehensive overview of how information influence spreads across social media and
how social networks operate in general [4].</p>
      <p>Y. Nakonechna presents a mathematical model that employs the RnSIR framework to describe
the spread of information concealment, integrating both user behavior and regulatory restrictions.
This model is useful for simulating scenarios involving information manipulation or concealment in
cyberspace [5].</p>
      <p>Khrapov and Stolbova utilize an extended SIR model to analyze comment dynamics on Facebook
posts, offering analytical insights into public reaction and information virality on social media platforms
[6].</p>
      <p>Expanding these approaches, Meira Shur-Ofry and Gadi Fibich (2019) explore how legal innovations
spread by applying diffusion models, suggesting that legal norms propagate similarly to cultural ideas
within a network and are influenced by structural and informational factors [7].</p>
      <p>W. Huleihel and Y. Refael (2024) propose a mathematical audit framework for assessing influence
on social media, incorporating legal obligations and ethical standards into the quantification of digital
influence [8].</p>
      <p>Zhu, Guan, and Zhang (2020) introduce a delayed rumor propagation model in complex networks,
which reflects real-world delays in rumor spread and public response. Their model can be extended to
include points where legal regulation may intervene [9].</p>
      <p>El Bhih, Yaagoub, Rachik, and Allali (2024) develop a system of differential equations to simultaneously
model the spread of rumors and counter-rumors, allowing simulation of regulatory responses such as
fact-checking and public announcements [10].</p>
      <p>Butts, Bollman, and Murillo (2023) present a model evaluating the effectiveness of disinformation
mitigation policies through simulations that demonstrate how interventions like account bans or content
labeling may reduce the spread of false information [11].</p>
      <p>John Lang (2016) models behavioral responses in networks by incorporating social factors into
decision-making, creating opportunities to integrate normative and regulatory influences into such
models [12].</p>
      <p>Cristina Francalanci and colleagues (2015) study how power-law distributions reflect imbalances in
information dissemination, suggesting that influence-driven dynamics could be moderated through
network design or regulatory frameworks [13].</p>
      <p>Collectively, these studies form a multidisciplinary foundation for developing models that synthesize
legal theory with mathematical modeling. They suggest that legal constraints can be introduced
into information diffusion models via system parameters such as compliance thresholds, probabilistic
enforcement, or behavioral nudges. Incorporating regulatory norms into models of information diffusion
is essential for building robust frameworks to better understand, simulate, and ultimately manage
information dynamics in regulated digital environments [14, 15, 16].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Development of a mathematical model</title>
      <p>There are many different variants of mathematical approaches to creating models, because all models
in one way or another use different variables to calculate the corresponding weighting coefficients and
parameters. Below we propose one such variant: a mathematical model that describes the compromise
between freedom of speech and ensuring public security and national interests when regulating
information influences in social networks. The model is formulated as an optimization problem in which
the choice of the "intensity of regulation" (denoted by R) affects both the level of freedom of speech
and the level of security and protection of national interests.</p>
      <p>The first variable of the model is the level of regulation R ∈ [0, 1], a parameter characterizing
the intensity of the applied measures (from no regulation at R = 0 to maximum regulation at R = 1).</p>
      <p>The freedom of speech index should depend on the level of regulation. Let us denote it as F(R). For
F(R) we propose the following dependence:
F(R) = 1 − R^α, (1)
where α &gt; 0 is the coefficient that determines how quickly the loss of freedom of speech increases with
increasing levels of regulation.</p>
      <p>The public safety index should also depend on the level of regulation. Let us denote it as S(R). Since
even without additional security measures there can be a certain basic level S0, we propose:
S(R) = S0 + (Smax − S0) · (1 − e^(−β·R)), (2)
where S0 — the basic level of security (at R = 0), Smax ∈ (0, 1] — the maximum level of safety that can be
approached with increasing regulation, β &gt; 0 — a parameter that determines the growth rate of the
security index.</p>
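      <p>As a minimal numerical sketch (an illustration, not part of the original paper; the parameter values are assumed), the saturating security index can be evaluated as follows:</p>

```python
import math

def security_index(R, S0=0.5, S_max=0.95, beta=2.0):
    """Public safety index S(R) = S0 + (S_max - S0) * (1 - e^(-beta * R)).

    R:     regulation level in [0, 1]
    S0:    baseline safety with no regulation (R = 0)
    S_max: safety level approached under maximum regulation
    beta:  growth rate of the index
    """
    return S0 + (S_max - S0) * (1.0 - math.exp(-beta * R))

# With no regulation the index stays at the baseline; it rises toward
# S_max (but never exceeds it) as R grows.
print(security_index(0.0))               # 0.5
print(round(security_index(1.0), 3))
```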
      <p>After the public security index, we introduce the index of protection of national interests, which we
will denote by N(R). We define N(R) similarly to (2):
N(R) = N0 + (Nmax − N0) · (1 − e^(−γ·R)), (3)
where N0 — the basic level of protection of national interests (at R = 0), Nmax ∈ (0, 1] — the maximum
possible level of protection of national interests, which can be approached with increasing regulation,
γ &gt; 0 — the growth rate.</p>
      <p>Let us consider Smax in more detail. If maximum regulation provides almost complete security, we
can take Smax = 1. However, if security cannot reach an ideal level even with maximum regulation
(for example, due to the impossibility of complete control over information flows), or if it is necessary to
model a more realistic case where even the most stringent measures do not provide absolute security,
then we can choose Smax &lt; 1, for example Smax ≈ 0.95, to take into account residual risks. The same
is also true for Nmax.</p>
      <p>Now consider the interpretation of β. When β is small (for example, β ≈ 0.5…1), safety increases
slowly with an increasing regulation level R, i.e., a significant increase in regulation is required to
significantly improve safety. Whereas if β is large (for example, β ≈ 3…5), safety quickly approaches its
maximum even with a small regulation level R, i.e., a small increase in regulation significantly improves
safety.</p>
      <p>There are several ways to determine β in practical applications. The first is to analyze real data: if
there are statistics on the relationship between the level of regulation (for example, limiting disinformation)
and the level of public safety (reduced social tension or a reduced number of offenses), β can be selected
by approximating the empirical data.</p>
      <p>The second is an expert assessment of the impact of the regulation on safety. If the regulation has a
negligible effect on safety, a smaller β is chosen (for example, from 0.5 to 1). If the effect is very
significant, a larger β is chosen (for example, from 3 to 5).</p>
      <p>Optimization methods are then used to select the β that best matches historical data or expected
security behavior [17, 18].</p>
      <p>For a clearer understanding, let us define a specific case for S(R) by formula (2) (the same reasoning
applies to (3)). Assume the initial level of safety is S0 = 0.5 and the maximum level is Smax = 0.95. If
the regulation is very effective and provides about 90% of the safety gain already at R = 0.5, then β can be
about 3 (since e^(−3·0.5) ≈ 0.22, which quickly reduces the residual safety gap). If the regulation effect
is weaker, and at R = 0.5 safety increases only to about 0.7, then β ≈ 1 (since e^(−1·0.5) ≈ 0.61, i.e. the
decline occurs more slowly).</p>
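      <p>These two calibration cases can be checked numerically; a small sketch, reusing the saturating form of the security index with S0 = 0.5 and Smax = 0.95 as in the example:</p>

```python
import math

def S(R, beta, S0=0.5, S_max=0.95):
    """Security index from formula (2) with the example's baseline and ceiling."""
    return S0 + (S_max - S0) * (1.0 - math.exp(-beta * R))

# Weak regulation effect: with beta = 1, safety at R = 0.5 reaches only ~0.68,
# close to the 0.7 quoted in the text (residual factor e^-0.5 ~ 0.61).
print(round(S(0.5, beta=1.0), 2))
# Strong regulation effect: with beta = 3 the residual factor e^-1.5 ~ 0.22,
# so most of the attainable gain is realized already at R = 0.5.
print(round(S(0.5, beta=3.0), 2))
```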
      <p>This confirms that the value of β depends on real-world conditions and must be determined through
modeling or data analysis.</p>
      <p>Another important part of the mathematical model of legal regulation is the intensity of harmful
informational influences. Let us denote it as I (a value normalized from 0 to 1). Regulation reduces effective
harm, so let us introduce the effective level of harm:
I_eff(R) = I · (1 − R), (4)
where I ∈ [0, 1] — the initial level of harmful information impact (for example, the level of
disinformation or propaganda without regulation), I_eff(R) — the residual level of the information threat after
the introduction of a certain level of regulation.</p>
      <p>If we consider partial cases: when there is no regulation (R = 0), then I_eff(0) =
I · (1 − 0) = I, that is, the entire harmful information impact remains unchanged. Conversely, if the
regulation is maximum (R = 1), then I_eff(1) = I · (1 − 1) = 0, which means the complete elimination
of the information threat.</p>
      <p>At intermediate values of R, the threat level decreases linearly with the level of regulation. This
formula shows that with increasing regulation, the level of the information threat decreases.</p>
      <p>The initial level of harmful information impact I is a quantitative assessment of the level of
disinformation, propaganda, or other undesirable information phenomena in social networks before the
application of any legal regulation (R = 0). Automated text-analysis methods can be used to estimate the
level of harmful information impact I and dynamically adjust it in accordance with the level of security S(R)
and the protection of national interests N(R). Such methods allow assessing the content of publications and
determining the level of their threat to public security and national interests.</p>
      <p>Content assessment is performed using NLP models that analyze the use of words or phrases
associated with calls for violence, hate speech, threats, or disinformation. Using classification models
such as deep neural networks or statistical analysis methods, the risk level of the content P(t) is
determined. In this case, regulation R affects the probability of distribution of such content, gradually
reducing its effectiveness.</p>
      <p>When assessing the threat through text, formula (4) for I_eff(R) takes on certain changes:
I_eff(R) = P(t) · (1 − R), (5)
where t — the text of the publication, P(t) — the probability that the text is malicious. The text’s
contribution to the overall threat can then be written as
I(t) = P(t) · I. (6)
Formula (6) shows the contribution of the text to the overall information threat.</p>
      <p>If such a model identifies the text as malicious with probability 0.8, then the initial level is I = 0.8,
and after adjustment (R = 0.3) the effective level is I_eff(R) = 0.8 · (1 − 0.3) = 0.56.</p>
      <p>User history is an additional risk factor that can increase the initial value I. If a user has previously
and repeatedly violated the platform policy or spread misinformation, their content may receive a higher
risk level. The impact of this factor can be expressed through a function of the user’s history of actions,
which is integrated into the model for calculating the overall information threat. In this case,
formula (5) takes on a slightly different form, taking into account the user history indicator:
I_eff(R, u) = P(t) · (1 − R) · (1 + H(u)), (7)
where H(u) — the user’s violation history (the number of previous blocks, normalized to [0, 1]).</p>
      <p>If the user has 5 previous violations, then H(u) = 0.5, which increases the risk by 50%. For example,
if I_eff(R) = 0.56, then considering the history of the user’s violations it increases significantly:
I_eff(R, u) = 0.56 · 1.5 = 0.84. However, this approach requires the creation of a database into which
the data on the violations of individual users will be entered.</p>
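      <p>A sketch of formulas (5) and (7) with the worked numbers from the text (the function names are ours):</p>

```python
def effective_threat(p_text, R):
    """Formula (5): residual threat I_eff(R) = P(t) * (1 - R)."""
    return p_text * (1.0 - R)

def effective_threat_with_history(p_text, R, h_user):
    """Formula (7): I_eff(R, u) = P(t) * (1 - R) * (1 + H(u)), H(u) in [0, 1]."""
    return effective_threat(p_text, R) * (1.0 + h_user)

# Text flagged as malicious with probability 0.8, regulation R = 0.3:
print(round(effective_threat(0.8, 0.3), 2))                    # 0.56
# The same user has 5 prior violations (H(u) = 0.5), raising the risk by 50%:
print(round(effective_threat_with_history(0.8, 0.3, 0.5), 2))  # 0.84
```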
      <p>Content analysis for disinformation is based on fact-checking against knowledge bases, which allows
assessing the correspondence of statements on social networks to real events. If fake information is
detected, its contribution to the level of the information threat I increases, which may require
an increase in the level of regulation R to achieve an acceptable level of security S(R) and protection
of national interests N(R).</p>
      <p>For calculations, we introduce a variable that characterizes the misinformation of the text:
D(t) = 1 − Sim(t, F), (8)
where t — the text of the publication, Sim(t, F) — the similarity of the statement with the base
of verified facts, and D(t) determines the level of information distortion (the greater D(t), the greater
the risk). In this case, the original formula (5) receives new input data for the calculations:
I_eff(R) = P(t) · (1 − R) · (1 + D(t)). (9)</p>
      <p>If the similarity with the facts is 0.2, then D(t) = 1 − 0.2 = 0.8, which greatly increases the risk
level. When the misinformation indicator of the text is neglected, the residual level of the information
threat is I_eff(R) = 0.56, but after considering misinformation I_eff(R) = 0.56 · 1.8 ≈ 1.01.</p>
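      <p>A sketch of formulas (8) and (9) with the example’s numbers (function names are ours):</p>

```python
def misinformation_level(similarity):
    """Formula (8): D(t) = 1 - Sim(t, F); distortion grows as similarity falls."""
    return 1.0 - similarity

def effective_threat_with_misinfo(p_text, R, similarity):
    """Formula (9): I_eff(R) = P(t) * (1 - R) * (1 + D(t))."""
    return p_text * (1.0 - R) * (1.0 + misinformation_level(similarity))

# Similarity with verified facts is only 0.2, so D(t) = 0.8:
print(round(effective_threat_with_misinfo(0.8, 0.3, 0.2), 2))  # 1.01
```

Note that the result can exceed 1, as in the text’s example; clipping back to [0, 1] would be one possible normalization step.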
      <p>Geolocation analysis allows identifying potentially risky content depending on its location. If the
content comes from regions of increased information threat or geopolitical conflict, the likelihood of its
manipulative nature increases, which affects the initial value of the information threat I. Geolocation
data can be used as one of the factors in the function determining the level of security S(R).</p>
      <p>The geolocation risk indicator G(l) can take one of two values: 1, if the content location l belongs to
the list of regions with a potential threat, or 0, if it does not:
G(l) = 1 if l ∈ conflict zone, 0 otherwise. (10)
When G(l) is considered, the effective-threat formula takes the following form:
I_eff(R, l) = P(t) · (1 − R) · (1 + G(l)). (11)</p>
      <p>It is worth noting that a gradient scale can be used to assess the risk of content based on geographical
location, which considers the distance from the conflict zone. Such a scale allows for a dynamic
assessment of the risk level G(l) based on location, rather than simply setting it to 0 or 1, which was
chosen above for the simplicity of the example.</p>
      <p>With gradient estimation, formula (10) for G(l) takes on a new form, an exponential decay that ensures
a gradual reduction in risk with distance from the conflict zone:
G(l) = e^(−λ·d(l, z)), (12)
where G(l) — the risk level at point l, d(l, z) — the distance (in kilometers) from the content location l to the
nearest conflict zone z, and λ — the coefficient of risk decline with increasing distance (the larger λ, the
faster the impact decreases).</p>
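      <p>A sketch of formula (12); the decay coefficient λ = 0.006 is our illustrative choice, picked so the values roughly reproduce the gradient scale described below:</p>

```python
import math

def geo_risk(distance_km, decay=0.006):
    """Formula (12): G(l) = e^(-lambda * d(l, z)), with d measured in kilometers."""
    return math.exp(-decay * distance_km)

# Risk fades smoothly with distance from the conflict zone.
for d in (0, 10, 50, 100, 500):
    print(f"{d:>4} km -> G = {geo_risk(d):.2f}")
```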
      <p>Based on formula (12) and a list of distances in kilometers, we can create an approximate gradient
scale table for our study, which can be used instead of formula (12) by reading off the necessary
data analytically:</p>
      <sec id="sec-3-1">
        <title>This scale table works as follows:</title>
        <p>1. Content published at the very epicenter of the conflict (d = 0) receives the maximum risk (G(l) = 1).
2. At distances up to 10 km, the risk is still high (G(l) = 0.9), as the information may be related to the conflict.
3. At 50–100 km, the risk is significantly reduced, as the possibility of direct contact with the conflict
is smaller.</p>
        <p>If the content is published at more than 500 km, its connection with the conflict situation is unlikely,
but still not equal to 0 (G(l) = 0.05).</p>
        <p>In a software implementation, if a faster reduction of risk with distance is desired, λ can be
increased (for example, to 0.01); conversely, for a slower reduction, λ should be decreased
(for example, to 0.002).</p>
        <p>The context of events is also an important factor in modeling the level of threat. Information posted
during critical events, such as elections or social unrest, carries an increased risk of manipulation and
disinformation, which affects the security function S(R) and the information threat I_eff(R).
Considering the time factor allows dynamically adjusting the level of control R, ensuring that
measures are appropriate to the current situation.</p>
        <p>To consider the increased risk during periods of critical events, we denote it by T(t) and use an
exponential decay function (13):
T(t) = e^(−δ·(t − t0)), (13)
where T(t) — the risk weighting coefficient depending on time, t0 — the moment of the beginning of the critical
events, δ — the rate of decline in the impact of the events.</p>
        <p>A graph of this function shows how the risk of an event decreases over time: at low δ the risk
remains significant for a long time, whereas at high δ the risk decreases rapidly after the peak
of the event.</p>
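        <p>Formula (13) as a sketch, with illustrative δ values showing the slow- and fast-decay regimes described above:</p>

```python
import math

def event_risk_weight(t, t0, delta):
    """Formula (13): T(t) = e^(-delta * (t - t0)) for t >= t0."""
    return math.exp(-delta * (t - t0))

# Ten time units after the event peak:
print(round(event_risk_weight(10, 0, 0.05), 2))  # low delta: risk persists
print(round(event_risk_weight(10, 0, 0.50), 2))  # high delta: risk fades fast
```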
        <p>When considering T(t), formula (5) takes the same form as (11), with the exception that instead of
G(l) we use T(t):
I_eff(R, t) = P(t) · (1 − R) · (1 + T(t)). (14)</p>
        <p>Thus, the integration of all the above factors into a single mathematical model allows for a dynamic approach
to regulating information flows in social networks. Regulation R is defined as a function of the risk of the
information impact I and the level of threat to security S(R) and national interests N(R), which allows
achieving the optimal balance between protecting society and preserving freedom of speech.</p>
        <p>After obtaining all the necessary intermediate data, we can proceed to calculate the national security
index:
NS(R) = S(R) + N(R) − k · I_eff(R), (15)
where NS(R) — the overall level of national security at the regulation level R, S(R) — the level of
public safety, obtained from formula (2), N(R) — the level of protection of national interests, obtained
from formula (3), I_eff(R) — the residual level of the information threat, obtained from formulas (4, 5), k &gt; 0
— the weight coefficient of the impact of the information threat on national security (it determines how
much harmful information affects the overall level of security).</p>
        <p>Formula (15) works as follows. The positive effects of regulation, S(R) and N(R), are added; they
increase with increasing regulation R, since a more controlled space improves public safety and state
protection.</p>
        <p>The negative effect of the information threat, I_eff(R), is subtracted; it decreases as R grows but still has
a negative impact. This negative impact is scaled by the factor k (for example, if disinformation has a
serious impact on security, k will be large).</p>
        <p>Regarding the interpretation of k, the following can be said:</p>
        <p>If k is small (k ≈ 0.1), the information threat has a weak impact on security. This may mean that
society has a high level of resilience to disinformation, or that other factors (the economy, military power)
have a greater impact on security.</p>
        <p>If k is average (k ≈ 0.5), the information threat is an important but not a determining factor. For
example, in a country with strong media and a developed fact-checking system, information attacks
can be a serious problem, but not a destructive one.</p>
        <p>If k is large (k ⩾ 1), the information threat has a critical impact on national security. This may be
typical of situations where fake news, propaganda, or cyberattacks can cause mass panic, political
destabilization, or even social unrest.</p>
        <p>Let us consider an example using selected numerical values. Assume that public safety S(R) = 0.7,
protection of national interests N(R) = 0.6, and the information threat after regulation I_eff(R) = 0.5.
We consider three options for k, listed in Table 2. When k is
large, even a moderate level of information threat can significantly reduce security.</p>
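        <p>The three options for k can be recomputed directly from formula (15) with the values just quoted (a sketch):</p>

```python
def national_security(S, N, I_eff, k):
    """Formula (15): NS(R) = S(R) + N(R) - k * I_eff(R)."""
    return S + N - k * I_eff

# S(R) = 0.7, N(R) = 0.6, I_eff(R) = 0.5 for three sensitivities k:
for k in (0.1, 0.5, 1.0):
    print(k, round(national_security(0.7, 0.6, 0.5, k), 2))
```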
        <p>If we choose k for a real situation, then in countries with strong information hygiene (k ≈ 0.1–0.3)
disinformation has a weak impact. In countries experiencing hybrid warfare or low levels of trust in the
media (k ≈ 0.5–0.8), information attacks can seriously affect security.</p>
        <p>In situations of crisis or information warfare (k ⩾ 1), even one massive fake can have catastrophic
consequences.</p>
        <p>Based on the above data and calculations, we see that the coefficient k determines how sensitive
national security is to information threats. It should be adjusted depending on the level of media literacy
of the population, the intensity of information attacks, and the socio-political situation.</p>
        <p>Returning to formula (15), it holds that if R is small, the level of security NS(R) will be low due to
the high influence of I_eff(R). If R increases, then S(R) and N(R) grow while I_eff(R) decreases, so NS(R)
increases. If the coefficient k is large, it means that the information threat has a critical impact on national
security, and even a small residual level of I_eff(R) can significantly reduce NS(R).</p>
        <p>To reflect the trade-off between the different components, we introduce a balance index B(R), which
is a weighted sum of three indices:
B(R) = w_F · F(R) + w_S · S(R) + w_N · N(R), (16)
where w_F &gt; 0 — the weight of freedom of speech F(R), w_S &gt; 0 — the importance of public
safety S(R), w_N &gt; 0 — the importance of protecting national interests N(R), and w_F + w_S + w_N = 1 —
the normalization of the weight coefficients.</p>
        <p>This model allows us to reflect the priorities of society and/or government in matters of information
regulation.</p>
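        <p>Formula (16) as a sketch; the index values fed in below are illustrative assumptions, not values from the text:</p>

```python
def balance_index(F, S, N, w_F, w_S, w_N):
    """Formula (16): B(R) = w_F*F(R) + w_S*S(R) + w_N*N(R), weights sum to 1."""
    assert abs(w_F + w_S + w_N - 1.0) < 1e-9, "weights must be normalized"
    return w_F * F + w_S * S + w_N * N

# Freedom-of-speech-priority weights versus security-priority weights,
# for the same (assumed) index values F = 0.8, S = 0.6, N = 0.5:
print(round(balance_index(0.8, 0.6, 0.5, 0.6, 0.2, 0.2), 2))
print(round(balance_index(0.8, 0.6, 0.5, 0.2, 0.6, 0.2), 2))
```

Comparing the two weightings shows how the same underlying indices yield different balance scores depending on societal priorities.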
        <p>Let’s consider certain cases of weighting factors and their meanings.</p>
        <p>For the first case, we choose a high value of w_F (i.e., the priority of freedom of speech). If
w_F ≈ 0.6…0.8, and w_S, w_N are correspondingly small, this means that freedom of speech is the main
social priority for this state. At the same time, regulation R should be minimal to avoid censorship,
and even if the level of information threats is high, society is ready to take risks to preserve a free
information space. As an example, we can cite liberal democracies (such as the USA or the EU),
where freedom of speech is a key value.</p>
        <p>For the second case, we choose a high value of w_S (the priority of public safety). If w_S ≈ 0.5…0.7, then
this means that for this state the main goal is to prevent internal threats, such as social conflicts, violence,
and terrorist attacks. Information that incites crimes or contributes to radicalization must
be strictly controlled, and temporary restrictions on freedom of speech are possible for the sake
of protecting society. As examples, we can cite countries fighting terrorism, such as France
after the terrorist attacks of 2015, or states in conditions of political instability.</p>
        <p>For the third case, consider a high value of w_N (the priority of national interests).
If w_N ≈ 0.5…0.8, this means that the state focuses on protection from external threats,
such as propaganda, information wars, and espionage. In a country with such a system,
strict measures are taken against foreign information campaigns, including the possible blocking
or restriction of content that harms the state. As examples, we can point to Ukraine during the war
(blocking Russian channels and propaganda) and China (content filtering to control public opinion).</p>
        <p>However, these three individual cases have a specific bias towards one or another’s weight coeficient.
To solve the problem, it is worth choosing the optimal weights  ,  ,  . For this, we ofer three
ready-made models:</p>
        <p>A model for democracy (strong freedom of speech, medium security controls), with minimal regulation of information:</p>
        <p>w_F = 0.6, w_S = 0.2, w_N = 0.2. (17)</p>
        <p>A national security model (medium freedom of speech, strong security control), in which the balance between protection and freedom is maintained:</p>
        <p>w_F = 0.3, w_S = 0.4, w_N = 0.3. (18)</p>
        <p>An authoritarian control model (strong censorship, priority of state interests), in which maximum control of information occurs:</p>
        <p>w_F = 0.1, w_S = 0.4, w_N = 0.5. (19)</p>
        <p>However, these three models are only examples; the choice of weighting factors of course depends on the specific policy of the state and the level of threats. In conditions of war or crisis, freedom of speech may temporarily give way to security and national interests.</p>
        <p>The balance index (16) is the last element of the theoretical description of this mathematical model. In general, the mathematical model of legal regulation of the spread of information influences in social networks takes the following form.</p>
        <p>Index functions:
S(R) = S_0 + (S_max − S_0)(1 − e^(−βR)), (20)
N(R) = N_0 + (N_max − N_0)(1 − e^(−γR)), (21)
H_ef(R) = H · (1 − R). (22)</p>
        <p>National security index:
NS(R) = S(R) + N(R) − δ · H_ef(R). (23)</p>
        <p>Balance index:
B(R) = w_F · F(R) + w_S · S(R) + w_N · N(R). (24)</p>
        <p>Optimization problem: maximize B(R), under the conditions: F(R) ≥ F_min, NS(R) ≥ NS_min.</p>
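As an illustration only (not part of the formal model), the index functions and the composite indices above can be sketched in code; the parameter defaults below are the illustrative values used later for the numerical example, and the function names are our own.

```python
import math

def freedom_index(R, alpha=1.0):
    """F(R) = 1 - alpha * R: freedom of speech falls linearly with regulation."""
    return 1.0 - alpha * R

def security_index(R, S0=0.5, Smax=1.0, beta=2.0):
    """S(R) = S0 + (Smax - S0)(1 - e^(-beta R)): security grows with saturation."""
    return S0 + (Smax - S0) * (1.0 - math.exp(-beta * R))

def national_interest_index(R, N0=0.5, Nmax=1.0, gamma=2.0):
    """N(R) = N0 + (Nmax - N0)(1 - e^(-gamma R))."""
    return N0 + (Nmax - N0) * (1.0 - math.exp(-gamma * R))

def effective_harm(R, H=0.8):
    """H_ef(R) = H * (1 - R): residual harmful influence after regulation."""
    return H * (1.0 - R)

def national_security_index(R, delta=0.5):
    """NS(R) = S(R) + N(R) - delta * H_ef(R)."""
    return security_index(R) + national_interest_index(R) - delta * effective_harm(R)

def balance_index(R, wF=0.4, wS=0.3, wN=0.3):
    """B(R) = wF * F(R) + wS * S(R) + wN * N(R)."""
    return wF * freedom_index(R) + wS * security_index(R) + wN * national_interest_index(R)
```

With these defaults, B(0) = 0.4·1 + 0.3·0.5 + 0.3·0.5 = 0.7 and NS(0) = 0.5 + 0.5 − 0.5·0.8 = 0.6, i.e. with no regulation the freedom index is maximal but national security sits below the 0.9 threshold used later.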
        <p>By adjusting the parameters (α, β, γ, S_0, S_max, N_0, N_max, H, δ, as well as the weighting factors w_F, w_S, w_N), it is possible to adapt the model to specific conditions, societal priorities and current threats. We obtain a tool that can help legislators and experts determine the optimal level of legal regulation to achieve a balance between fundamental rights and ensuring national security.</p>
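To see how the choice of weighting factors changes the assessment, a small sketch comparing the three example weight profiles (17)-(19) at a fixed regulation level; the index functions and parameter values are the illustrative ones used in the text, and the profile labels are informal.

```python
import math

def indices(R):
    """Illustrative index functions: F(R), S(R), N(R), all on [0, 1]."""
    F = 1.0 - R                                  # freedom of speech
    S = 0.5 + 0.5 * (1.0 - math.exp(-2.0 * R))   # security
    N = 0.5 + 0.5 * (1.0 - math.exp(-2.0 * R))   # national interests
    return F, S, N

# (w_F, w_S, w_N) from equations (17), (18), (19)
profiles = {
    "democracy":         (0.6, 0.2, 0.2),
    "national security": (0.3, 0.4, 0.3),
    "authoritarian":     (0.1, 0.4, 0.5),
}

def balance(R, wF, wS, wN):
    F, S, N = indices(R)
    return wF * F + wS * S + wN * N

for name, w in profiles.items():
    print(f"{name}: B(0.5) = {balance(0.5, *w):.3f}")
```

At a mid-range regulation level the control-oriented profiles score higher than the democracy profile, which is exactly the bias the weights are meant to encode.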
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Formulation of the optimization problem</title>
      <p>After creating the theoretical basis of the mathematical model, we proceed to verify it on practical data. To do this, we set the task: find the optimal level of regulation R* and the corresponding index values such that the balance index B(R) is maximized, while the levels of freedom of speech and national security do not fall below the specified minimum threshold values.</p>
      <p>If we formalize this into a mathematical formula, we get the following:</p>
      <p>Find R* ∈ [0, 1] which maximizes B(R), under the conditions: F(R) ≥ F_min, NS(R) ≥ NS_min. That is: R* = arg max B(R) subject to F(R) ≥ F_min and NS(R) ≥ NS_min.</p>
      <p>For illustration, we perform the parameterization using the following values (all values are normalized to [0, 1]):
1. Freedom of speech index parameter: α = 1 (linear dependence: F(R) = 1 − R).
2. Security index parameters: S_0 = 0.5, S_max = 1, β = 2 (i.e., S(R) = 0.5 + 0.5(1 − e^(−2R))).
3. National interest index parameters: N_0 = 0.5, N_max = 1, γ = 2 (i.e., N(R) = 0.5 + 0.5(1 − e^(−2R))).
4. Harmful informational influence: H = 0.8.
5. Damage coefficient: δ = 0.5.
6. Weighting factors: w_F = 0.4, w_S = 0.3, w_N = 0.3.
7. Threshold values: F_min = 0.7 and NS_min = 0.9.</p>
      <p>We perform the calculations according to the selected parameterization.
Freedom of speech index: F(R) = 1 − R (α = 1). (25)
Safety index: S(R) = 0.5 + 0.5(1 − e^(−2R)). (26)
National interest index: N(R) = 0.5 + 0.5(1 − e^(−2R)). (27)
Effective harmfulness of information: H_ef(R) = 0.8 · (1 − R). (28)
National security index: NS(R) = S(R) + N(R) − 0.5 · H_ef(R). (29)
Balance index: B(R) = 0.4(1 − R) + 0.3[0.5 + 0.5(1 − e^(−2R))] + 0.3[0.5 + 0.5(1 − e^(−2R))]. (30)</p>
      <p>Finding the optimal R* is done by calculating the derivative dB/dR and setting dB/dR = 0, taking into account the additional restrictions F(R) ≥ F_min ⇒ F(R) ≥ 0.7 (i.e., freedom of speech should not fall below 70% of the maximum) and NS(R) ≥ NS_min ⇒ NS(R) ≥ 0.9 (national security must be at least 90% of the maximum).</p>
      <p>With this interpretation, as R increases, S(R) and N(R) increase (because measures to filter harmful influences are strengthened and control over the information space grows), while F(R) decreases, because restrictions that may affect freedom of speech are strengthened.</p>
      <p>Determining the optimal value R* provides a compromise: a sufficient level of security and protection of national interests is achieved, while freedom of speech does not fall below the minimum permissible level.</p>
      <p>The national security index NS(R) accounts for both the positive effect of increasing S(R) and N(R) and the negative impact of the residual harmfulness H_ef(R). The condition is that NS(R) exceeds the critical threshold NS_min.</p>
      <p>Let us consider several examples of calculations for different values of R (numerical values are approximate); we list them in Table 3.</p>
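The constrained maximization described in this section can be approximated by a simple grid search; this is a sketch under the illustrative parameterization (α = 1, S_0 = N_0 = 0.5, β = γ = 2, H = 0.8, δ = 0.5, weights 0.4/0.3/0.3, F_min = 0.7, NS_min = 0.9), not the authors' computational procedure.

```python
import math

F_MIN, NS_MIN = 0.7, 0.9

def F(R):    return 1.0 - R                                  # eq. (25)
def S(R):    return 0.5 + 0.5 * (1.0 - math.exp(-2.0 * R))   # eq. (26)
def N(R):    return 0.5 + 0.5 * (1.0 - math.exp(-2.0 * R))   # eq. (27)
def H_ef(R): return 0.8 * (1.0 - R)                          # eq. (28)
def NS(R):   return S(R) + N(R) - 0.5 * H_ef(R)              # eq. (29)
def B(R):    return 0.4 * F(R) + 0.3 * S(R) + 0.3 * N(R)     # eq. (30)

def find_optimum(steps=10_000):
    """Scan R in [0, 1]; keep the feasible point with the largest B(R)."""
    best = None
    for i in range(steps + 1):
        R = i / steps
        if F(R) >= F_MIN and NS(R) >= NS_MIN:        # both constraints hold
            if best is None or B(R) > B(best):
                best = R
    return best

R_star = find_optimum()
```

Under this parameterization B(R) = 1 − 0.4R − 0.3e^(−2R), whose unconstrained stationary point R = ln(1.5)/2 ≈ 0.203 satisfies both constraints, so the search returns a value near it with B(R*) ≈ 0.719.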
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The conducted research underscores the importance of developing an integrated legal framework
capable of effectively regulating information flows in social networks, while taking into account digital
dynamics and emerging technical risks. The proposed mathematical model demonstrates how regulatory
intensity can be quantitatively linked to key societal indicators such as freedom of speech, public safety,
and national security. Through optimization of regulatory parameters based on measurable criteria,
policymakers can avoid the extremes of both excessive censorship and uncontrolled information disorder.</p>
      <p>However, it is essential to emphasize that the current version of the model remains primarily
theoretical. No empirical data from real-world social networks were incorporated into this study, and
all simulations were performed using hypothetical scenarios. This limitation is explicitly acknowledged.
Future research efforts should focus on empirical validation using open-source or synthetic datasets
and further adaptation of the model to reflect specific socio-cultural and legal environments.</p>
      <p>Such developments are critical for advancing the practical application of mathematical models in
digital policymaking, allowing legislators to establish regulation levels that strike a sustainable balance
between the protection of fundamental rights and the safeguarding of national security in the digital
era.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Al-Azzeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Hadidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Odarchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gnatyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shevchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <article-title>Analysis of self-similar traffic models in computer networks</article-title>
          ,
          <source>International Review on Modelling and Simulations</source>
          <volume>10</volume>
          (
          <year>2017</year>
          )
          <fpage>328</fpage>
          -
          <lpage>336</lpage>
          . doi:10.15866/iremos.v10i5.12009.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O. S.</given-names>
            <surname>Ulichev</surname>
          </string-name>
          ,
          <article-title>Model and methods of spreading information influences in social networks in conditions of information confrontation</article-title>
          ,
          <source>Ph.D. thesis</source>
          , National Aviation University, Kyiv,
          <year>2021</year>
          .
          Dissertation for the degree of Candidate of Technical Sciences, specialty 21.05.01 “Information Security of the State”.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. K.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <article-title>A mathematical model of news propagation on online social network and a control strategy for rumor spreading</article-title>
          ,
          <source>Social Network Analysis and Mining</source>
          <volume>6</volume>
          (
          <year>2016</year>
          )
          57. doi:10.1007/s13278-016-0366-5.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>