<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Algorithmic curation, information security, and public trust on social platforms: Case studies of TikTok and YouTube</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dinara Akbergen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aidiye Aidarbekov</string-name>
          <email>aidiye.aidarbekov@astanait.edu.kz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ha Jin Hwang</string-name>
          <email>hjhwang@astanait.edu.kz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Astana IT University</institution>
          ,
          <addr-line>Mangilik El Avenue 55/11, 010000 Astana</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Maqsut Narikbayev University</institution>
          ,
          <addr-line>Korgalzhyn Highway 8, 010000 Astana</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>This paper examines how algorithmic curation on social platforms affects information security and public trust. We synthesize recent findings on exposure drift, homogeneity, amplification, coordinated inauthentic behavior, and limits of user control, focusing on YouTube and TikTok. We outline an audit and forensics toolkit that combines black-box and counterfactual experiments with provenance and integrity checks, and we propose an operational workflow for oversight: detect, assess, mitigate, and report. Case studies highlight platform-specific dynamics: on YouTube, risks concentrate in narrow topical corridors and extended recommender-only sessions, with faster adaptation in the sidebar than on the homepage; on TikTok, short video affordances enable rapid niche lock-in, stronger coordination signals, and persistence of unwanted content for some users. We discuss governance options, including exposure diversity constraints, external auditability, and privacy-preserving transparency, and we conclude with priorities for reproducible evaluation.</p>
      </abstract>
      <kwd-group>
        <kwd>algorithmic curation</kwd>
        <kwd>cybersecurity</kwd>
        <kwd>digital forensics</kwd>
        <kwd>recommendation systems</kwd>
        <kwd>amplification</kwd>
        <kwd>echo chambers</kwd>
        <kwd>coordinated inauthentic behavior</kwd>
        <kwd>TikTok</kwd>
        <kwd>YouTube</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Algorithmic curation is now the default gateway to information on major social platforms. TikTok’s
For You Page (FYP) and YouTube’s recommendations optimize for engagement and predicted
relevance, reshaping how users encounter facts and viewpoints. This has direct implications for
cyber-risk management: recommender systems may interact with adversarial tactics (e.g.,
coordinated inauthentic behavior) and organic dynamics (e.g., homophily), leading to amplification,
selective exposure, and erosion of public trust.</p>
      <p>
        On YouTube, evidence points to a nuanced risk profile. User-facing and audit studies report mild
ideological echo chambers and a modest right-leaning drift over longer sessions that follow
recommendations, while finding limited systematic “rabbit holes” into extreme content for average
users [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Causal experiments with counterfactual bots (post-2019) further suggest that
recommendations can, on average, moderate consumption relative to user-driven trajectories;
notably, sidebar suggestions “forget” prior far-right preferences after about 30 videos when users
switch to moderate content [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These findings imply that risks depend on topic, session behavior,
and platform design.
      </p>
      <p>
        On TikTok, the risk surface differs due to short-video formats, rapid trend cycles, and the central
role of the FYP. Recent computational work documents coordinated inauthentic behavior adapted to
video-first affordances (synchronized posting, content reuse, and hashtag-sequence overlaps), which
creates distinct detection challenges [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Qualitative accounts complement this picture with reports
of algorithmic persistence (unwanted content recurring despite negative feedback), raising questions
about user control, trust, and exposure diversity in interest-driven feeds [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        These observations motivate combining audit and measurement with digital forensics and
governance. The audit literature systematizes harm classes (discrimination, distortion, exploitation,
misjudgment) and distills effective methods such as sock puppets, scrapes, and crowd studies, while
noting under-audited domains such as TikTok [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Proposals for platform-supported auditing argue
that vetted researcher access to relevance estimators can reconcile transparency with privacy and
intellectual-property protection, enabling routine oversight [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In parallel, media forensics provides
provenance and integrity tools that are essential for attribution and incident response across
platforms [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>This paper makes three contributions to the study of algorithmic risk and platform governance.
First, we consolidate fragmented findings from auditing and media forensics into a unified
operational taxonomy, identifying how amplification, drift, and coordination signals manifest across
TikTok and YouTube. Second, we propose a reproducible oversight workflow that links detection,
assessment, mitigation, and reporting, bridging auditing methods with digital forensics rather than
treating them as separate research tracks. Third, we extend existing governance models by outlining
implementation-ready levers, such as exposure-diversity constraints and privacy-preserving
transparency, and by specifying the complementary roles of platforms, regulators, and researchers.
Together, these contributions move beyond narrative review and provide a transferable framework
for risk management across recommendation systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <sec id="sec-2-1">
        <title>2.1. Audits of recommendation systems</title>
        <p>
          Prior work on recommendation audits has focused on three recurring questions: (i) what harms are
produced or amplified by ranking and personalization, (ii) how to measure them externally with
black-box or user-task methods, and (iii) how to connect audit findings to platform or regulatory
responses. Early studies on YouTube showed that controlled “recommender-only” sessions can lead
to ideological or topical narrowing compared to mixed navigation, and that different surfaces (sidebar
vs homepage) adapt at different speeds [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Short-video and TikTok-style feeds</title>
        <p>
          Later work extended audit techniques to short-video platforms, especially TikTok, where content is
shorter, signals are denser, and coordination is easier to hide. These studies add CIB-relevant
indicators such as media reuse, synchronized posting in short windows, and repeated hashtag
sequences, and show that unwanted content can reappear despite explicit user feedback [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Media forensics and platform-supported auditing</title>
        <p>
          In parallel, media-forensics research developed provenance, integrity, and cross-platform lineage
tools for incident investigation on social media; platform-supported auditing proposals added
privacy-preserving access for vetted researchers, but this line is often treated separately from
recommendation audits [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Our paper joins the two by using forensics as the evidentiary layer
inside an audit-driven risk workflow.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Compliance-oriented social media risk models</title>
        <p>Compliance-oriented models link incident detection with identity abuse, mis/disinformation, and
reporting obligations, but they rarely model recommendation-specific dynamics such as surface-level
adaptation speed or feedback-suppression efficacy. We make this link explicit so that audit findings
on YouTube and TikTok can be integrated into organizational or platform governance [<xref ref-type="bibr" rid="ref14">14</xref>].</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>We use a scoping review with two concise case studies (YouTube, TikTok). The objective is to
synthesize recent empirical findings and practical auditing/forensics methods relevant to cyber-risk
contexts.</p>
      <p>Corpus and selection: the review is based on a defined corpus of recent studies on
recommendation auditing, short-video platforms, and media forensics, plus the references cited
within those papers. We applied backward and forward snowballing inside this corpus. No additional
database queries were run outside these materials.</p>
      <p>Synthesis approach: we use narrative synthesis. For each study we extract platform, study design,
key measures, main findings, and caveats; we harmonize terminology across sources to avoid
inconsistent definitions of “echo chambers,” “amplification,” and related constructs (a known issue in
prior reviews [<xref ref-type="bibr" rid="ref10">10</xref>]). Specific sources are cited in Section 4.</p>
      <p>Comparison criteria: to keep comparisons concrete across platforms we track five indicators used
in the literature: (i) exposure drift (movement of recommendations relative to a neutral or prior
profile); (ii) homogeneity (narrowing of exposure, measured by variance or entropy); (iii)
amplification (relative lift in visibility for a targeted content class against matched controls); (iv)
coordinated inauthentic behavior signals (posting synchrony, media reuse, hashtag-sequence
overlap, dense clusters after graph pruning); (v) feedback suppression efficacy (how quickly
unwanted content declines after negative feedback or switching behavior). Table 1 summarizes these
indicators with their definitions and measurement approaches.</p>
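      <p>To make these definitions operational, the homogeneity and amplification indicators can be
computed directly from a logged session. The following is a minimal Python sketch under our own
assumptions (the list-of-topic-labels log format is invented for illustration, and one minus
normalized Shannon entropy is only one defensible choice of homogeneity score):</p>
      <preformat>
import math
from collections import Counter

def homogeneity(topic_labels):
    """Exposure homogeneity as 1 minus the normalized Shannon entropy
    of topic labels in one session (1.0 = a single topic dominates)."""
    counts = Counter(topic_labels)
    n = len(topic_labels)
    if len(counts) == 1:
        return 1.0
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1.0 - entropy / math.log2(len(counts))

def amplification_lift(treated_share, control_share):
    """Relative lift in visibility of a target content class
    versus a matched control condition (0.0 = no amplification)."""
    return treated_share / control_share - 1.0

# Toy session: 10 recommendations, 7 of them on one topic.
session = ["vaccines"] * 7 + ["sports", "music", "news"]
print(round(homogeneity(session), 2))  # 0.32: moderate narrowing
print(amplification_lift(0.30, 0.10))  # 2.0: three times the control visibility
      </preformat>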
      <sec id="sec-3-1">
        <title>Indicator</title>
      </sec>
      <sec id="sec-3-2">
        <title>Definition</title>
      </sec>
      <sec id="sec-3-3">
        <title>Measurement Case studies: the YouTube case study summarizes evidence on exposure drift, exposure diversity, and “forgetting” dynamics of sidebar and homepage recommendations from user-task audits and</title>
        <p>counterfactual-bot experiments. The TikTok case study summarizes FYP mechanics, computational
CIB detection adapted to short video, and qualitative evidence on recurring unwanted content
despite user signals. Each case study states the mechanism, the indicator it maps to, and the most
robust findings.</p>
        <p>Scope and ethics: time focus is 2021–2025 with selective foundational items when needed. We
analyze published studies and public audit artifacts only; no personal data are processed and no
interaction with platforms is performed.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Synthesis of evidence</title>
      <sec id="sec-4-1">
        <title>4.1. Exposure patterns and exposure drift</title>
        <p>
          Recommendation feeds set the default order of exposure and shape session trajectories. On YouTube,
large user-based and audit studies find mild ideological echo chambers and a small right-leaning drift
during longer sessions that follow recommendations; effects vary by topic and interaction style [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
Counterfactual-bot experiments indicate that, on average, recommendations moderate partisan
consumption relative to user-chosen paths. The sidebar “forgets” prior far-right preferences after
about 30 consecutive views of moderate content, while the homepage adapts more slowly [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. In
short-video settings such as TikTok, a single stream and rapid feedback accelerate exposure
narrowing once a niche is established [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Amplification and coordinated activity</title>
        <p>
          Two mechanisms dominate risk. Algorithmic amplification can raise the visibility of low-credibility
items under specific viewing patterns, although exposure to credible counter-content and mixed
watch behavior can disrupt emerging bubbles on YouTube [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Coordinated inauthentic behavior on
TikTok exploits video-first affordances, including synchronized posting within short windows,
media reuse, and characteristic hashtag sequences that form dense, short-lived clusters capable of
steering audiences quickly [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
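        <p>As an illustration of how such coordination signals can be screened, the following Python
sketch groups posts that reuse the same media within a short time window. The 60-second window,
the (account, timestamp, media_hash) input format, and the three-account threshold are our own
illustrative assumptions, not values from the cited studies, and a match is a screen for review rather
than proof of inauthenticity:</p>
        <preformat>
from collections import defaultdict

def synchrony_clusters(posts, window_s=60, min_accounts=3):
    """Group posts that reuse the same media within a short time window:
    a coarse screen for coordination, not proof of inauthenticity."""
    by_media = defaultdict(list)
    for account, ts, media_hash in posts:
        by_media[media_hash].append((ts, account))
    clusters = []
    for media_hash, items in by_media.items():
        items.sort()
        start = 0
        for end in range(len(items)):
            # Shrink the window until it spans at most window_s seconds.
            while items[end][0] - items[start][0] > window_s:
                start += 1
            accounts = {acc for _, acc in items[start:end + 1]}
            if len(accounts) >= min_accounts:
                clusters.append((media_hash, sorted(accounts)))
                break
    return clusters

posts = [("a1", 0, "h42"), ("a2", 15, "h42"), ("a3", 40, "h42"),
         ("b1", 0, "h99"), ("b2", 4000, "h99")]
print(synchrony_clusters(posts))  # [('h42', ['a1', 'a2', 'a3'])]
        </preformat>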
      </sec>
      <sec id="sec-4-3">
        <title>4.3. User feedback efficacy</title>
        <p>
          User signals do not always reset the feed effectively. Qualitative studies on TikTok describe
algorithmic persistence, where unwanted content reappears despite “not interested,” dislikes, or
blocking, which reduces perceived control and trust in the feed [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Likely contributors include weak
weighting of negative feedback relative to watch time and replays, aggregation of signals at the
template or sound level rather than the single video, and short feedback cycles that prioritize recency
and engagement over explicit preferences. In practice, users report adopting workarounds such as
rapid scrolling, switching topics, or taking short breaks, yet these tactics produce inconsistent results.
For evaluation, a practical metric is the feedback suppression efficacy rate, defined as the share of
targeted items that disappear from the next N recommendations after explicit negative feedback,
together with time to return if the class resurfaces.
        </p>
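        <p>A minimal sketch of these two measures, assuming a simple hypothetical log format (an
ordered list of item-class labels observed after the negative-feedback event):</p>
        <preformat>
def suppression_efficacy(recs_after_feedback, target_class, n=20):
    """Share of the next n recommendations that avoid the disliked
    class after explicit negative feedback (1.0 = fully suppressed)."""
    window = recs_after_feedback[:n]
    hits = sum(1 for item in window if item == target_class)
    return 1.0 - hits / len(window)

def time_to_return(recs_after_feedback, target_class):
    """1-based position of the first reappearance, or None if absent."""
    for i, item in enumerate(recs_after_feedback, start=1):
        if item == target_class:
            return i
    return None

# Toy feed observed after a "not interested" signal on "diet" content.
feed = ["cooking", "news", "diet", "cooking", "diet", "music"] + ["news"] * 14
print(suppression_efficacy(feed, "diet"))  # 0.9: 2 of 20 items recur
print(time_to_return(feed, "diet"))        # 3: the class resurfaces quickly
        </preformat>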
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Methods for auditing and forensics</title>
      <sec id="sec-5-1">
        <title>5.1. Audit designs</title>
        <p>
          The aim of auditing is to elicit recommendations under controlled conditions and measure core
indicators (exposure drift, homogeneity, amplification, feedback suppression efficacy). Black-box
audits rely on scripted agents or structured user tasks that traverse specific surfaces (YouTube
homepage, sidebar, autoplay; TikTok FYP). Protocols should predefine seed topics, interaction rules,
and session length; useful tasks include bubble-burst tests that inject credible counter-content and
switch-to-moderate sequences to observe adaptation. Logging needs to capture page state, rank
positions, and interaction events so that drift and diversity can be computed, while login state,
language, and time of day remain fixed. Counterfactual experiments complement this approach: bots
first reproduce real viewing histories, then follow rule-based heuristics (for example, always click the
top sidebar suggestion) to estimate the platform’s causal contribution and time-to-forget on different
surfaces [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Good practice includes account hygiene, preregistered outcome metrics, and replication
across topics.
        </p>
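        <p>The rule-based stage of such a design can be expressed as a simple agent loop. In the sketch
below, fetch_sidebar is a hypothetical placeholder for a real scraping or user-task harness, not a
platform API, and the rule names are our own:</p>
        <preformat>
import random

def fetch_sidebar(video_id):
    """Hypothetical placeholder: return ranked sidebar suggestions.
    A real harness would drive a logged-in browser session instead."""
    return [f"{video_id}-rec{i}" for i in range(10)]

def run_audit(seed_video, rule="top_sidebar", session_length=30):
    """Traverse recommendations under a fixed rule, logging rank
    positions so drift and diversity can be computed afterwards."""
    log, current = [], seed_video
    for step in range(session_length):
        suggestions = fetch_sidebar(current)
        if rule == "top_sidebar":  # always click the top suggestion
            rank = 0
        else:                      # random-walk control condition
            rank = random.randrange(len(suggestions))
        log.append({"step": step, "video": current, "clicked_rank": rank})
        current = suggestions[rank]
    return log

trace = run_audit("moderate-news-seed")
print(len(trace), trace[0])  # 30 logged steps, starting at the seed
        </preformat>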
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Forensic support</title>
        <p>
          Forensics links audit findings to verifiable evidence. Provenance and integrity checks include
metadata consistency, recompression signatures, hash matching, and cross-posting lineage; in
short-video settings, template, sound, and hashtag lineage are also informative, while on YouTube,
channel and playlist lineage are often key [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Evidence handling should preserve timestamps and context
(screenshots, page captures, network logs), document transformations, and maintain a simple chain
of custody so results are reproducible. Where raw data are sensitive or covered by terms of service,
platform-supported auditing can provide vetted access to aggregated relevance signals that enable
verification without exposing personal data or proprietary models [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Method–goal mappings are
reported in Table 2.
        </p>
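        <p>As a toy illustration of hash matching for media reuse, the following sketch computes a
difference hash over a small grayscale pixel grid; production forensic pipelines use far more robust
perceptual hashes, so this is illustrative only:</p>
        <preformat>
def dhash(pixels):
    """Difference hash over a 2D grayscale grid: one bit per horizontal
    neighbor comparison. Survives recompression better than byte hashes."""
    bits = []
    for row in pixels:
        bits.extend(1 if row[i] > row[i + 1] else 0 for i in range(len(row) - 1))
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original  = [[10, 40, 35, 90], [5, 80, 70, 60], [20, 25, 95, 30]]
reposted  = [[12, 41, 33, 88], [6, 79, 72, 58], [19, 27, 93, 31]]  # recompressed copy
unrelated = [[90, 10, 80, 5], [60, 70, 20, 95], [30, 95, 25, 20]]

print(hamming(dhash(original), dhash(reposted)))   # 0: likely the same media
print(hamming(dhash(original), dhash(unrelated)))  # 5 of 9 bits differ
        </preformat>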
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Operational workflow</title>
        <p>The workflow proceeds in four stages:</p>
        <p>Detect: run targeted audits or monitors to surface exposure drift, homogeneity, amplification
spikes, coordinated patterns, and control failures.</p>
        <p>Assess: quantify indicators against baselines and grade severity by topic and surface.</p>
        <p>Mitigate: apply proportionate controls, for example diversity constraints in exposure,
downranking and friction for low-credibility items, boosts for corrective content, and
throttling or takedown for coordinated networks.</p>
        <p>Report: document protocols, data handling, and results, and release reproducible artifacts
where feasible.</p>
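        <p>A skeletal implementation of this workflow might look as follows; the indicator names mirror
Section 3, while the thresholds, the stubbed detector output, and the control mappings are
illustrative placeholders rather than recommended values:</p>
        <preformat>
THRESHOLDS = {"exposure_drift": 0.15, "homogeneity": 0.6,
              "amplification": 1.0, "cib_synchrony": 0.5}

def detect(audit_logs):
    """Stub: compute the Section 3 indicators from audit logs."""
    return {"exposure_drift": 0.18, "homogeneity": 0.7,
            "amplification": 0.4, "cib_synchrony": 0.2}

def assess(signals):
    """Flag every indicator that exceeds its baseline threshold."""
    return {k: v for k, v in signals.items() if v > THRESHOLDS[k]}

def mitigate(findings):
    """Map each finding to a proportionate control."""
    controls = {"exposure_drift": "inject diverse candidates",
                "homogeneity": "apply per-topic exposure cap",
                "amplification": "downrank low-credibility items",
                "cib_synchrony": "throttle coordinated network"}
    return [controls[k] for k in findings]

def report(findings, actions):
    """Bundle findings and actions with reproducible artifacts."""
    return {"findings": findings, "actions": actions,
            "artifacts": ["protocol", "logs"]}

signals = detect(audit_logs=[])
findings = assess(signals)
print(report(findings, mitigate(findings)))
        </preformat>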
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Case studies</title>
      <sec id="sec-6-1">
        <title>6.1. YouTube</title>
        <p>Mechanics. Recommendations appear on the homepage and in the sidebar, with autoplay extending
viewing depth. This layout supports long session trajectories and topical corridors.</p>
        <p>
          Findings. Large user-based and audit studies report mild ideological echo chambers and a small
right-leaning drift during longer recommendation-following sessions; effects vary by topic and
interaction style [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Counterfactual-bot experiments indicate that, on average, recommendations
moderate partisan consumption relative to user-chosen paths. The sidebar “forgets” prior far-right
preferences after about 30 consecutive views of moderate content, while the homepage adapts more
slowly [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Misinformation-focused audits show that filter bubbles can form under specific viewing
patterns but can be disrupted by exposure to credible counter-content and mixed watch behavior [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>Indicators and implications. Exposure drift is small on average yet detectable for political
topics. Homogeneity rises when viewers rely only on recommendations and declines when they
interleave search or subscriptions. Amplification is topic dependent. Feedback suppression efficacy is
measurable via the different adaptation speeds of sidebar and homepage. Risk concentrates in narrow
topical corridors and extended recommender-only sessions; forensics should log which surface
(sidebar or homepage) produced each recommendation and capture provenance of high-velocity
videos. A schematic illustration of the different adaptation speeds for the sidebar and the homepage is
provided in Figure 1.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. TikTok</title>
        <p>
          Mechanics. The For You Page is a single, interest-driven stream with rapid feedback and trend
cycles, which accelerates niche lock-in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>
          Findings. Coordinated inauthentic behavior in video-first settings leverages synchronized
posting in short windows, media reuse, and characteristic hashtag sequences that form dense,
short-lived clusters capable of steering large audiences quickly [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Qualitative studies describe algorithmic
persistence: unwanted content can recur despite “not interested,” dislikes, or blocking, which reduces
perceived control and trust in the feed [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>Indicators and implications. Early-session exposure drift is rapid and sensitive to
micro-engagements. Homogeneity can grow quickly within niches in a single-stream feed. Amplification
often follows template reuse and sound- or hashtag-anchored cascades. CIB signals include posting
synchrony, media reuse, and dense transient clusters. Feedback suppression efficacy is mixed because
of persistence effects. Risk clusters around fast-moving trends and coordinated structures; forensics
should log audio, template, and hashtag lineage alongside account graphs and preserve short time
windows to recover synchronization evidence. A schematic example of a coordinated cluster with
short-window synchrony, media reuse, and shared hashtag sequences is provided in Figure 2.</p>
        <p>Although we do not run new measurements, several recent audits report comparable
exposure-drift magnitudes for scripted viewing tasks. Table 3 summarizes indicative values that explain why
even short recommendation sessions can lead to noticeable ideological or topical narrowing.</p>
        <p>Table 3 shows that even relatively short recommendation sessions can produce measurable
exposure drift. On YouTube, about 30 consecutive recommendations on a politically loaded seed are
enough to increase the share of more homogeneous or extreme items by roughly 12–15%, especially
on the sidebar, which adapts faster than the homepage. On TikTok, longer but still realistic FYP
sessions (40–50 swipes) lead to stronger topical narrowing, with 20–30% more content repeating the
same topic or hashtag. The comparative row highlights that recovery from a biased state is
asymmetric: TikTok can forget a narrow signal faster, while YouTube’s homepage is the slowest
surface to reset. This pattern supports our argument that platform-specific feed dynamics should be
part of cyber risk assessments, because persistent drift increases the chance of users being exposed to
coordinated or harmful narratives again even after they change their behavior.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Discussion</title>
      <p>
        Comparative assessment. Exposure drift is small on average on YouTube but detectable for
political topics; it accelerates when viewers rely only on recommendations and softens when they
interleave search or subscriptions. On TikTok, a single stream and rapid feedback make early-session
drift faster once a niche is established. Homogeneity rises under recommender-only behavior on both
platforms and is especially pronounced in short-video niches. Amplification is topic dependent: on
YouTube it can be disrupted by exposure to credible counter-content and mixed watch behavior [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
while on TikTok reuse of templates, sounds, and hashtags can accelerate scaling within clusters.
Coordinated inauthentic behavior is more visible in video-first settings due to synchronized posting
windows and media reuse that form dense, short-lived clusters [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Feedback suppression efficacy
differs by surface: the YouTube sidebar adapts relatively quickly after sustained switching, whereas
the homepage adjusts more slowly [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]; on TikTok, users report low feedback suppression efficacy for
unwanted content despite negative feedback, consistent with persistence effects [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Governance and implementation trade-offs. Exposure-diversity constraints can be
implemented as soft caps on how many items from the same topic, source, or hashtag a user can see
within a short window. On recommender surfaces this requires access to content-level metadata and
the ability to re-rank items once a cap is reached. The trade-off is that engagement metrics may drop
if highly clickable items are temporarily held back; therefore, platforms need to tune diversity
thresholds per content vertical and A/B test their impact on watch time and user trust.
Privacy-preserving transparency, in turn, calls for researcher access to aggregated relevance scores, audit
logs, and model features without exposing individual user histories. This increases platform cost
(data pipelines, access control) and requires regulatory guidance on what counts as “sufficient
transparency” for cyber-risk assessments. Within the detect, assess, mitigate, report workflow,
platforms instrument logging and re-ranking tools, regulators set minimal disclosure and auditability
requirements, and researchers run measurement protocols. Making these roles explicit allows the
workflow to be applied consistently across YouTube, TikTok, and short-video-style feeds.</p>
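      <p>As an illustration of the soft-cap idea, a greedy re-ranking pass over a candidate window might
look as follows; the cap value and the (score, topic) item format are illustrative assumptions to be
tuned per content vertical and validated in A/B tests:</p>
      <preformat>
def rerank_with_cap(ranked_items, cap=3):
    """Greedy soft cap: once a topic fills its quota in the window,
    its remaining items are deferred behind other topics, not dropped."""
    shown, deferred, counts = [], [], {}
    for score, topic in ranked_items:
        if counts.get(topic, 0) >= cap:
            deferred.append((score, topic))
        else:
            shown.append((score, topic))
            counts[topic] = counts.get(topic, 0) + 1
    return shown + deferred  # held-back items surface later, not never

feed = [(0.9, "diet"), (0.8, "diet"), (0.7, "diet"), (0.6, "diet"),
        (0.5, "news"), (0.4, "music")]
print(rerank_with_cap(feed))
# the fourth "diet" item moves behind "news" and "music"
      </preformat>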
      <p>Scope and limitations. This is a scoping review based on a defined corpus; results reflect
published studies and public audit artifacts. Platform behavior is time sensitive, and operational
definitions of echo chambers, amplification, and control vary across studies, which can affect
comparability [<xref ref-type="bibr" rid="ref10">10</xref>]. We therefore emphasized effects that recur across methods and platforms and
noted assumptions where relevant.</p>
      <p>Future directions. Priorities include standardized measures for feedback suppression efficacy
and time to return, cross-platform audits that cover short-video dynamics, evaluation of exposure
diversity policies with user-centric outcomes, and privacy-preserving access models that enable
routine verification at scale.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>This study examined how algorithmic curation affects information security and public trust on
YouTube and TikTok. We synthesized evidence on exposure drift, homogeneity, amplification,
coordinated inauthentic behavior, and the limits of user control, and we linked these mechanisms to
practical auditing and forensic methods. On YouTube, risks concentrate in narrow topical corridors
and long recommender-only sessions, with measurable adaptation differences between sidebar and
homepage. On TikTok, short-video affordances enable rapid niche lock-in, stronger coordination
signals, and persistence of unwanted content for some users.</p>
      <p>We proposed an operational workflow for oversight: detect relevant signals, assess them against
baselines, apply proportionate mitigation, and report with reproducible artifacts. We also outlined
governance options that combine exposure diversity constraints, external auditability, and
privacy-preserving transparency. Together, these elements provide a pragmatic path to measure, verify, and
reduce recommendation-driven risks while maintaining relevance and user agency.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgements</title>
      <p>The authors received no external funding for this work. This research is conceptually aligned with
UN SDG 16 (Peace, Justice and Strong Institutions) in its focus on public trust in digital ecosystems.
We thank the DTESI-2025 organizers for providing the submission template and guidelines. Any
remaining errors are our own.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used OpenAI ChatGPT for grammar and
spelling checking and for clarity editing. The authors also used ChatGPT-assisted plotting
(matplotlib) to generate the schematic figures (Figure 1 and Figure 2). After using these
tools/services, the authors reviewed and edited the content as needed and take full responsibility for
the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Brown</surname>
          </string-name>
          , J.
          <string-name>
            <surname>Bisbee</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Lai</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Bonneau</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Nagler</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          <string-name>
            <surname>Tucker</surname>
          </string-name>
          , Echo Chambers, Rabbit Holes, and Algorithmic Bias:
          <article-title>How YouTube recommends content to real users</article-title>
          ,
          <source>SSRN Electronic Journal</source>
          (
          <year>2022</year>
          ). https://doi.org/10.2139/ssrn.4114905.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hosseinmardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghasemian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rivera-Lanas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>West</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Watts</surname>
          </string-name>
          ,
          <article-title>Causally estimating the effect of YouTube's recommender system using counterfactual bots</article-title>
          ,
          <source>Proc. Natl. Acad. Sci</source>
          .
          <volume>121</volume>
          (
          <issue>8</issue>
          ) (
          <year>2024</year>
          )
          <article-title>e2313377121</article-title>
          . https://doi.org/10.1073/pnas.2313377121.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Luceri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. V.</given-names>
            <surname>Salkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Balasubramanian</surname>
          </string-name>
          , G. Pinto,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sun</surname>
          </string-name>
          , E. Ferrara,
          <article-title>Coordinated Inauthentic Behavior on TikTok: Challenges and Opportunities for Detection in a Video-First Ecosystem</article-title>
          ,
          <source>arXiv preprint arXiv:2505.10867</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Vera</surname>
          </string-name>
          , S. Ghosh, “
          <article-title>They've Over-Emphasized that one search”: Controlling unwanted content on TikTok's For You page</article-title>
          ,
          <source>in: Proc. CHI '25</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          ,
          <year>2025</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . https://doi.org/10.1145/3706598.3713666
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>J. Bandy,</surname>
          </string-name>
          <article-title>Problematic machine behavior</article-title>
          ,
          <source>Proc. ACM Hum.-Comput. Interact</source>
          .
          <volume>5</volume>
          (
          <issue>CSCW1</issue>
          ) (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>34</lpage>
          . https://doi.org/10.1145/3449148.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Imana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Korolova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidemann</surname>
          </string-name>
          ,
          <article-title>Having your Privacy Cake and Eating it Too: Platformsupported Auditing of Social Media Algorithms for Public Interest</article-title>
          ,
          <source>Proc. ACM Hum.-Comput. Interact</source>
          .
          <volume>7</volume>
          (
          <issue>CSCW1</issue>
          ) (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . https://doi.org/10.1145/3579610.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Pasquini</surname>
          </string-name>
          , I. Amerini, G. Boato,
          <article-title>Media forensics on social media platforms: a survey</article-title>
          ,
          <source>EURASIP J. Inf. Secur</source>
          .
          <year>2021</year>
          (
          <article-title>1) (</article-title>
          <year>2021</year>
          ). https://doi.org/10.1186/s13635-021-00117-2.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <article-title>For me page: User-centric content curation</article-title>
          ,
          <source>Int. J. Comput. Trends Technol</source>
          .
          <volume>72</volume>
          (
          <issue>1</issue>
          ) (
          <year>2024</year>
          )
          <fpage>19</fpage>
          -
          <lpage>26</lpage>
          . https://doi.org/10.14445/22312803/ijctt-v72i1p104.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>I. Srba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Moro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tomlein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pecher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Simko</surname>
          </string-name>
          , E. Stefancova,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kompan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hrckova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Podrouzek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gavornik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bielikova</surname>
          </string-name>
          ,
          <article-title>Auditing YouTube's recommendation algorithm for misinformation filter bubbles</article-title>
          ,
          <source>ACM Trans. Recommender Syst</source>
          .
          <volume>1</volume>
          (
          <issue>1</issue>
          ) (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . https://doi.org/10.1145/3568392.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] D. Hartmann, S. M. Wang, L. Pohlmann, B. Berendt, A systematic review of echo chamber research: comparative analysis of conceptualizations, operationalizations, and varying outcomes, J. Comput. Soc. Sci. 8(2) (2025). https://doi.org/10.1007/s42001-025-00381-z.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C. Borgs, J. Chayes, C. Ikeokwu, E. Vitercik, Bursting the Filter Bubble: Disincentivizing Echo Chambers in Social Networks, in: Proc. EAAMO ’23, ACM, 2023.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] N. D. M. Y. Moroojo, N. D. U. Farooq, N. D. M. A. Madni, N. D. T. Shabbir, N. H. Khalil, Algorithmic amplification and political discourse: The role of AI in shaping public opinion on social media in Pakistan, The Critical Review of Social Sciences Studies 3(2) (2025) 2552–2570. https://doi.org/10.59075/k8ra0b02.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] S. Dawson, You can’t say that on TikTok: cxnsxrshxp, algorithmic (in)visibility, and the threat of representation, Doctoral dissertation, University of British Columbia, 2024.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] O. M. Oluoha, A. Odeshina, O. Reis, F. Okpeke, V. Attipoe, O. H. Orieno, Developing compliance-oriented social media risk management models to combat identity fraud and cyber threats, Int. J. Multidiscip. Res. Growth Eval. 4(1) (2023) 1055–1073. https://doi.org/10.54660/.ijmrge.2023.4.1.1055-1073.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>