                                Navigating the Digital Services Act: Scenarios of
                                Transparency and User Control in VLOPs’
                                Recommender Systems
                                Urbano Reviglio1,∗,† and Matteo Fabbri2,†
                                1
    Centre for Media Pluralism and Media Freedom, European University Institute, Fiesole, Italy
                                2
                                    IMT School for Advanced Studies, Lucca, Italy


                                                   Abstract
                                                   This paper provides the initial groundwork for more comprehensive research on the normative foundations
                                                   and design implications of the Digital Services Act, and other new and forthcoming EU regulations,
                                                   regarding recommender systems operated by Very Large Online Platforms.

                                                   Keywords
                                                   Digital Services Act, recommender systems, platform governance, user control



                                1. Introduction
                                This paper provides the initial groundwork for more comprehensive research on the normative
                                foundations and design implications of the Digital Services Act (DSA) [1], and other new and
                                forthcoming EU regulations, especially regarding recommender systems (RSs) operated by Very
                                Large Online Platforms (VLOPs). Specifically, we examine the development of algorithmic
                                transparency and user autonomy under the broader EU regulatory landscape. This preliminary
                                analysis aims to highlight the critical role of nuanced, user-centric design in fostering a transparent
                                and accountable digital and media ecosystem as well as the potential of a comprehensive EU
                                approach to RSs governance.
                                   In the first part, we provide a critical overview of the interplay among relevant provisions of the
                                DSA—particularly Articles 27 and 38, which pertain to RSs—and how they are expected to be
                                operationalised. This overview outlines how the DSA requirements for transparency and user control
                                might reshape the functioning and accountability of RSs across VLOPs. We then elaborate on how
                                VLOPs might interpret and implement these requirements in both minimal and comprehensive
                                manners. In the second part, we discuss the affordances and design choices that may be required for
                                VLOPs to conform to future guidelines or delegated acts, taking into account the overall EU
                                regulatory framework. This involves a speculative analysis of the design changes and user interface
                                adjustments that may meet the forthcoming transparency and control standards, as well as the new
                                users’ rights and VLOPs duties, enshrined in the DSA and other EU regulations, such as the European
                                Media Freedom Act (EMFA) [2] and the Strengthened Code of Practice on Disinformation (CoP) [3].
                                We thus briefly speculate on how the principles set in the EU law and specific provisions might
                                translate into design affordances for users. This interdisciplinary conceptual analysis is informed by
                                existing EU legal frameworks and user-centred RSs design literature.




                                NORMalize 2024: The Second Workshop on the Normative Design and Evaluation of Recommender Systems, October 18, 2024,
                                co-located with the ACM Conference on Recommender Systems 2024 (RecSys 2024), Bari, Italy.
                                ∗
                                  Corresponding author.
                                †
                                  These authors contributed equally.
                                    urbano.reviglio@eui.eu (U. Reviglio); matteo.fabbri@imtlucca.it (M. Fabbri)
                                    0000-0001-5948-1476 (U. Reviglio); 0000-0001-8994-2464 (M. Fabbri)
                                              © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


2. An Overview of the Implementation of the DSA Provisions on RSs
The DSA is the first supranational regulation addressing the transparency and controllability of RSs
with the aim of empowering users of online platforms [4, 5, 6, 7]. In particular, art. 27(1) requires
platform providers to explain “in their terms and conditions, in plain and intelligible language, the
main parameters used in their recommender systems, as well as any options for the recipients of the
service to modify or influence those main parameters”. The rationale of this provision is to “ensure
that recipients of their service are appropriately informed about how recommender systems impact
the way information is displayed and can influence how information is presented to them” (DSA,
recital 70). Therefore, the parameters considered must include, at least, “the criteria which are most
significant in determining the information suggested to the recipient of the service” (content) and
the reasons for its “relative importance” (ranking) (DSA, art. 27 (2)). Additionally, when options to
modify or influence the main parameters are mentioned in the terms and conditions, platforms
should provide, in correspondence with the list of ranked recommendations, a “directly and easily
accessible” functionality “that allows the recipient of the service to select and to modify at any time
their preferred option” (DSA, art. 27 (3)). According to art. 38, VLOPs that use RSs “shall provide at
least one option for each of their recommender systems which is not based on profiling”. The
requirements of art. 27 and 38 have the potential to reshape the interaction between users and online
platforms by reverting the traditionally passive role of the former, as they would be able to modify
the parameters of the recommendations and therefore contribute to determine their output.
However, since online platforms that are not VLOPs and that use profiling for their recommendations
are required to let users modify or influence the parameters only when more than one recommendation
option figures in their terms and conditions, platforms would arguably not seek out additional
compliance burdens voluntarily [6]. Consequently, users’ right to directly influence RSs
might never come into effect if platforms do not declare that they employ more than one RS model [4].
    Research on how art. 27 and 38 should be implemented by VLOPs to empower users is supposed
to be carried out by the European Centre for Algorithmic Transparency (ECAT) in collaboration with
the DSA enforcement team at DG Connect [8], but no guidelines or delegated acts on the application
of these articles seem to be forthcoming, except for the rules on the performance of audits for VLOPs
issued in 2023 [9]. The application of art. 27 and 38 is an ongoing process whose results differ across
VLOPs, while most online platforms that are not VLOPs have not yet begun to comply with these
provisions. The aim of art. 27 and 38 is, eventually, to foster users’ self-determination through their
direct intervention on the platform’s interface, but their implementation faces two concurrent risks
of ineffectiveness: on the one side, being too technical for the average user; on the other side,
providing mere explanations without a real possibility of user action.
    Moreover, it should be noted that, according to art. 34 DSA, the design of RSs can bear systemic
risks within and beyond the platform environment, impacting, among others, the exercise of
fundamental rights, electoral processes, public security, the protection of minors and people’s
physical and mental wellbeing. Consequently, the fact that RSs are considered a minimal risk AI
application according to the AI Act [10] appears in contradiction with the risk framework of the
DSA: in fact, the penultimate version of the AI Act voted by the EU Parliament included social media
RSs among high-risk AI technologies, before these were removed from Annex III in the final version
[11]. This apparent inconsistency between the two risk-based regulatory frameworks might reduce
the effectiveness of design requirements for RSs that could be advanced through guidelines or
implementing acts, if they were to be put forward under the DSA.
Until now, VLOPs have mainly focused on explaining the way in which recommendations are
generated and delivered to users with varying levels of granularity rather than on implementing
easily accessible functionalities to let users modify the parameters on which recommendations rely.
Two emblematic examples are Instagram and TikTok, which have published explanations about the
parameters that determine the content and the ranking of their recommendations, albeit with
different levels of detail. Instagram has implemented “recommender systems cards” [12] explaining
how the output of a RS depends on the different types of content (e.g., Reels, Stories) and
recommendation policies (e.g., Explore), while TikTok describes the parameters on which its RS
depends in the Help Centre [13]. In both cases, these explanations do not appear in the terms and
conditions, but are linked from there: this may not be compliant with art. 27(1) DSA. Instagram’s RSs
cards provide detailed information on which signals influence the recommendation, allowing users
to understand how their behaviour and interactions with the platform could change the content they
see. However, user control is limited to the possibility of indicating the reasons for disliking a piece
of content and listing keywords corresponding to hashtags that one wants to filter out. Although the
RSs cards describe a variety of recommenders used by the platform, there is no corresponding
functionality allowing users to intervene on the parameters. On the side of TikTok, user control is
mainly empowered by the possibility of filtering out “specific words or hashtags from the content
preferences section in your settings to stop seeing content with those keywords” [13], thereby
mirroring Instagram’s approach. In this case too, there is no possibility for users to modify the
algorithmic parameters. On both Instagram and TikTok, users can opt to see non-personalised content
as per art. 38. The application of art. 27 seems to be limited to its explanation requirements (para 2),
while the user control provisions (para 3) have not been respected, highlighting the risk of transparency
washing [14]. To further advance this analysis, we aim to monitor all VLOPs’ implementation of
these articles in the coming months.

3. Designing a Best-case Compliance Scenario: Suggestions for a
   Substantive User Control
How to improve controllability in RSs has been widely discussed, from algorithmic explainability
and discoverability tools to increasing exposure diversity and bursting filter bubbles [15, 16, 17, 18,
19, 20, 21, 22]. There is an extensive normative debate that underscores the multifaceted need for
transparency and control mechanisms in recommender systems, rooted in ethical and democratic
principles, corporate social responsibility, and legal obligations [4, 5]. Empowering users through
allowing them to control the system, and even nudging them to do so [19, 23], can enhance
autonomy, ensure accountability, and promote a more informed society. While transparency and
control mechanisms advance these normative goals, they must be designed carefully to avoid
negative repercussions on user experience, platform integrity, and digital well-being. In fact,
disclosing elements of the black-box of RSs might enable malicious actors to find new ways to exploit
recommendations for unethical aims; too much control can overwhelm users, leading to decision
fatigue, or be ignored by most users; individual relevance-based control can even lead to filter
bubbles; last but not least, platforms may reasonably fear a decrease in engagement impacting on
their business model [24]. As such, we draw from previous literature to speculate on the eventual
implementation of art. 27 and 38.
    In the best-case scenario, compliance with art. 27 and 38 would require VLOPs to put in place
effective functionalities for users to intervene on the algorithmic parameters to change the output of
the recommendations. These would include implementing various levels of control features and
explanations of different complexity, also to meet the heterogeneity of skills across users [25]. At the
low level, users would be able to give feedback on the recommendations they receive, on their topics
and on the content creators (like/dislike), and could see explanations in discursive or graphic form
(e.g., word clouds). At the higher level, users could be allowed to modify the degree of personalization
of the recommendations, e.g., by choosing the percentage of personalised recommendations they
want to see and, among these, by including or excluding elements (such as categories, tags, etc.) that
constitute the input of the recommendation. They could also choose which data among those
resulting from their interaction with the platform cannot be used for profiling-based
recommendations.
    If the expression “preferred options” (DSA art. 27(3)) were to be interpreted more broadly, so as to
allow for “algorithmic contestability”, namely “the mechanisms for users to understand, construct,
shape, and challenge model predictions” [17], which a recent cross-national study shows are desired
by most users [5], a wider set of design affordances could be envisioned. The primary
affordances we deem most practical and impactful to implement include:

1. Tags, which are a common and easy-to-use tool to support users in determining their preferences.
Tags are descriptive keywords or labels that provide additional information about a user’s inferred
preferences. These can partly represent the recommendations criteria that the DSA requires VLOPs
to disclose. Tags have already been legally mandated in China through the Internet Information
Services Algorithm Recommendation Management Provisions [7]. Personalization tools in China are
indeed based on the notion of tags: platforms provide users with functionalities to select or deselect
tags that identify their inferred personal interests. In Douyin, for instance, personal tags are divided
into macro-categories, such as “food delicacies”, “humanistic sciences” and “travel”, which are in turn
divided into subcategories. For example, the category “food delicacies” is divided into: “scouting
restaurants”, “enjoying delicacies”, “traditional snacks”, and “purchasing ingredients”. Once users
choose a category or subcategory of content, they are also given the option to select how interested
they are in it and, consequently, its “weight” in recommendations.
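The tag-and-weight mechanism described above can be sketched in a few lines of code. This is a hypothetical simplification: the category names are taken from the Douyin example in the text, but the data structure and scoring formula are our own assumptions, not any platform’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TagPreferences:
    """User-editable tags with weights, grouped under macro-categories."""
    # {macro-category: {subcategory: weight in [0, 1]; 0 means deselected}}
    weights: dict = field(default_factory=dict)

    def set_weight(self, category, subcategory, weight):
        """Select a tag and set how strongly it should count (0 deselects it)."""
        self.weights.setdefault(category, {})[subcategory] = max(0.0, min(1.0, weight))

    def score(self, item_tags):
        """Boost an item's relevance by the summed weights of its matching tags."""
        return sum(
            subcategories.get(tag, 0.0)
            for subcategories in self.weights.values()
            for tag in item_tags
        )

prefs = TagPreferences()
prefs.set_weight("food delicacies", "traditional snacks", 0.8)
prefs.set_weight("food delicacies", "scouting restaurants", 0.0)  # deselected
boost = prefs.score(["traditional snacks"])  # 0.8: only the selected tag counts
```

A deselected or unknown tag contributes nothing to the score, which makes the effect of each user choice directly traceable in the output, in line with the transparency rationale of art. 27.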

2. User feedback, which, despite being neglected in the DSA, has the potential to better align
recommendations with users’ explicitly expressed preferences. Retrospective, deliberative judgement
on previous recommendations could indeed align short-term with long-term incentives. This is an
emerging method to control the output of AI systems in general, and RSs in particular [22]. Of course,
VLOPs already provide different forms of feedback-giving tools. However, in most cases, these are
neither easy to find nor particularly granular; they may be available in the app but not on the browser
version of the website; and it is unclear whether and how specific feedback leads to specific outcomes.
Similarly, conversational RSs could be used by platforms to allow users to provide not only explicit
feedback to the recommended content they see, but also to directly and actively influence
recommendations [26], as envisioned by recital 70 DSA.
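The kind of granular, outcome-linked feedback discussed here could be sketched as follows. This is an illustrative model only: the class and signal names are hypothetical, and the ±1.0 score adjustments are arbitrary placeholders for how a real system might weight explicit feedback.

```python
from collections import defaultdict

class FeedbackStore:
    """Collects explicit user feedback and applies it to future rankings."""

    def __init__(self):
        # Cumulative score adjustments per topic and per content creator.
        self.topic_adjust = defaultdict(float)
        self.creator_adjust = defaultdict(float)

    def record(self, *, topic=None, creator=None, liked=True):
        """Register a like/dislike on a topic and/or a creator."""
        delta = 1.0 if liked else -1.0  # placeholder feedback weights
        if topic is not None:
            self.topic_adjust[topic] += delta
        if creator is not None:
            self.creator_adjust[creator] += delta

    def rerank(self, items):
        """Re-rank (item_id, base_score, topic, creator) tuples so that
        explicit feedback visibly changes the output."""
        def adjusted(item):
            _, base, topic, creator = item
            return base + self.topic_adjust[topic] + self.creator_adjust[creator]
        return sorted(items, key=adjusted, reverse=True)

feedback = FeedbackStore()
feedback.record(creator="creator_a", liked=False)  # explicit dislike
items = [("v1", 0.9, "cooking", "creator_a"), ("v2", 0.5, "travel", "creator_b")]
# After the dislike, v1 (0.9 - 1.0 = -0.1) drops below v2 (0.5).
ranked = feedback.rerank(items)
```

The point of the sketch is that the feedback signal has a deterministic, inspectable effect on the ranking, which is precisely the link between feedback and outcomes that current implementations leave unclear.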

3. Proportional opt-out, which refers to a granular approach to personalization that consists in
allowing users to decide the ratio of personalised and non-personalised recommendations they
receive. This can represent an easy-to-implement solution from a technical perspective and foster a
more conscious approach to “algorithmic choice” [21], aimed at stressing the risks and opportunities
brought by personalised and non-personalised experiences in social media. Indeed, while
personalisation may lead to the much discussed filter bubbles and echo chambers [15], a non-
personalised experience may also easily lead to inaccurate, and thus irrelevant, content
recommendations. Providing a “proportional opt-out” and allowing users to set a personal balance is
not only desirable but also technically viable. Ideally, the user could choose the percentage of items
following the personalised RS, and the remaining percentage of items would be non-personalised.
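A minimal sketch of the proportional opt-out, assuming both recommenders expose a pre-ranked list (the function name and parameters are our own):

```python
def blend_feed(personalised, non_personalised, personalised_ratio, feed_length):
    """Compose a feed where roughly `personalised_ratio` of the items come
    from the personalised recommender and the rest from the non-personalised
    one. Both input lists are assumed to be pre-ranked.
    """
    if not 0.0 <= personalised_ratio <= 1.0:
        raise ValueError("personalised_ratio must be between 0 and 1")
    n_personalised = round(feed_length * personalised_ratio)
    return (personalised[:n_personalised]
            + non_personalised[:feed_length - n_personalised])

# A user who sets the slider to 60% personalised content:
feed = blend_feed(["p1", "p2", "p3", "p4"], ["n1", "n2", "n3", "n4"], 0.6, 5)
```

A real implementation would likely interleave the two sources rather than concatenate them, but the ratio itself is a single user-chosen setting, which is what makes this affordance technically easy to offer.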

4. Multiple profiling, which we define as the ability for users to create more than one personalised
feed per profile, so that they can choose among different personalised outcomes. This could help
users to diversify their informational experience [27] and, indirectly, strengthen media pluralism.
While this design affordance has been explored by considering pre-determined criteria to filter
information - such as the algorithmic recommender personae [20] - to our knowledge this simpler
solution has not been tested to assess diversity exposure, and yet it seems that users - especially
younger ones - naturally create more accounts to satisfy their need for multiple identities [28].
Multiple profiling would eventually allow the possibility to create different personalised experiences
based on different interests. There is a risk, however, that, if this becomes a common standard
practice for users, it may legitimise VLOPs to promote filter bubbles by design.
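The multiple-profiling affordance could be modelled as several named feed configurations within one account. This is a speculative sketch: all names are our own, and the non-profiled feed reflects the art. 38 requirement that at least one option not be based on profiling.

```python
from dataclasses import dataclass, field

@dataclass
class FeedProfile:
    """One named feed configuration within a single user account."""
    name: str
    interests: set = field(default_factory=set)
    profiling_enabled: bool = True  # art. 38: at least one non-profiled option

@dataclass
class UserAccount:
    profiles: dict = field(default_factory=dict)
    active: str = ""

    def add_profile(self, name, interests=(), profiling=True):
        self.profiles[name] = FeedProfile(name, set(interests), profiling)
        if not self.active:
            self.active = name  # the first profile becomes the default

    def switch(self, name):
        """Let the user hop between personalised experiences at will."""
        if name not in self.profiles:
            raise KeyError(f"no such feed profile: {name}")
        self.active = name

account = UserAccount()
account.add_profile("news", ["politics", "economy"])
account.add_profile("cooking", ["traditional snacks"])
account.add_profile("chronological", profiling=False)  # non-profiled feed
account.switch("cooking")
```

Each profile would feed its own interest set to the recommender, so switching profiles produces a different personalised outcome without creating a separate account.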
To effectively implement a set of design affordances that is aligned with the intent of art. 27, there
are some important considerations to be made. First of all, the complexities of ‘preferences’ should
be fully acknowledged. Contrary to common assumptions, preferences are rather undetermined,
ephemeral, and often ambivalent. Consider how individuals have different “orders” of preferences:
“first-order preferences” are expressed in the moment a stimulus or temptation affects our
consciousness, whereas “second-order preferences” are the choices we make for ourselves upon
further reflection [24]. The satisfaction of the latter has been somewhat underestimated by VLOPs.
by optimising for users’ engagement, VLOPs have thrived and mostly stimulated first-order
preferences. One of the main normative problems at stake is the apparent trade-off between
engagement optimization and users’ preferences alignment. By assuming that users always choose
what they want (so-called ‘revealed preferences’), VLOPs justify the engagement optimization
model. Only by providing multi-layered control that accommodates both lay and expert users, thus
allowing for various levels of customization and understanding [25], can users be empowered to
effectively meet their preferences and, conversely, can RSs support them in developing, exploring,
and understanding their own unique preferences [29]. It can be questioned, however, whether
engagement optimization and users’ preferences alignment are naturally in contrast, or whether they
can be complemented by design to offer an economically, individually, and democratically
sustainable balance.
    In this brief analysis, we have introduced four promising design affordances: the first two are
already widely tested features that have proved their effectiveness, while the latter two are original solutions
that could be easily implemented. Although there is room for incentivising design affordance
standardisation, the DSA does not create incentives for VLOPs to invest in developing parameters
that optimise for preferences alignment, medium-term goals, or even the realisation of public values,
as [5] already noted.

4. Integrating European Principles into VLOPs’ Recommender
   Systems: Looking forward
It should be questioned whether the EU regulatory framework allows for “algorithmic contestability”
and the implementation of the design affordances we have previously outlined. In this chapter, we
argue that the emerging European regulatory framework can fruitfully complement the DSA’s
endeavour, influencing possible delegated acts and guidelines within the framework of the DSA. The
principle of regulatory consistency within the EU, in fact, mandates that overlapping themes, like
media personalization under the EMFA, align with similar provisions in the DSA. Explicit cross-
references between these regulations establish direct legal links that guide their interpretation and
implementation, ensuring that developments in media governance are congruent with overarching
EU principles and policy objectives.
    The EMFA introduces a right for users to customise the media offer, to opt out from the default
settings of any device or user interface, and to autonomously tailor the media offers they receive
according to their preferences (art. 20). Its focus is not on VLOPs, however: art. 20 applies only to
audiovisual media, and therefore to hardware (such as remote controls, which often have dedicated
buttons for apps like Netflix and YouTube) and to software menus and shortcuts, smart TV interfaces,
applications and search areas. It could be argued that lawmakers were regulating a different sector,
avoiding the risk of conflict or inconsistency with art. 27 DSA. The right to customise the media offer, however, can be intended as
a general right, also enshrined in art. 10 of the European Convention on Human Rights. As a matter
of fact, the right to receive and impart information has been recognised as a fundamental point of
departure to realise democratic values in the personalised media landscape [30, 19]. It legitimises
positive legal obligations and its violation would represent a systemic risk under art. 34 DSA. Even
the Strengthened CoP includes other significant provisions that may further strengthen this right. It
reiterates that “Signatories will provide options for the recipients of the service to select and to
modify at any time their preferred options for relevant recommender systems, including giving users
transparency about those options” (Measure 19.2).
    EMFA provides other provisions to strengthen this overarching human right. Article 3 asserts the
right of recipients (i.e., users) to “receive a plurality of news and current affairs content, produced
with respect for editorial freedom of media service providers, to the benefit of the public discourse”.
Moreover, the newly-established European Board for Media Services (EBMS) (which replaces the
previous European Regulators Group for Audiovisual Media Services (ERGA)) is expected to
regularly organise a “structured dialogue” between providers of VLOPs, representatives of media
service providers and representatives of civil society in order to “foster access to diverse offerings of
independent media on very large online platforms” (art. 19). These provisions seem to lay the
normative foundations for the implementation of a right to be exposed and customise diverse news
and media. How the right to customise the media offer and the provision to receive a plurality of
news and current affairs content would interact and unfold, however, remains questionable. The
establishment of media service providers which, according to art. 18 EMFA, will self-declare to
VLOPs that they follow editorial standards and criteria of editorial independence, could offer an
additional option for users to customise their experience by receiving only, or prioritising,
content from such media. While the effectiveness of self-declarations and thus the quality of these
media can be questioned [31], they can still set higher standards of media quality and provide a list
of (more) reliable news media that users can decide to be exposed to in a way that supports the access
and exposure to diverse news and media.
    While EMFA stresses the right of recipients to receive editorially independent content and diverse
news and media, the strengthened CoP also aims to “empower users with tools to assess the
provenance and edit history or authenticity or accuracy of digital content” (Commitment 20). It also
mandates that Signatories “will design and apply products and features (e.g., information panels,
banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative
sources on topics of particular public and societal interest or in crisis situations” (Measure 22.7). As
the CoP is expected to become a code of conduct under art. 34 and 35 DSA, it will create de facto
legal obligations for VLOPs [32]. This is a promising legal development. For example, the CoP also
mandates providing “aggregated information on effective user settings, such as the number
of times users have actively engaged with these settings within the reporting period or over a sample
representative timeframe, and clearly denote shifts in configuration patterns.” This is another
meaningful provision, as it can inform policymakers on the effectiveness of any design choice
implemented, and it could support the application of art. 27 and 38.
    Finally, the AI Act may provide room for mandating VLOPs to refrain from exploiting ‘first-order
preferences’ by design via engagement optimization. According to art. 5, AI models using “subliminal
techniques” beyond a person’s consciousness, or that are intentionally manipulative or designed to
exploit a person’s vulnerability in a manner that causes or is likely to cause physical or psychological
harm, are to be banned. In parallel, the DSA addresses the issue of “dark patterns” in art. 25, which
states that platforms’ interfaces should not be designed in a way that hinders users’ ability to make
informed decisions. The Digital Markets Act (DMA) [33] also aligns with this perspective: according
to art. 6(3), users shall be allowed to disable the default settings of a gatekeeper’s platform that steer
them to use further services of the same gatekeeper. This provision empowers users to avoid the
default nudging that the gatekeeper adopts to keep users entrenched in the services provided by
itself. Relatedly, following art. 6(5) and 6(6), the gatekeeper cannot try to enhance the adoption of its
services by up-ranking them unfairly in search results or preventing end-users from switching
between different providers. As can be seen, users’ autonomy should be promoted by aligning the
implementation of the DSA with that of the AI Act and the DMA.
    To sum up, complementing the DSA with the emerging EU regulatory framework may provide
the normative foundations for (i) integrating by design control criteria such as authoritativeness (e.g.,
by allowing to filter media service providers content) and news and media diversity (e.g., by allowing
media service providers diversity exposure by design or even “multiple profiling”); (ii) nuancing the
balance between personalised and non-personalised content (i.e., “proportional opt-out”); (iii)
providing additional information on how users interact with the newly available functionalities to
prove, and eventually improve, the effectiveness of these design choices; (iv) preventing
manipulative forms of engagement by design, thereby aligning the risk-based provisions of the DSA
with those of the AI Act. How such affordances would be implemented is difficult to tell at present.
The opportunity to integrate two fundamental normative principles for media, namely
authoritativeness and “media diversity”, is particularly relevant. These, however, are under-specified,
nuanced principles that are difficult to translate into practical procedures and code [34]. While
authoritativeness may map onto media service providers that supply reliable professional content,
news and, more broadly, media diversity may be more challenging to translate into code, as it is
interpreted and can be achieved in many different ways [35, 36]. We argue, however, that despite
the technical and political challenges ahead, these perspectives are laid out within the EU regulations
and have the legal and normative foundations to be implemented.

5. Conclusion
Our contribution lays the foundation for further research on the implementation of art. 27 and 38 of
DSA within the wider EU regulatory context impacting the interaction between users and VLOPs’
RSs. After outlining the DSA provisions for RSs, we touched upon the choices made by VLOPs for
the explanation and user control of recommendations, mentioning the risk of transparency washing.
Subsequently, we proposed feasible user control features that VLOPs could implement to reach a
best-case scenario of substantial application of art. 27(3) DSA. Finally, to inform our design
considerations from a broader regulatory perspective, we explored the connection between the
principles in art. 27 and 38 DSA and, in particular, those in the EMFA and the CoP, arguing that
there are two main criteria that social media users could integrate into RSs: diversity and
authoritativeness. How these principles would translate into specific design choices and transparency
disclosures remains open for discussion.

References
[1] European Parliament and Council, Regulation (EU) 2022/2065 of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act).
[2] European Parliament and Council, Regulation establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act), 20 March 2024.
[3] European Commission, Strengthened code of practice on disinformation. URL: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.
[4] N. Helberger, M. van Drunen, S. Vrijenhoek, J. Möller, Regulation of news recommenders in the Digital Services Act: empowering David against the very large online Goliath, Internet Policy Review 26 (2021).
[5] C. Starke, L. Metikoš, N. Helberger, C. de Vreese, Contesting personalized recommender systems: a cross-country analysis of user preferences, Information, Communication & Society (2024) 1–20.
[6] M. Fabbri, Self-determination through explanation: an ethical perspective on the implementation of the transparency requirements for recommender systems set by the Digital Services Act of the European Union, Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (2023) 653–661.
[7] U. Reviglio, G. Santoni, Governing platform recommender systems in Europe: insights from China, Global Jurist 23(2) (2023) 151–181.
[8] European Centre for Algorithmic Transparency, About ECAT. URL: https://algorithmic-transparency.ec.europa.eu/about_en.
[9] European Commission, Commission Delegated Regulation (EU) 2024/436 of 20 October 2023 supplementing Regulation (EU) 2022/2065 by laying down rules on the performance of audits for very large online platforms and search engines.
[10] European Parliament and Council, Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[11] K. Söderlund, E. Engström, K. Haresamudram, S. Larsson, P. Strimling, Regulating high-reach AI: on transparency directions in the Digital Services Act, Internet Policy Review 13(1) (2024) 1–31.
[12] Meta Transparency Centre, Our approach to explaining ranking. URL: https://transparency.meta.com/features/explaining-ranking/.
[13] TikTok Support, How TikTok recommends content. URL: https://support.tiktok.com/en/using-tiktok/exploring-videos/how-tiktok-recommends-content.
[14] M. Zalnieriute, Transparency washing in the digital age: a corporate agenda of procedural fetishism, Critical Analysis of Law 8 (2021) 139.
[15] E. Bozdag, J. van den Hoven, Breaking the filter bubble: democracy and design, Ethics and Information Technology 17 (2015) 249–265.
[16] J. Harambam, N. Helberger, J. van Hoboken, Democratizing algorithmic news recommenders: how to materialize voice in a technologically saturated media ecosystem, Philosophical Transactions of the Royal Society A 376(2133) (2018) 20180088.
[17] D. N. Kluttz, D. K. Mulligan, Automated decision support technologies and the legal profession, Berkeley Technology Law Journal 34(3) (2019) 853–890.
[18] U. Reviglio, C. Agosti, Thinking outside the black-box: the case for algorithmic sovereignty in social media, Social Media + Society 6(2) (2020).
[19] J. Vermeulen, To nudge or not to nudge: news recommendation as a tool to achieve online media pluralism, Digital Journalism 10(10) (2022) 1671–1690.
[20] L. van den Bogaert, D. Geerts, J. Harambam, Putting a human face on the algorithm: co-designing recommender personae to democratize news recommender systems, Digital Journalism (2022) 1–21.
[21] C. Busch, From algorithmic transparency to algorithmic choice: European perspectives on recommender systems and platform regulation, in: Recommender Systems: Legal and Ethical Issues, Springer International Publishing, Cham (2023) 31–54.
[22] J. Stray, Editorial values for news recommenders: translating principles to engineering, in: News Quality in the Digital Age, Routledge (2023) 151–165.
[23] M. Jesse, D. Jannach, Digital nudging with recommender systems: survey and future directions, Computers in Human Behavior Reports 3 (2021) 100052.
[24] L. Thorburn, P. Bengani, J. Stray, What does it mean to give someone what they want? The nature of preferences in recommender systems. URL: https://medium.com/understanding-recommenders/what-does-it-mean-to-give-someone-what-they-want-the-nature-of-preferences-in-recommender-systems-82b5a1559157.
[25] Y. Jin, B. D. L. R. P. Cardoso, K. Verbert, How do different levels of user control affect cognitive load and acceptance of recommendations?, in: Proceedings of the IntRS@RecSys Workshop (2017) 35–42.
[26] M. Fabbri, Social influence for societal interest: a pro-ethical framework for improving human decision making through multi-stakeholder recommender systems, AI & Society 38(2) (2023) 995–1002.
[27] A. H. Afridi, T. Olsson, Review of user interface-facilitated serendipity in recommender systems, International Journal of Interactive Communication Systems and Technologies 12(1) (2023) 1–19.
[28] E. van der Nagel, Alts and automediality: compartmentalising the self through multiple social media profiles, M/C Journal 21(2) (2018).
[29] B. P. Knijnenburg, N. J. Reijmer, M. C. Willemsen, Each to his own: how different users call for different interaction methods in recommender systems, in: Proceedings of the Fifth ACM Conference on Recommender Systems (2011) 141–148.
[30] S. Eskens, N. Helberger, J. Moeller, Challenged by news personalisation: five perspectives on the right to receive information, Journal of Media Law 9(2) (2017) 259–284.
[31] E. Brogi, D. Borges, R. Carlini, I. Nenadic, K. Bleyer-Simon, J. E. Kermer, et al., The European Media Freedom Act: media freedom, freedom of expression and pluralism, Policy Department for Citizens’ Rights and Constitutional Affairs (2023).
[32] R. Griffin, Codes of conduct in the Digital Services Act: functions, benefits & concerns, Technology and Regulation (2024) 167–187.
[33] European Parliament and Council, Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act).
[34] J. Morley, L. Floridi, L. Kinsey, A. Elhalal, From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices, Science and Engineering Ethics 26(4) (2020) 2141–2168.
[35] N. Helberger, Diversity by design, Journal of Information Policy 1 (2011) 441–469.
[36] F. Loecherbach, J. Moeller, D. Trilling, W. van Atteveldt, The unified framework of media diversity: a systematic literature review, Digital Journalism 8(5) (2020) 605–642.