<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Navigating the Digital Services Act: Scenarios of Transparency and User Control in VLOPs&apos; Recommender Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Urbano</forename><surname>Reviglio</surname></persName>
							<email>urbano.reviglio@eui.eu</email>
							<affiliation key="aff0">
								<orgName type="department">Centre for Media Pluralism and Media Freedom</orgName>
								<orgName type="institution">European University Institute</orgName>
								<address>
									<settlement>Fiesole</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Matteo</forename><surname>Fabbri</surname></persName>
							<email>matteo.fabbri@imtlucca.it</email>
							<affiliation key="aff1">
								<orgName type="department">IMT School for Advanced Studies</orgName>
								<address>
									<settlement>Lucca</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Navigating the Digital Services Act: Scenarios of Transparency and User Control in VLOPs&apos; Recommender Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">021C78505F370DDF520C01664EC4A064</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:50+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Digital Services Act</term>
					<term>recommender systems</term>
					<term>platform governance</term>
					<term>user control</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper provides the initial groundwork for more comprehensive research on the normative foundations and design implications of the Digital Services Act, and other new and forthcoming EU regulations, regarding recommender systems operated by Very Large Online Platforms.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>This paper provides the initial groundwork for more comprehensive research on the normative foundations and design implications of the Digital Services Act (DSA) <ref type="bibr" target="#b0">[1]</ref>, and other new and forthcoming EU regulations, especially regarding recommender systems (RSs) operated by Very Large Online Platforms (VLOPs). Specifically, we examine the development of algorithmic transparency and user autonomy under the broader EU regulatory landscape. This preliminary analysis aims to highlight the critical role of nuanced, user-centric design in fostering a transparent and accountable digital and media ecosystem, as well as the potential of a comprehensive EU approach to RSs governance.</p><p>In the first part, we provide a critical overview of the interplay among relevant provisions of the DSA (particularly Articles 27 and 38, which pertain to RSs) and how they are expected to be operationalised. This overview outlines how the DSA requirements for transparency and user control might reshape the functioning and accountability of RSs across VLOPs. We then elaborate on how VLOPs might interpret and implement these requirements in both minimal and comprehensive ways. In the second part, we discuss the affordances and design choices that may be required for VLOPs to conform to future guidelines or delegated acts, taking into account the overall EU regulatory framework. This involves a speculative analysis of the design changes and user interface adjustments that may meet the forthcoming transparency and control standards, as well as the new users' rights and VLOPs' duties enshrined in the DSA and other EU regulations, such as the European Media Freedom Act (EMFA) <ref type="bibr" target="#b1">[2]</ref> and the Strengthened Code of Practice on Disinformation (CoP) <ref type="bibr" target="#b2">[3]</ref>. 
We thus briefly speculate on how the principles set out in EU law and its specific provisions might translate into design affordances for users. This interdisciplinary conceptual analysis is informed by existing EU legal frameworks and user-centred RSs design literature.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">An Overview of the Implementation of the DSA Provisions on RSs</head><p>The DSA is the first supranational regulation addressing the transparency and controllability of RSs with the aim of empowering users of online platforms <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. In particular, art. 27(1) requires platform providers to explain "in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters". The rationale of this provision is to "ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed and can influence how information is presented to them" (DSA, recital 70). Therefore, the parameters considered must include, at least, "the criteria which are most significant in determining the information suggested to the recipient of the service" (content) and the reasons for its "relative importance" (ranking) (DSA, art. 27 (2)). Additionally, when options to modify or influence the main parameters are mentioned in the terms and conditions, platforms should provide, in correspondence with the list of ranked recommendations, a "directly and easily accessible" functionality "that allows the recipient of the service to select and to modify at any time their preferred option" (DSA, art. 27 (3)). According to art. 38, VLOPs that use RSs "shall provide at least one option for each of their recommender systems which is not based on profiling". The requirements of art. 
27 and 38 have the potential to reshape the interaction between users and online platforms by reversing the traditionally passive role of the former, as they would be able to modify the parameters of the recommendations and therefore contribute to determining their output. However, given that online platforms that are not VLOPs using profiling for their recommendations are not required to let users modify or influence the parameters unless more than one option for recommendation figures in their terms and conditions, platforms would arguably not take on additional compliance burdens voluntarily <ref type="bibr" target="#b5">[6]</ref>. Consequently, users' right to directly influence RSs might not actually come into effect if platforms do not declare that they employ more than one RS model <ref type="bibr" target="#b3">[4]</ref>.</p><p>Research on how art. 27 and 38 should be implemented by VLOPs to empower users is supposed to be carried out by the European Centre for Algorithmic Transparency (ECAT) in collaboration with the DSA enforcement team at DG Connect <ref type="bibr" target="#b7">[8]</ref>, but no guidelines or delegated acts on the application of these articles seem to be forthcoming, except for the rules on the performance of audits for VLOPs issued in 2023 <ref type="bibr" target="#b8">[9]</ref>. The application of art. 27 and 38 is an ongoing process whose results differ across VLOPs, while, for most online platforms that are not VLOPs, compliance with these provisions has not yet begun. The aim of art. 27 and 38 is, eventually, to foster users' self-determination through their direct intervention on the platform's interface, but their implementation runs two concurrent risks of ineffectiveness: on the one side, being too technical to be used by the average user; on the other side, providing mere explanations without a real possibility of user action.</p><p>Moreover, it should be noted that, according to art. 
34 DSA, the design of RSs can bear systemic risks within and beyond the platform environment, impacting, among others, the exercise of fundamental rights, electoral processes, public security, the protection of minors and people's physical and mental wellbeing. Consequently, the fact that RSs are considered a minimal risk AI application according to the AI Act <ref type="bibr" target="#b9">[10]</ref> appears in contradiction with the risk framework of the DSA: in fact, the penultimate version of the AI Act voted by the EU Parliament included social media RSs among high-risk AI technologies, before these were removed from Annex III in the final version <ref type="bibr" target="#b10">[11]</ref>. This apparent inconsistency between the two risk-based regulatory frameworks might reduce the effectiveness of design requirements for RSs that could be advanced through guidelines or implementing acts, if they were to be put forward under the DSA.</p><p>Until now VLOPs have mainly focused on explaining the way in which recommendations are generated and delivered to users with varying levels of granularity rather than on implementing easily accessible functionalities to let users modify the parameters on which recommendations rely.</p><p>Two emblematic examples are Instagram and TikTok, which have published explanations about the parameters that determine the content and the ranking of their recommendations, albeit with different levels of detail. Instagram has implemented "recommender systems cards" <ref type="bibr" target="#b11">[12]</ref> explaining how the output of a RS depends on the different types of content (e.g., Reels, Stories) and recommendation policies (e.g., Explore), while TikTok describes the parameters on which its RS depends in the Help Centre <ref type="bibr" target="#b12">[13]</ref>. In both cases, these explanations do not appear in the terms and conditions, but are linked from there: this may not be compliant with art. 27(1) DSA. 
Instagram's RSs cards provide detailed information on which signals influence the recommendation, allowing users to understand how their behaviour and interactions with the platform could change the content they see. However, user control is limited to the possibility of mentioning the reasons for disliking a piece of content and listing keywords corresponding to hashtags that one wants to filter out. Although the RSs cards describe a variety of recommenders used by the platform, there is no corresponding functionality allowing users to intervene on the parameters. On the side of TikTok, user control mainly consists of the possibility of filtering out "specific words or hashtags from the content preferences section in your settings to stop seeing content with those keywords" <ref type="bibr" target="#b12">[13]</ref>, thereby mirroring Instagram's approach. In this case too, there is no possibility for users to modify the algorithmic parameters. On both Instagram and TikTok, users can opt to see non-personalised content as per art. 38. The application of art. 27 seems to be limited to its explanation requirements (para 2), while the user control provisions (para 3) have not been respected, outlining the risk of transparency washing <ref type="bibr" target="#b13">[14]</ref>. To further advance this analysis, we aim to monitor all VLOPs' implementation of these articles in the coming months.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Designing a Best-case Compliance Scenario: Suggestions for a Substantive User Control</head><p>How to improve controllability in RSs has been widely discussed, from algorithmic explainability and discoverability tools to increasing exposure diversity and bursting filter bubbles <ref type="bibr">[15, 16, 17, 18, 19, 20, 21, 22]</ref>. There is an extensive normative debate that underscores the multifaceted need for transparency and control mechanisms in recommender systems, rooted in ethical and democratic principles, corporate social responsibility, and legal obligations <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. Empowering users by allowing them to control the system, and even nudging them to do so <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b22">23]</ref>, can enhance autonomy, ensure accountability, and promote a more informed society. While transparency and control mechanisms advance these normative goals, they must be designed carefully to avoid negative repercussions on user experience, platform integrity, and digital well-being. In fact, disclosing elements of the black-box of RSs might enable malicious actors to find new ways to exploit recommendations for unethical aims; too much control can overwhelm users, leading to decision fatigue, or be ignored by most users; individual relevance-based control can even lead to filter bubbles; last but not least, platforms may reasonably fear a decrease in engagement impacting their business model <ref type="bibr" target="#b23">[24]</ref>. As such, we draw on previous literature to speculate on the eventual implementation of art. 27 and 38.</p><p>In the best-case scenario, compliance with art. 27 and 38 would require VLOPs to put in place effective functionalities for users to intervene on the algorithmic parameters to change the output of the recommendations. 
These would include implementing various levels of control features and explanations of different complexity, also to meet the heterogeneity of skills across users <ref type="bibr" target="#b24">[25]</ref>. At the lower level, users would be able to give feedback on the recommendations they receive, on their topic and on the content creator (like/dislike), and could see explanations in discursive or graphic form (e.g., word clouds). At the higher level, users could be allowed to modify the degree of personalization of the recommendations, e.g., by choosing the percentage of personalised recommendations they want to see and, among these, by including or excluding elements (such as categories, tags, etc.) that constitute the input of the recommendation. They could also choose which data among those resulting from their interaction with the platform cannot be used for profiling-based recommendations.</p><p>If the expression "preferred options" (DSA art.27(3)) were to be interpreted more broadly, so as to allow for "algorithmic contestability", namely "the mechanisms for users to understand, construct, shape, and challenge model predictions" <ref type="bibr" target="#b16">[17]</ref> (which are shown to be desired by most users in a recent cross-national study <ref type="bibr" target="#b4">[5]</ref>), a wider set of design affordances could be envisioned. The primary affordances we deem most practical and impactful to implement include:</p><p>1. Tags, which are a common and easy-to-use tool to support users in determining their preferences. Tags are descriptive keywords or labels that provide additional information about a user's inferred preferences. These can partly represent the recommendation criteria that the DSA requires VLOPs to disclose. Tags have already been legally implemented in China through the Internet Information Services Algorithm Recommendation Management <ref type="bibr" target="#b6">[7]</ref>. 
Personalization tools in China are indeed based on the notion of tags: platforms provide users with functionalities to select or deselect tags that identify their inferred personal interests. In Douyin, for instance, personal tags are divided into macro-categories, such as "food delicacies", "humanistic sciences" and "travel", which are in turn divided into subcategories. For example, the category "food delicacies" is divided into: "scouting restaurants", "enjoying delicacies", "traditional snacks", and "purchasing ingredients". Once a user chooses a category or subcategory of content, they are also given the option to select how interested they are in it and the consequent "weight" in recommendations.</p><p>2. User feedback, which, despite being neglected in the DSA, has the potential to better align recommendations with users' explicitly expressed preferences. Retrospective, deliberative judgement on previous recommendations could indeed align short-term with long-term incentives. This is an emerging method to control the output of AI systems in general, and RSs in particular <ref type="bibr" target="#b21">[22]</ref>. Of course, VLOPs already provide different forms of feedback-giving tools. However, in most cases, these are neither easy to find nor particularly granular, they may be available in the app but not on the website, and it is unclear whether and how specific feedback would lead to specific outcomes. Similarly, conversational RSs could be used by platforms to allow users to provide not only explicit feedback on the recommended content they see, but also to directly and actively influence recommendations <ref type="bibr" target="#b25">[26]</ref>, as envisioned by recital 70 DSA.</p><p>3. Proportional opt-out, which refers to a granular approach to personalization that consists of allowing users to decide the ratio of personalised and non-personalised recommendations they receive. 
This can represent an easy-to-implement solution from a technical perspective and foster a more conscious approach to "algorithmic choice" <ref type="bibr" target="#b20">[21]</ref>, aimed at stressing the risks and opportunities brought by personalised and non-personalised experiences in social media. Indeed, while personalisation may lead to the much-discussed filter bubbles and echo chambers <ref type="bibr" target="#b14">[15]</ref>, a non-personalised experience may also easily lead to inaccurate, and thus irrelevant, content recommendations. Providing a "proportional opt-out" and allowing users to set a personal balance is not only desirable but also technically viable. Ideally, the user could choose the percentage of items following the personalised RS, and the remaining percentage of items would be non-personalised.</p></div>
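To illustrate why we consider the proportional opt-out technically viable, the following minimal Python sketch (our own hypothetical construction; the function name and parameters are invented, not any platform's actual code) composes a feed from a personalised and a non-personalised ranking according to a user-chosen ratio:

```python
def blend_feed(personalised, non_personalised, ratio, size):
    """Compose a feed of `size` items in which a fraction `ratio` comes from
    the personalised ranking and the rest from the non-personalised one
    (ratio = 1.0 means fully personalised, 0.0 fully non-personalised).
    Items from each source keep their relative order."""
    n_pers = round(ratio * size)
    pers = personalised[:n_pers]
    rest = non_personalised[:size - n_pers]
    feed, i, j = [], 0, 0
    for _ in range(len(pers) + len(rest)):
        # Take from the source whose quota is proportionally least consumed,
        # which spreads the two sources evenly instead of stacking them in blocks.
        if j >= len(rest) or (i < len(pers) and i * len(rest) <= j * len(pers)):
            feed.append(pers[i]); i += 1
        else:
            feed.append(rest[j]); j += 1
    return feed
```

In a user interface, `ratio` could simply be exposed as a slider in the settings, making the degree of personalisation an explicit, revisable choice rather than a fixed default.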
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.</head><p>Multiple profiling, which we define as the ability for users to create more than one personalised feed per profile, so that they can choose among different personalised outcomes. This could help users to diversify their informational experience <ref type="bibr" target="#b26">[27]</ref> and, indirectly, strengthen media pluralism. While this design affordance has been explored by considering pre-determined criteria to filter information (such as the algorithmic recommender personae <ref type="bibr" target="#b19">[20]</ref>), to our knowledge this simpler solution has not been tested to assess diversity exposure, and yet it seems that users, especially younger ones, naturally create more accounts to satisfy their need for multiple identities <ref type="bibr" target="#b27">[28]</ref>. Multiple profiling would eventually allow users to create different personalised experiences based on different interests. There is a risk, however, that, if this becomes a common standard practice for users, it may legitimise VLOPs in promoting filter bubbles by design.</p><p>To effectively implement a set of design affordances that is aligned with the intent of art. 27, there are some important considerations to be made. First of all, the complexities of 'preferences' should be fully acknowledged. Contrary to common assumptions, preferences are rather indeterminate, ephemeral, and often ambivalent. Consider how individuals have different "orders" of preferences; "first-order preferences" are expressed in the moment a stimulus or temptation affects our consciousness, whereas "second-order preferences" are the choices we make for ourselves upon further reflection <ref type="bibr" target="#b23">[24]</ref>. The satisfaction of the latter has been somewhat underestimated by VLOPs. On the contrary, by optimising for users' engagement, VLOPs have thrived and mostly stimulated first-order preferences. 
One of the main normative problems at stake is the apparent trade-off between engagement optimization and users' preferences alignment. By assuming that users always choose what they want (so-called 'revealed preferences'), VLOPs justify the engagement optimization model. Only by allowing multi-layered control that accommodates both lay and expert users, thus allowing for various levels of customization and understanding <ref type="bibr" target="#b24">[25]</ref>, can users be empowered to effectively meet their preferences; conversely, RSs can support them in developing, exploring, and understanding their own unique preferences <ref type="bibr" target="#b28">[29]</ref>. It can be questioned, however, whether engagement optimization and users' preferences alignment are naturally in contrast, or whether they can be complemented by design to offer an economically, individually and democratically sustainable balance.</p><p>In this brief analysis, we have introduced four promising design affordances; the first two are already widely tested features that have proved their effectiveness, the latter two are original solutions that could be easily implemented. Although there is room for incentivising design affordance standardisation, the DSA does not create incentives for VLOPs to invest in developing parameters that optimise for preferences alignment, medium-term goals, or even the realisation of public values, as <ref type="bibr" target="#b4">[5]</ref> already noted.</p></div>
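Several of the affordances discussed above (tag selection with user-set weights, keyword filtering, and multiple profiles) could share a simple underlying data model. The sketch below is a hypothetical construction of our own: the class and method names (`FeedProfile`, `set_tag`, `score`) are invented for illustration and do not correspond to any platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FeedProfile:
    """One of possibly several personalised profiles a user maintains
    ("multiple profiling"): a named set of tag weights plus blocked keywords."""
    name: str
    tag_weights: dict = field(default_factory=dict)   # tag -> user-chosen weight in [0, 1]
    blocked_keywords: set = field(default_factory=set)

    def set_tag(self, tag, weight):
        # Douyin-style control: pick a tag and decide how much it should count,
        # clamping the weight to the [0, 1] range.
        self.tag_weights[tag] = max(0.0, min(1.0, weight))

    def score(self, item_tags, item_text):
        # Keyword filtering in the spirit of Instagram's and TikTok's current tools.
        if any(k in item_text.lower() for k in self.blocked_keywords):
            return 0.0
        # An item's score is the summed weight of the user's tags it matches.
        return sum(self.tag_weights.get(t, 0.0) for t in item_tags)
```

A user could then keep, say, a "news" profile and a "food" profile and switch between them, each producing a differently ranked feed, which is the essence of the multiple-profiling affordance.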
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Integrating European Principles into VLOPs' Recommender Systems: Looking forward</head><p>It should be questioned whether the EU regulatory framework allows for "algorithmic contestability" and the implementation of the design affordances we have previously outlined. In this section, we argue that the emerging European regulatory framework can fruitfully complement the DSA's endeavour, influencing possible delegated acts and guidelines within the framework of the DSA. The principle of regulatory consistency within the EU, in fact, mandates that overlapping themes, like media personalization under the EMFA, align with similar provisions in the DSA. Explicit cross-references between these regulations establish direct legal links that guide their interpretation and implementation, ensuring that developments in media governance are congruent with overarching EU principles and policy objectives. EMFA introduces users' right to customise the media offer: they can opt out of the default settings of any device or user interface and autonomously tailor the media offers they receive according to their preferences (art. 20). Its focus is not on VLOPs, however. Art. 20 applies only to audiovisual media, therefore to hardware (remote controls that usually have specific buttons, for apps like Netflix and YouTube) or to software menus and shortcuts, smart TV interfaces and applications and search areas. It could be argued that lawmakers were regulating a different sector, avoiding the risk of conflict or inconsistency with art. 27 DSA. The right to customise the media offer, however, can be understood as a general right, also enshrined in art. 10 of the European Convention on Human Rights. As a matter of fact, the right to receive and impart information has been recognised as a fundamental point of departure to realise democratic values in the personalised media landscape <ref type="bibr">[30, 19]</ref>. 
It legitimises positive legal obligations and its violation would represent a systemic risk under art. 34 DSA. The Strengthened CoP also includes significant provisions that may further strengthen this right. It reiterates that "Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options" (Measure 19.2).</p><p>EMFA provides other provisions to strengthen this overarching human right. Article 3 asserts the right of recipients (i.e., users) to "receive a plurality of news and current affairs content, produced with respect for editorial freedom of media service providers, to the benefit of the public discourse". Moreover, the newly-established European Board for Media Services (EBMS) (which replaces the previous European Regulators Group for Audiovisual Media Services (ERGA)) is expected to regularly organise a "structured dialogue" between providers of VLOPs, representatives of media service providers and representatives of civil society in order to "foster access to diverse offerings of independent media on very large online platforms" (art. <ref type="bibr" target="#b18">19</ref>). These provisions seem to lay the normative foundations for the implementation of a right to be exposed to, and to customise, diverse news and media. How the right to customise the media offer and the provision to receive a plurality of news and current affairs content would interact and unfold, however, remains an open question. The establishment of media service providers which, according to art. 18 EMFA, will self-declare to VLOPs that they follow editorial standards and criteria of editorial independence, could offer an additional option for users to customise their experience by receiving only, or mostly (i.e., prioritising), content from such media. 
While the effectiveness of self-declarations and thus the quality of these media can be questioned <ref type="bibr" target="#b30">[31]</ref>, they can still set higher standards of media quality and provide a list of (more) reliable news media that users can decide to be exposed to in a way that supports access and exposure to diverse news and media.</p><p>While EMFA stresses the right of recipients to receive editorially independent content and diverse news and media, the Strengthened CoP also aims to "empower users with tools to assess the provenance and edit history or authenticity or accuracy of digital content" (Commitment 20). It also mandates that Signatories "will design and apply products and features (e.g., information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations" (Measure 22.7). As the CoP is expected to become a code of conduct under art. 34 and 35 DSA, it will create de facto legal obligations for VLOPs <ref type="bibr" target="#b31">[32]</ref>. This is a promising legal development. For example, the CoP also requires Signatories to provide "aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns." This is another meaningful provision, as it can inform policymakers on the effectiveness of any design choice implemented, and it could inform the implementation of art. 27 and 38.</p><p>Finally, the AI Act may provide room for mandating VLOPs to refrain from exploiting 'first-order preferences' by design via engagement optimization. According to art. 
5, AI models using "subliminal techniques" beyond a person's consciousness, or that are intentionally manipulative or designed to exploit a person's vulnerability in a manner that causes or is likely to cause physical or psychological harm, are to be banned. In parallel, the DSA addresses the issue of "dark patterns" in art. 25, which states that platforms' interfaces should not be designed in a way that hinders users' ability to make informed decisions. Also, the Digital Markets Act (DMA) <ref type="bibr" target="#b32">[33]</ref> aligns with this perspective: according to art. 6(3), users shall be allowed to disable the default settings of a gatekeeper's platform that steer them to use further services of the same gatekeeper. This provision empowers users to avoid the default nudging that the gatekeeper adopts to keep users locked into its own services. Relatedly, following art. 6(5) and 6(6), the gatekeeper cannot try to enhance the adoption of its services by up-ranking them unfairly in search results or preventing end-users from switching between different providers. 
As can be seen, users' autonomy should be promoted by aligning the implementation of the DSA with that of the AI Act and the DMA.</p><p>To sum up, complementing the DSA with the emerging EU regulatory framework may provide the normative foundations for (i) integrating by design control criteria such as authoritativeness (e.g., by allowing users to filter media service providers' content) and news and media diversity (e.g., by enabling exposure to diverse media service providers by design, or even "multiple profiling"); (ii) nuancing the balance between personalised and non-personalised content (i.e., "proportional opt-out"); (iii) providing additional information on how users interact with the newly available functionalities to prove, and eventually improve, the effectiveness of these design choices; (iv) preventing manipulative forms of engagement by design, thereby aligning the risk-based provisions of the DSA with those of the AI Act. How such affordances would be implemented is difficult to tell at present. In particular, the opportunity to integrate two fundamental normative principles for media, that is, authoritativeness and "media diversity", stands out. These, however, are under-specified, nuanced principles that are difficult to translate into practical procedures and code <ref type="bibr" target="#b33">[34]</ref>. While authoritativeness may be operationalised through media service providers that provide reliable professional content, news and, more broadly, media diversity may be more challenging to translate into code, as it can be interpreted and achieved in many different ways <ref type="bibr" target="#b34">[35,</ref><ref type="bibr" target="#b35">36]</ref>. However, we argue that, despite the technical and political challenges ahead, these perspectives are laid out within the EU regulations and have the legal and normative foundations to be implemented.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Our contribution lays the foundation for further research on the implementation of art. 27 and 38 of the DSA within the wider EU regulatory context impacting the interaction between users and VLOPs' RSs. After outlining the DSA provisions for RSs, we touched upon the choices made by VLOPs for the explanation and user control of recommendations, mentioning the risk of transparency washing. Subsequently, we proposed feasible user control features that VLOPs could implement to reach a best-case scenario of substantive application of art. 27(3) DSA. Finally, to inform our design considerations from a broader regulatory perspective, we explored the connection between the principles in art. 27 and 38 DSA and, in particular, those in EMFA and the CoP, and we argued that there are two main criteria that could be integrated into social media RSs: diversity and authoritativeness. How these principles would translate into specific design choices and transparency disclosures is open for discussion.</p></div>		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">Regulation (EU) 2022/2065 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act)</title>
				<imprint>
			<date type="published" when="2022-10">October 2022</date>
		</imprint>
	</monogr>
	<note>European Parliament and Council</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m">Regulation establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act)</title>
				<imprint>
			<date type="published" when="2024-03-20">20 March 2024</date>
		</imprint>
	</monogr>
	<note>European Parliament and Council</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation" />
		<title level="m">Strengthened Code of Practice on Disinformation</title>
				<imprint/>
	</monogr>
	<note>European Commission</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Regulation of news recommenders in the Digital Services Act: empowering David against the very large online Goliath</title>
		<author>
			<persName><forename type="first">N</forename><surname>Helberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Drunen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vrijenhoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Möller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Internet Policy Review</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Contesting personalized recommender systems: a cross-country analysis of user preferences</title>
		<author>
			<persName><forename type="first">C</forename><surname>Starke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Metikoš</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Helberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>De Vreese</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information, Communication &amp; Society</title>
		<imprint>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Self-determination through explanation: an ethical perspective on the implementation of the transparency requirements for recommender systems set by the Digital Services Act of the European Union</title>
		<author>
			<persName><forename type="first">M</forename><surname>Fabbri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2023 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="653" to="661" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Governing platform recommender systems in Europe: insights from China</title>
		<author>
			<persName><forename type="first">U</forename><surname>Reviglio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Santoni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Global Jurist</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="151" to="181" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://algorithmic-transparency.ec.europa.eu/about_en" />
		<title level="m">European Centre for Algorithmic Transparency</title>
				<imprint/>
	</monogr>
	<note>About ECAT</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m">Commission Delegated Regulation (EU) 2024/436 of 20 October 2023 supplementing Regulation (EU) 2022/2065 by laying down rules on the performance of audits for very large online platforms and search engines</title>
				<imprint>
			<date type="published" when="2023-10-20">20 October 2023</date>
		</imprint>
	</monogr>
	<note>European Commission</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m">Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)</title>
				<imprint>
			<date type="published" when="2024-06-13">13 June 2024</date>
		</imprint>
	</monogr>
	<note>European Parliament and Council</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Regulating high-reach AI: on transparency directions in the Digital Services Act</title>
		<author>
			<persName><forename type="first">K</forename><surname>Söderlund</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Engström</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Haresamudram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Larsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Strimling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Internet Policy Review</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="31" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://transparency.meta.com/features/explaining-ranking/" />
		<title level="m">Meta Transparency Centre, Our approach to explaining ranking</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<ptr target="https://support.tiktok.com/en/using-tiktok/exploring-videos/how-tiktok-recommends-content" />
		<title level="m">How TikTok recommends content</title>
				<imprint/>
	</monogr>
	<note>TikTok Support</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Transparency washing in the digital age: a corporate agenda of procedural fetishism</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zalnieriute</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Critical Analysis of Law</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page">139</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Breaking the filter bubble: democracy and design</title>
		<author>
			<persName><forename type="first">E</forename><surname>Bozdag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Den Hoven</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ethics and Information Technology</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="249" to="265" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Democratizing algorithmic news recommenders: how to materialize voice in a technologically saturated media ecosystem</title>
		<author>
			<persName><forename type="first">J</forename><surname>Harambam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Helberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Hoboken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society A</title>
		<imprint>
			<biblScope unit="volume">376</biblScope>
			<biblScope unit="issue">2133</biblScope>
			<biblScope unit="page">20180088</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Automated decision support technologies and the legal profession</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Kluttz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">K</forename><surname>Mulligan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Berkeley Technology Law Journal</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="853" to="890" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Thinking outside the black-box: the case for algorithmic sovereignty in social media</title>
		<author>
			<persName><forename type="first">U</forename><surname>Reviglio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Agosti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Social Media + Society</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">To nudge or not to nudge: news recommendation as a tool to achieve online media pluralism</title>
		<author>
			<persName><forename type="first">J</forename><surname>Vermeulen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Journalism</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="1671" to="1690" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Putting a human face on the algorithm: co-designing recommender personae to democratize news recommender systems</title>
		<author>
			<persName><forename type="first">L</forename><surname>Van Den Bogaert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Geerts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Harambam</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Journalism</title>
		<imprint>
			<biblScope unit="page" from="1" to="21" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">From algorithmic transparency to algorithmic choice: European perspectives on recommender systems and platform regulation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Busch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender Systems: Legal and Ethical Issues</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="31" to="54" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Editorial values for news recommenders: translating principles to engineering</title>
		<author>
			<persName><forename type="first">J</forename><surname>Stray</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">News Quality in the Digital Age</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="151" to="165" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Digital nudging with recommender systems: survey and future directions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Jannach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers in Human Behavior Reports</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page">100052</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Thorburn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bengani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stray</surname></persName>
		</author>
		<ptr target="https://medium.com/understanding-recommenders/what-does-it-mean-to-give-someone-what-they-want-the-nature-of-preferences-in-recommender-systems-82b5a1559157" />
		<title level="m">What does it mean to give someone what they want? The nature of preferences in recommender systems</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">How do different levels of user control affect cognitive load and acceptance of recommendations?</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">D L R P</forename><surname>Cardoso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IntRS@RecSys Workshop</title>
				<meeting>the IntRS@RecSys Workshop</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="35" to="42" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Social influence for societal interest: a pro-ethical framework for improving human decision making through multi-stakeholder recommender systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Fabbri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI &amp; Society</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="995" to="1002" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Review of user interface-facilitated serendipity in recommender systems</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Afridi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Olsson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Interactive Communication Systems and Technologies</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Alts and automediality: compartmentalising the self through multiple social media profiles</title>
		<author>
			<persName><forename type="first">E</forename><surname>Van der Nagel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">M/C Journal</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Each to his own: how different users call for different interaction methods in recommender systems</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">P</forename><surname>Knijnenburg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">J</forename><surname>Reijmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Willemsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth ACM Conference on Recommender Systems</title>
				<meeting>the Fifth ACM Conference on Recommender Systems</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="141" to="148" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Challenged by news personalisation: five perspectives on the right to receive information</title>
		<author>
			<persName><forename type="first">S</forename><surname>Eskens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Helberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Moeller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Media Law</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="259" to="284" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Brogi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Borges</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Carlini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nenadic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bleyer-Simon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Kermer</surname></persName>
		</author>
		<title level="m">The European Media Freedom Act: media freedom, freedom of expression and pluralism</title>
				<imprint>
			<publisher>Policy Department for Citizens&apos; Rights and Constitutional Affairs</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Codes of conduct in the Digital Services Act: functions, benefits &amp; concerns</title>
		<author>
			<persName><forename type="first">R</forename><surname>Griffin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Technology and Regulation</title>
		<imprint>
			<biblScope unit="page" from="167" to="187" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m">Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector (Digital Markets Act)</title>
				<imprint>
			<date type="published" when="2022-09">September 2022</date>
		</imprint>
	</monogr>
	<note>European Parliament and Council</note>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices</title>
		<author>
			<persName><forename type="first">J</forename><surname>Morley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kinsey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Elhalal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science and Engineering Ethics</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="2141" to="2168" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Diversity by design</title>
		<author>
			<persName><forename type="first">N</forename><surname>Helberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Information Policy</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="441" to="469" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">The unified framework of media diversity: a systematic literature review</title>
		<author>
			<persName><forename type="first">F</forename><surname>Loecherbach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Moeller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Trilling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Van Atteveldt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Journalism</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="605" to="642" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
