<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Report on NORMalize: The Second Workshop on the Normative Design and Evaluation of Recommender Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alain</forename><surname>Starke</surname></persName>
							<email>a.d.starke@uva.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">University of Bergen</orgName>
								<address>
									<settlement>Bergen</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sanne</forename><surname>Vrijenhoek</surname></persName>
							<email>s.vrijenhoek@uva.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lien</forename><surname>Michiels</surname></persName>
							<email>lien.michiels@uantwerpen.be</email>
							<affiliation key="aff2">
								<orgName type="institution">University of Antwerp</orgName>
								<address>
									<settlement>Antwerp</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
							<affiliation key="aff5">
								<orgName type="department">imec-SMIT</orgName>
								<orgName type="institution">Vrije Universiteit Brussel</orgName>
								<address>
									<settlement>Brussels</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Johannes</forename><surname>Kruse</surname></persName>
							<email>johannes.kruse@eb.dk</email>
							<affiliation key="aff3">
								<orgName type="institution">JP/Politikens Media Group</orgName>
								<address>
									<settlement>Copenhagen</settlement>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
							<affiliation key="aff6">
								<orgName type="institution">Technical University of Denmark</orgName>
								<address>
									<settlement>Kongens Lyngby</settlement>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nava</forename><surname>Tintarev</surname></persName>
							<email>n.tintarev@maastrichtuniversity.nl</email>
							<affiliation key="aff4">
								<orgName type="institution">Maastricht University</orgName>
								<address>
									<settlement>Maastricht</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Report on NORMalize: The Second Workshop on the Normative Design and Evaluation of Recommender Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">070D3DC8C840EC78E756E032771E2AB6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:50+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>normative thinking</term>
					<term>normative design</term>
					<term>recommender systems</term>
					<term>norms</term>
					<term>values</term>
					<term>value-sensitive design</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recommender systems are among the most widely used applications of artificial intelligence. Because of their widespread use, it is important that practitioners and researchers think about the impact they may have on users, society, and other stakeholders. To that effect, the NORMalize workshop seeks to introduce normative thinking, to consider the norms and values that underpin recommender systems in the recommender systems community. The objective of NORMalize is to bring together a growing community of researchers and practitioners across disciplines who want to think about the norms and values that should be considered in the design and evaluation of recommender systems, and further educate them on how to reflect on, prioritise, and operationalise such norms and values. This document is a report on the second NORMalize workshop, co-located with ACM RecSys '24 in Bari, Italy.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The possible societal impact of recommender systems is becoming increasingly important for the systems' designers <ref type="bibr" target="#b0">[1]</ref>. This is underlined by the increased importance of so-called 'beyond-accuracy' metrics in recommender research. These include methods that devote attention to notions of fairness, such as statistical parity or equality of opportunity, in the design and evaluation of recommender systems <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. However, this also means that many values could be considered when developing recommender systems, of which fairness towards the end-users of the system is but one example <ref type="bibr" target="#b3">[4]</ref>.</p><p>Identifying and balancing values of recommender systems requires so-called normative thinking and decision-making <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. Normative thinking requires recommender designers to reflect on what the system should be, rather than focusing on the current state of the system and its output. Beyond identifying relevant values, this also includes determining how these values would be present in what is recommended by a system, examining possible conflicts between different values, and justifying how certain values should be prioritised over others in specific cases <ref type="bibr" target="#b7">[8]</ref>.</p><p>Last year saw the first edition of our workshop. We organized an interactive session in which attendees were encouraged to come up with their own normative framework for a specific use case. Besides that, we also welcomed our first research contributions and published proceedings <ref type="bibr" target="#b8">[9]</ref> comprising nine research papers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Overview of Contributions</head><p>This year's workshop continued last year's work by again welcoming original research contributions. These are included in the workshop's proceedings, describing new research on the design and evaluation of normative recommenders. In total, we received nine paper submissions, of which six were accepted for the proceedings.</p><p>The NORMalize2024 program consisted of two blocks with three research presentations each, and a few interactive parts in between. The first block was a session on 'Data and Frameworks', featuring three research presentations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Disagreenotes</head><p>This year's program featured short, provocative statements by members of the organizing committee. We named these Disagreenotes, as we expected some attendees to disagree with our viewpoints, even though they might be held by some members of the RecSys community.</p><p>The goal was to foster discussion among the attendees on propositions relevant to the workshop. The workshop organizers ensured that they could convincingly argue both positions on the statements, to facilitate a discussion in case the audience all shared the same viewpoint. An added benefit was that this created a safe space, in which perspectives were not taken personally. All four Disagreenotes sparked lively discussion among the participants of the workshop. We wish to thank the participants for their active participation in these insightful discussions. Below, we summarize the presented Disagreenotes, as well as the main insights raised in the subsequent discussions.</p><p>Disagreenote 1: We do not need personalized recommender systems. The first Disagreenote triggered a lot of interaction from the audience, and led to an almost philosophical discussion of what it means to be 'personalized' and what it means to 'need' something. For example, it was noted that we do not need personalized recommender systems in the same sense that we need water and food. While personalization can be considered helpful to filter through large amounts of information, other non-personalized alternatives may be possible, and sometimes even preferred. For example, in the context of news, it is important that some parts of the online news platform are and remain curated by editors, as it is important that some news reaches everyone. Yet, at the same time, personalization can be very beneficial to surface news that may otherwise never make it onto the homepage, such as regional news. 
To summarize, when building recommender systems, we should evaluate what needs or desires they address, and whether those needs and desires might be better served by a non-personalized system.</p><p>Disagreenote 2: There is no such thing as unbiased data; therefore, striving for unbiased AI is nonsense. While the audience agreed that data is inherently biased and that striving for unbiased data is an unrealistic goal, the second part of the statement prompted discussion. Data collected from the real world reflects human biases, prompting the question of what objectives should guide the development of AI systems. Should the focus be on achieving "unbiased" AI, or is it more pragmatic to prioritize transparency and effective bias mitigation? Transparency regarding how data is collected, whom it represents, and the context of its use can enable practitioners to better interpret and responsibly leverage data, even when it is biased. The discussion also examined the societal risks of ignoring bias, such as reinforcing systemic inequalities, and considered the allocation of responsibility: should developers bear the primary responsibility, or should users and other stakeholders share this burden? This Disagreenote underscored the inherent complexity of striving for fairness and accountability in AI.</p><p>Disagreenote 3: Ethical guidelines, and non-binding types of policy, are as far as government bodies should go to regulate recommender systems. If one of the key points of NORMalize is to discover latent norms and values that we are often not even consciously aware of, then we must also recognize that European laws such as the AI Act and the Digital Services Act embody European norms and values, which are now imposed on the rest of the world. While during the main conference there was often a good deal of muttering about these laws, and specifically the GDPR, participants of the NORMalize workshop were (perhaps unsurprisingly) generally in favor of increased regulation. 
They noted that there is no evidence yet that regulation hinders innovation, but also that laws need to be well-structured and clear in order to be effective.</p><p>Disagreenote 4: There are too many workshops about roughly the same topic. NORMalize should not be organized next year, to allow other workshops to gain more critical mass. This Disagreenote was meant to prompt participants to share their thoughts about potential future directions NORMalize could take. RecSys'24 hosted 21 workshops. Out of those, FAccTRec, AltRecSys and RecSoGood were topically strongly related to NORMalize, whereas domain-specific workshops such as INRA, MuRS or HealthRec could have benefited from participants taking a normative perspective. As workshop organizers, we wondered whether we, as one of the smaller workshops, should take a step back and allow other workshops to gain critical mass and effectuate change in the conference at large. Participants saw the merit of this point, yet also argued that NORMalize was quite original in its setup, and likely the only workshop that succeeded in bringing interdisciplinary perspectives to the conference.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Submitted Work</head><p>The accepted work (nine registered abstracts, six accepted) can be thematically clustered into papers dealing with "Data and Frameworks" and "Policy and Values". Each paper received three reviews by members of the program committee, at least one from a reviewer with a technical background and one from a reviewer with a social science or humanities background.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Data and Frameworks</head><p>Publicly available datasets are crucial for addressing challenges in recommender systems, particularly concerning content diversity and user behavior analysis. In their work, "IDEA - Informfully Dataset with Enhanced Attributes", Heitz et al. introduced the IDEA dataset, an open-source collection that combines diverse news articles, detailed user profiles, item recommendations, and rich user-item interactions from a field study on news consumption. This dataset integrates real-time session tracking with self-reported survey data on user satisfaction and knowledge acquisition, providing a valuable resource for designing normative recommender systems.</p><p>Continuing the theme of content diversity, Bekavac et al. presented "From Walls to Windows: Creating Transparency to Understand Filter Bubbles in Social Media". They developed SOAP (System for Observing and Analyzing Posts), a novel system that leverages a multimodal language model to study filter bubbles at scale on large online platforms. SOAP can generate and navigate filter bubbles based on topic prompts, enabling analysis of how topic diversity diminishes over time in social media feeds. Their findings reveal a significant decline in topic diversity within just 60 minutes of scrolling, highlighting the impact of recommender systems on content diversity.</p><p>Further contributing to resources for recommender system evaluation, Malenšek et al. introduced "Generating Diverse Synthetic Datasets for Evaluation of Real-life Recommender Systems". They developed a framework for generating synthetic datasets that are diverse and statistically coherent, tailored to real-world recommender systems. 
This approach allows for the controlled creation of datasets with customizable attributes, such as complex feature interactions and specific distributions, facilitating experiments that require particular setups.</p><p>Their modular and open-source Python package addresses the need for flexible synthetic data generation, aiding in benchmarking algorithms, detecting bias, and advancing recommender system evaluations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Policy and Values</head><p>Policy surrounding recommender systems and their values can take many forms. On the one hand, legislation can help to safeguard against the introduction of harmful norms and values and to set standards. On the other hand, designers and practitioners of relevant systems can define which norms and values should be incorporated into their platforms.</p><p>One example is found in journalism. Møller Hartley and Petrucci show in their work titled "Diversifying for Democracy: Cultivating Publics via Algorithmic Design and the Normative Consequences for Journalism" how the concept of diversity, an often-used value in news recommender systems <ref type="bibr" target="#b3">[4]</ref>, is typically rooted in two related concepts: filter bubbles and choice overload. Their literature review suggests that solutions to diversity problems can therefore be sought in exposure and viewpoint diversity. One example provided in the paper is that recommending 'more of the same' could be not only boring to users, but also dangerous to democratic processes.</p><p>A different perspective is given by legal researchers. Reviglio and Fabbri examine how EU law could affect large platforms in their work "Navigating the Digital Services Act: Scenarios of Transparency and Control in VLOP Recommender Systems". Their work discusses how the Digital Services Act affects various platforms that run recommender system services, particularly those on large platforms. It highlights which parts of the EU legislation contain normative grounds and what the minimum and maximum conditions are for different forms of personalization and the collection of personal data.</p><p>Finally, the work of Atzenhofer-Baumgartner et al. showcases an example of value identification in a digital archive. 
Their work titled "Value Identification in Multi-Stakeholder Recommender Systems for Humanities and Historical Research: The Case of the Digital Archive Monasterium.net" shows how various stakeholders and users of this digital archive differ in their main values. For example, editors of this platform value the visibility of different content, while researchers would like recommendations to be relevant to them, focusing on accuracy. The work discusses the main challenges, for example with regard to conflicting values.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Conclusion</head><p>These two blocks show the versatility of the topics concerning normativity and recommender systems. We feel that the scope of this topic is not limited to the contributions we received this year, but that they do provide insights into how norms and values relate to recommender system design. We wholeheartedly invite you to read these proceedings and, if possible, to contribute to a future edition of this workshop.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We would like to thank the participants and authors of accepted contributions for their valuable inputs to the workshop, our program committee for their thoughtful reviews, as well as the RecSys'24 organisers for their support in the organisation of NORMalize. Finally, we would like to thank our employers and funding bodies. Sanne Vrijenhoek's contribution to this research is supported by the AI, Media and Democracy Lab. Lien Michiels' contribution to this research was supported by the Research Foundation Flanders (FWO) under grant number S006323N and the Flanders AI research program. Johannes Kruse's contribution to this research is supported by the Innovation Foundation Denmark under grant number 1044-00058B and Platform Intelligence in News under project number 0175-00014B. Alain Starke's contribution was in part supported by the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the Centre for Research-based Innovation scheme, project number 309339. Nava Tintarev's contribution is supported by the project ROBUST: Trustworthy AI-based Systems for Sustainable Growth with project number KICH3.LTP.20.006, which is (partly) financed by the Dutch Research Council (NWO), RTL, and the Dutch Ministry of Economic Affairs and Climate Policy (EZK) under the program LTP KIC 2020-2023. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Exploring author gender in book rating and recommendation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Ekstrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R I</forename><surname>Kazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Mehrpouyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kluver</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th ACM conference on recommender systems</title>
				<meeting>the 12th ACM conference on recommender systems</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="242" to="250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards a fair marketplace: Counterfactual evaluation of the trade-off between relevance, fairness &amp; satisfaction in recommendation systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mehrotra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mcinerney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Bouchard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lalmas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Diaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th acm international conference on information and knowledge management</title>
				<meeting>the 27th acm international conference on information and knowledge management</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2243" to="2251" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Do graph neural networks build fair user models? assessing disparate impact and mistreatment in behavioural user profiling</title>
		<author>
			<persName><forename type="first">E</forename><surname>Purificato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Boratto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>De Luca</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st ACM International Conference on Information &amp; Knowledge Management</title>
				<meeting>the 31st ACM International Conference on Information &amp; Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="4399" to="4403" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">RADio - rank-aware divergence metrics to measure normative diversity in news recommendations</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vrijenhoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bénédict</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gutierrez Granada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Odijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Rijke</surname></persName>
		</author>
		<idno type="DOI">10.1145/3523227.3546780</idno>
		<ptr target="https://doi.org/10.1145/3523227.3546780" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th ACM Conference on Recommender Systems, RecSys &apos;22</title>
				<meeting>the 16th ACM Conference on Recommender Systems, RecSys &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="208" to="219" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Normative theory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Buckler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and methods in political science</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="156" to="180" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Normativity</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Thomson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Normative and empirical research methods: Their usefulness and relevance in the study of law as an object</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">A</forename><surname>Christiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia-Social and Behavioral Sciences</title>
		<imprint>
			<biblScope unit="volume">219</biblScope>
			<biblScope unit="page" from="201" to="207" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Morality, ethics, and reflection: a categorization of normative is research</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Stahl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the association for information systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Report on NORMalize: The first workshop on the normative design and evaluation of recommender systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vrijenhoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Michiels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kruse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Starke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">V</forename><surname>Guerrero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First Workshop on the Normative Design and Evaluation of Recommender Systems (NORMalize 2023), co-located with the 17th ACM Conference on Recommender Systems (RecSys 2023)</title>
				<meeting>the First Workshop on the Normative Design and Evaluation of Recommender Systems (NORMalize 2023), co-located with the 17th ACM Conference on Recommender Systems (RecSys 2023)</meeting>
		<imprint>
			<publisher>CEUR</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3639</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
