<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Report on NORMalize: The First Workshop on the Normative Design and Evaluation of Recommender Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sanne</forename><surname>Vrijenhoek</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lien</forename><surname>Michiels</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Antwerp</orgName>
								<address>
									<settlement>Antwerp</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Johannes</forename><surname>Kruse</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Ekstra Bladet</orgName>
								<address>
									<settlement>Copenhagen</settlement>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alain</forename><surname>Starke</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="institution">University of Bergen</orgName>
								<address>
									<settlement>Bergen</settlement>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jordi</forename><forename type="middle">Viader</forename><surname>Guerrero</surname></persName>
							<affiliation key="aff4">
								<orgName type="institution">TU Delft</orgName>
								<address>
									<settlement>Delft</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nava</forename><surname>Tintarev</surname></persName>
							<affiliation key="aff5">
								<orgName type="institution">Maastricht University</orgName>
								<address>
									<settlement>Maastricht</settlement>
									<country key="NL">the Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Report on NORMalize: The First Workshop on the Normative Design and Evaluation of Recommender Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7E53147B0817B677159D5CF09A9BA1DB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:44+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>normative thinking</term>
					<term>normative design</term>
					<term>recommender systems</term>
					<term>norms</term>
					<term>values</term>
					<term>value-sensitive design</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recommender systems are among the most widely used applications of artificial intelligence. Because of their widespread use, it is important that practitioners and researchers think about the impact they may have on users, society, and other stakeholders. To that effect, the NORMalize workshop seeks to introduce normative thinking, that is, consideration of the norms and values that underpin recommender systems, to the recommender systems community. The objective of NORMalize is to bring together a growing community of researchers and practitioners across disciplines who want to think about the norms and values that should be considered in the design and evaluation of recommender systems, and to further educate them on how to reflect on, prioritise, and operationalise such norms and values. This document is a report on the first workshop, co-located with ACM RecSys '23 in Singapore on September 19, 2023.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Users and developers of recommender systems are becoming increasingly aware of the possible societal impact of their systems <ref type="bibr" target="#b0">[1]</ref>. As 'beyond-accuracy' metrics become more common in recommender research, much attention has been given to methods related to notions of fairness, such as statistical parity or equality of opportunity, in the design or evaluation of recommender systems <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. However, many values could be considered in the development and goals of a recommender system, of which fairness towards the end-users of the system is but one example <ref type="bibr" target="#b3">[4]</ref>.</p><p>Identifying and balancing these values requires so-called normative thinking and decision-making <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. Normative thinking requires us to reflect on what the system should be, rather than on what the current state of the system (output) is. 
Besides identifying relevant values, this includes determining how these values would be expressed in what is recommended by a system, how different values may be conflicting, and justifying how certain values in such cases should be prioritised over others <ref type="bibr" target="#b7">[8]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Interactive Session</head><p>In the on-site morning session, participants were first introduced to the principles and practices of normative thinking. After this short lecture, participants were split into breakout groups, each of which discussed a specific use case of a recommender system. There were three groups: X (formerly known as Twitter), BBC News, and Spotify. First, participants were asked to identify when, where, and how the system would be used and what it would recommend. For example, the Spotify recommender system(s) need to recommend songs, albums, and artists, but also podcasts, playlists, and so on. Then, they identified relevant stakeholders and the norms and values that mattered to them. Again using the Spotify example, the participants identified many different stakeholders, from advertisers and creators to end-users, investors, and more. They then catalogued these stakeholders' values: for example, discoverability matters greatly to creators and indie labels, whereas profit matters most to investors. Next, they considered the relationships between values and their possible (negative) consequences. Using the example of X: if we value freedom of speech, could that lead to hate speech and misinformation? Subsequently, each group was allocated a total of one hundred points to be divided among the various values. Each group member was given the responsibility to represent one or more stakeholders of the recommender system and to champion their respective values.</p><p>Each group was given a starting kit to work with: marked envelopes with instructions for each step of the process, sticky notes, Sharpies, and pens. Groups were free to come up with their own creative process, and each group approached the task differently: whereas the Spotify group immediately created a mind map on the wall, the Twitter group only made notes on paper. 
The group work concluded with a discussion of what a recommender system that prioritizes values and stakeholders in this way would look like. Finally, each group presented the outcomes of its discussion to all workshop participants and organizers. Interestingly, we found that the outcomes also differed greatly: each use case involved a different number of stakeholders and values and, as a result, took a different amount of time at each step.</p><p>After the session concluded, participants were asked to complete a short survey to gauge their satisfaction with the process, as well as to provide oral feedback. Generally speaking, the interactive session was well received by the participants. Those who completed the survey unanimously found the instructions clear and indicated that they had learned something during the session. The most well-liked parts of the interactive session were those that required discussion: assigning values to stakeholders, discussing consequences, and prioritizing values as a group. We also asked participants how we could further improve the interactive session.</p><p>One participant mentioned that using real companies as use cases forced people to reason as if they were part of these companies, which limited their creativity somewhat. They suggested using fictional company descriptions in the future instead.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Keynote</head><p>The subsequent keynote addressed the norms and governance of recommender systems on digital platforms like YouTube and TikTok, especially in relation to user-generated content. It addressed concerns about the platforms' algorithmic systems contributing to user harm and challenged the notion of platforms as mere content conduits. There were three main points: First, the need to question the established definitions of recommendation-related harms and to encourage diverse frameworks for evaluating these systems. Second, the importance of considering the long-term effects of the commercialization of the information landscape and the potential of algorithmic recommendation to elevate historically excluded voices. Lastly, the keynote called for greater appreciation of the nature of the 'items' being recommended, which opens up possibilities for more sophisticated discussions on normative frameworks for curation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Submitted Work</head><p>The accepted work (13 registered abstracts, 9 accepted) can be thematically clustered into papers dealing with "Power Structures", "News Recommendation", and "Practical Applications". Each paper received three reviews by members of the program committee, with at least one reviewer from a technical background and one from a social science/humanities background.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Power Structures</head><p>Recommender systems often exist in a complex environment, with multiple stakeholders competing to optimize their own objectives. In "Towards a Pragmatic Approach for studying Normative Recommender Systems: exploring Power Dynamics in Digital Platform Markets", Binst et al. argue that decision power lies primarily with the system providers. They illustrate this with the key bottlenecks "lock-in and monopolization" and "engagement-centric logic", and make concrete suggestions for regulatory principles that may alleviate them.</p><p>"Designing and Implementing Socially Beneficial Recommender Systems: An Interdisciplinary Approach" by Mallia presents a theoretical argument on how we can move from engagement-centric recommender systems to recommender systems that have a positive impact on society. Central to this discussion is the definition of what a 'positive social outcome' actually is, as this depends on a multitude of socio-cultural factors. Furthermore, the actual implementation of such systems requires interdisciplinary methodologies and collaboration.</p><p>The design of recommender systems is often rooted in a utilitarian or consequentialist worldview. "Digital Humanism and Norms in Recommender Systems" by Prem et al. details how Digital Humanism can serve as a useful lens for approaching complex issues surrounding the design of recommender systems, and as such promote values such as human rights, democracy, inclusion, and diversity. For example, from the Digital Humanism perspective, users should be empowered to understand the system and, as a consequence, will make better choices regarding their interactions with it.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">News Recommendation</head><p>Publicly available datasets are crucial for tackling challenges faced by news recommender systems, especially in terms of news diversity. Lucas et al. address this need with the News Portal Recommendations (NPR) dataset, introduced in their work "NPR: A News Portal Recommendations Dataset". Distinct from the Microsoft News Dataset (MIND) <ref type="bibr" target="#b8">[9]</ref>, the NPR dataset focuses on frequent user interactions with hard news. Furthermore, to assess diversity metrics, Lucas et al. enriched the dataset with the metadata needed to employ the RADio framework <ref type="bibr" target="#b3">[4]</ref>.</p><p>Building on the theme of enhancing news recommendations, another noteworthy study delves deeper into a specific challenge. In "Improving and Evaluating the Detection of Fragmentation in News Recommendations with the Clustering of News Story Chains", Polimeno et al. focus on quantifying fragmentation in news recommendations. Specifically, they examine how to accurately measure the fragmentation of information streams in news recommendations. To do this, they employ Natural Language Processing (NLP) to identify distinct news events, stories, or timelines. Their work features a thorough investigation of different approaches, such as hierarchical clustering coupled with SentenceBERT text representations, along with the analysis of simulated scenarios. These results could provide valuable insights for stakeholders concerning the measurement and interpretation of fragmentation.</p><p>Going beyond data and fragmentation, there is also an emerging emphasis on enhancing the user experience in news recommendations. Kiddle et al. formulate a novel user-centric approach for promoting serendipity in news recommender systems. This approach leverages user familiarity with the algorithmic language of recent social media, particularly TikTok, to nurture news discovery. 
They introduce the concept of 'navigable surprise', which they define as the experience of encountering novel, diverse, relevant, and unexpected information under conditions of immediate (i.e., real-time) and bounded (i.e., item-oriented) agency. To realize 'navigable surprise', they propose a combination of short-term interest modeling with consumption-based (implicit) user signaling. As such, they highlight the centrality of short-term interest modeling to serendipity in recommender design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Practical Applications</head><p>In "Value-Based Nudging in News Recommender Systems - Results From an Experimental User Study", Modre et al. explore the potential of nudges for changing people's news reading behavior. They evaluate two types of nudges, feedback-based and social norms-based, both grounded in theory from psychology and related social sciences, and find that social norms-based nudges achieve the best results. Their study provides a great example of how interdisciplinary work that bridges the social and computer sciences can help develop more effective, socially responsible recommender systems.</p><p>"Refining Deliberative Standards for Online Political Communication: Introducing a Summative Approach to Designing Deliberative Recommender Systems" by Stolwijk et al. formulates design guidelines, rooted in political theory, for recommender systems that wish to foster deliberative democracy. By proposing a set of concrete metrics and objectives that can be used to design and evaluate deliberative recommender systems, they contribute to the operationalization of normative goals that were previously overlooked.</p><p>In "Classification of Normative Recommender Systems", Heitz proposes a classification of recommender systems into four types, related to how and when normative goals are introduced into the recommender system; for example, in the preprocessing stage or as a postprocessing step. He argues that the different types are not directly comparable and will lead to different results. As such, his classification contributes to a more 'mature' debate on normative goals in recommender systems.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We would like to thank the participants and authors of accepted contributions for their valuable inputs to the workshop, our program committee for their thoughtful reviews, as well as the RecSys'23 organisers for their support in the organisation of NORMalize. Finally, we would like to thank our employers and funding bodies. Sanne Vrijenhoek's contribution to this research is supported by the AI, Media and Democracy Lab. Lien Michiels' contribution to this research was supported by the Research Foundation Flanders (FWO) under grant number S006323N and the Flanders AI research program. Johannes Kruse's contribution to this research is supported by the Innovation Foundation Denmark under grant number 1044-00058B and Platform Intelligence in News under project number 0175-00014B. Alain Starke's contribution was in part supported by the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the Centre for Research-based Innovation scheme, project number 309339. Jordi Viader Guerrero's contribution is supported by the TU Delft AI Labs Initiative and the AI DeMoS Lab. Nava Tintarev's contribution is supported by the project ROBUST: Trustworthy AI-based Systems for Sustainable Growth with project number KICH3.LTP.20.006, which is (partly) financed by the Dutch Research Council (NWO), RTL, and the Dutch Ministry of Economic Affairs and Climate Policy (EZK) under the program LTP KIC 2020-2023. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Exploring author gender in book rating and recommendation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Ekstrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R I</forename><surname>Kazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Mehrpouyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kluver</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th ACM Conference on Recommender Systems</title>
				<meeting>the 12th ACM Conference on Recommender Systems</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="242" to="250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards a fair marketplace: Counterfactual evaluation of the trade-off between relevance, fairness &amp; satisfaction in recommendation systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mehrotra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mcinerney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Bouchard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lalmas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Diaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th ACM International Conference on Information and Knowledge Management</title>
				<meeting>the 27th ACM International Conference on Information and Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2243" to="2251" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Do graph neural networks build fair user models? assessing disparate impact and mistreatment in behavioural user profiling</title>
		<author>
			<persName><forename type="first">E</forename><surname>Purificato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Boratto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>De Luca</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st ACM International Conference on Information &amp; Knowledge Management</title>
				<meeting>the 31st ACM International Conference on Information &amp; Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="4399" to="4403" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">RADio - Rank-aware divergence metrics to measure normative diversity in news recommendations</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vrijenhoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bénédict</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gutierrez Granada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Odijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Rijke</surname></persName>
		</author>
		<idno type="DOI">10.1145/3523227.3546780</idno>
		<ptr target="https://doi.org/10.1145/3523227.3546780" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th ACM Conference on Recommender Systems, RecSys &apos;22</title>
				<meeting>the 16th ACM Conference on Recommender Systems, RecSys &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="208" to="219" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Normative theory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Buckler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and methods in political science</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="156" to="180" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Normativity</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Thomson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Normative and empirical research methods: Their usefulness and relevance in the study of law as an object</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">A</forename><surname>Christiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia-Social and Behavioral Sciences</title>
		<imprint>
			<biblScope unit="volume">219</biblScope>
			<biblScope unit="page" from="201" to="207" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Morality, ethics, and reflection: a categorization of normative IS research</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Stahl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the association for information systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">MIND: A large-scale dataset for news recommendation</title>
		<author>
			<persName><forename type="first">F</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.acl-main.331</idno>
		<ptr target="https://aclanthology.org/2020.acl-main.331" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</title>
				<meeting>the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="3597" to="3606" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
