<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Grounded Ethical AI: A Demonstrative Approach with RAG-Enhanced Agents</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">José</forename><forename type="middle">Antonio</forename><surname>Siqueira De Cerqueira</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ayman</forename><forename type="middle">Asad</forename><surname>Khan</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rebekah</forename><surname>Rousi</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Vaasa (UWASA)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nannan</forename><surname>Xi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Juho</forename><surname>Hamari</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kai-Kristian</forename><surname>Kemell</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pekka</forename><surname>Abrahamsson</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Tampere University (TAU)</orgName>
								<address>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Grounded Ethical AI: A Demonstrative Approach with RAG-Enhanced Agents</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">40FE752CD07CA0DBE12689161777810C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:09+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>AI ethics</term>
					<term>Large Language Models</term>
					<term>Trustworthiness</term>
					<term>AI4SE</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Large Language Models (LLMs) have become central in various fields, yet their trustworthiness remains a pressing concern, especially in developing ethically aligned AI-based systems. This paper presents a demonstration of an LLM-based multi-agent system incorporating Retrieval-Augmented Generation (RAG) to support developers in creating AI systems that align with legal and ethical guidelines. Leveraging documents like the EU AI Act, AI HLEG guidelines, and ISO/IEC 42001:2024, the prototype utilizes multiple agents with specialized roles, structured conversations, and debate rounds to enhance both ethical rigor and trustworthiness. Initial evaluations on real-world AI incidents reveal that this system can produce AI solutions adhering to specific ethical requirements, though further refinements are needed for citation accuracy and practical application. This demonstration illustrates the potential of RAG-enhanced LLMs to operationalize AI ethics and regulatory compliance within the development process, highlighting future directions for achieving more reliable and ethically robust AI solutions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Artificial Intelligence (AI) systems, particularly Large Language Models (LLMs), have become indispensable tools across a wide range of applications. However, trustworthiness in LLMs remains a significant concern <ref type="bibr" target="#b0">[1]</ref>, increased by the probabilistic nature of LLMs and the huge amount of data they are trained on <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. Issues such as bias, misinformation, and hallucinations in LLM outputs pose risks when these models are employed in real world scenarios, such as software engineering (LLM4SE) <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>. In relation to AI, diverse stakeholders have produced ethical guidelines and principles to guide the development of ethically aligned AI-based systems, but these efforts remain too abstract and high level. In this sense, practitioners face several challenges when trying to operationalise AI ethical principles during the software development life cycle <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>. The European Union is moving forward the EU AI Act, serving as a regulatory standards that companies will have to adhere to <ref type="bibr" target="#b8">[9]</ref>. Therefore, applying LLM4SE in the context of the development of ethically aligned AI-based systems is an interesting topic of research that this study approaches. To the best of our knowledge, there are no existing studies in the literature that undertake a similar approach.</p><p>Several techniques found in the literature serve to improve trustworthiness in LLM. They are used to implement a prototype that is an LLM-based multi-agent system with Retrieval Augmented Generation (RAG). 
These include structured conversations <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>, agents with specialized roles <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b10">11]</ref>, multiple rounds of debate <ref type="bibr" target="#b10">[11]</ref>, human interaction <ref type="bibr" target="#b10">[11]</ref>, and the use of RAG to ground the agents' knowledge <ref type="bibr" target="#b5">[6]</ref>. This paper presents a demonstration of a prototype LLM-based multi-agent system with RAG, designed to mitigate these challenges. The system incorporates Retrieval-Augmented Generation to enhance the trustworthiness of the generated AI-based systems <ref type="bibr" target="#b5">[6]</ref>. By referencing external ethical guidelines and standards such as the EU AI Act <ref type="bibr" target="#b8">[9]</ref>, AI HLEG <ref type="bibr" target="#b12">[13]</ref>, and ISO/IEC 42001:2024 <ref type="bibr" target="#b13">[14]</ref>, the prototype supports developers in building AI-based systems that align with ethical and legal requirements.</p></div>
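As a rough illustration of the grounding step, the sketch below ranks guideline passages against a query with a bag-of-words cosine similarity. All snippets, section labels, and the scoring scheme are hypothetical stand-ins, not the prototype's actual retrieval index or pipeline.

```python
from collections import Counter
import math

# Hypothetical passages standing in for indexed sections of the
# EU AI Act, AI HLEG guidelines, and ISO/IEC 42001:2024.
CORPUS = {
    "EU AI Act, Art. 10": "training data governance and bias mitigation for high-risk ai systems",
    "AI HLEG, Req. 5": "diversity non-discrimination and fairness avoiding unfair bias",
    "ISO/IEC 42001, 6.1": "ai risk assessment and treatment within the management system",
}

def _vec(text):
    """Bag-of-words term-count vector."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Return the k guideline sections most similar to the query."""
    q = _vec(query)
    ranked = sorted(CORPUS, key=lambda s: _cosine(q, _vec(CORPUS[s])), reverse=True)
    return ranked[:k]

# The retrieved sections would then be injected into the agents' prompts.
sections = retrieve("mitigate gender bias in a recruitment ai system")
```

A production system would replace the toy scorer with dense embeddings and chunked documents, but the contract is the same: the generator only sees passages the retriever can later be held accountable for.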
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">LLM-based Multi-Agent System with RAG</head><p>The development and evaluation of the prototype follow the Design Science Research (DSR) method <ref type="bibr" target="#b14">[15]</ref>. This process begins with an exploration phase, where we establish research motivation, identify existing gaps, and examine relevant literature for techniques to improve trustworthiness in LLMs. Next, we build a prototype informed by the insights from the exploration stage. The final evaluation phase involves assessing the prototype's performance and analyzing the outcomes, leading to iterative refinements. Currently, the prototype is in its second iteration, where we have incorporated feedback and findings from the initial version to improve functionality and address previously identified limitations.</p><p>This prototype is called LLM-based multi-agent system with RAG, building on our last study <ref type="bibr" target="#b15">[16]</ref>. It is developed taking into consideration the techniques to improve trustworthiness in LLM discussed: multiple agents with specialised roles, multiple rounds of debate, structured conversation <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b5">6]</ref>. Moreover, the biggest difference with the first prototype is the inclusion of RAG and an user interface. Retrieval LLMs can significantly outperform standard LLMs without retrieval capabilities <ref type="bibr" target="#b5">[6]</ref>. The prototype can ground the source code generated with the legal documents provided.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evaluation</head><p>To evaluate the prototype, we performed tests using real-world AI incident cases from AI Incident Database. The incidents were represented as a project description and processed by the LLM-based multi-agent system with RAG to produce source code and ethical assessments grounded in regulatory documents like the EU AI Act, AI HLEG, and ISO/IEC 42001:2024. Through RAG, agents were able to retrieve and apply specific legal standards, referencing sections directly relevant to each AI incident, which helped ensure compliance and alignment with ethical requirements. There are three agents, two senior Python developers, and one AI ethics specialist.</p><p>A notable use case involved an AI recruitment tool project with a focus on bias mitigation, visible in Figures <ref type="figure" target="#fig_2">1 and 2</ref>. The project description provided is: Develop an AI-powered recruitment tool designed to screen resumes impartially, complying with the EU AI Act. The project aims to eliminate biases related to gender and language, improving fair evaluation of all applicants. The AI Ethics Specialist will guide the team in addressing ethical concerns and risk levels. The senior Python developers will utilize NLP to process resumes, referencing relevant EU AI Act guidelines.</p><p>In this instance, the system's retrieval mechanism identified applicable sections of the EU AI Act, improving fairness and transparency while guiding ethical decision-making. However, initial evaluations revealed issues: some generated code segments lacked precise citation details, and certain aspects were flagged as high-risk under the EU AI Act. Iterative refinements reduced these issues in the second version, achieving more accurate document references and greater alignment with ethical standards. 
This approach demonstrates the prototype's potential in developing AI solutions that are ethically grounded and contextually informed by legal frameworks.</p></div>
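One way to detect the citation-accuracy issues noted above is to cross-check every section an agent cites against the set of sections the retriever actually returned. The sketch below is a hypothetical checker (the citation format, section identifiers, and retrieved set are invented for illustration; the prototype's internal representation is not specified here).

```python
import re

# Hypothetical set of section identifiers actually returned by the
# retrieval step for a given project description.
RETRIEVED = {"EU AI Act Article 10", "EU AI Act Article 13"}

def check_citations(generated_text: str) -> list:
    """Return cited sections that were never retrieved -- candidate
    hallucinated references that need manual review."""
    cited = set(re.findall(r"EU AI Act Article \d+", generated_text))
    return sorted(cited - RETRIEVED)

missing = check_citations(
    "Per EU AI Act Article 10 (data governance) and EU AI Act Article 99, "
    "the tool logs all screening decisions."
)
```

A check like this only catches citations to sections outside the retrieved context; verifying that a cited section actually supports the claim attached to it would still require a human or a second verification agent.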
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Final Remarks and Discussion</head><p>This demonstration of a multi-agent LLM system enhanced by RAG highlights the potential for using trustworthy LLM-based tools in the development of ethically aligned AI systems. Our findings suggest that retrieval-augmented LLMs offer distinct advantages, improving both the trustworthiness and  specificity of the generated outputs when drawing from external ethical documents. By referencing these documents, the system helps practitioners create AI solutions that meet essential ethical and legal guidelines from the earliest stages.</p><p>While this prototype advances the operationalization of AI ethics, future iterations will focus on addressing remaining challenges such as further improving citation accuracy and enhancing the practical usability for developers in industry settings. Our ongoing research will involve more extensive testing scenarios and practitioner feedback, aiming to refine the tool's ability to balance ethical rigor with developer convenience. Additionally, we plan to open-source the prototype, contributing to the broader AI and software engineering community. This approach will enable further refinement and validation, bringing ethically aligned AI system development within reach for a wider audience.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>The 15th International Conference on Software Business (ICSOB 2024), November 18-20, 2024, Utrecht, The Netherlands * Corresponding author.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Screenshot of the Multi-agent System UI. The UI shows agents processing a prompt to develop an AI-powered recruitment tool, complying with EU AI Act.</figDesc><graphic coords="3,72.00,65.61,451.27,220.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Screenshot of the Multi-agent System UI</figDesc><graphic coords="3,72.00,331.01,451.27,222.01" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research was supported by Jane and Aatos Erkko Foundation through CONVERGENCE of Humans and Machines Project under grant No. 220025.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>During the preparation of this work, the authors utilized ChatGPT to assist in identifying and correcting writing errors, and enhancing clarity and conciseness. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bommasani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Soylu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yasunaga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Narayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2211.09110</idno>
		<title level="m">Holistic evaluation of language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Conversational ai for multi-agent communication in natural language: Research directions at the interaction lab</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lemon</surname></persName>
		</author>
		<idno type="DOI">10.3233/aic-220147</idno>
	</analytic>
	<monogr>
		<title level="j">AI Communications</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="295" to="308" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Improving factuality and reasoning in language models through multiagent debate</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">B</forename><surname>Tenenbaum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Mordatch</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2305.14325</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xiong</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2309.15025</idno>
		<title level="m">Large language model alignment: A survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-F</forename><surname>Ton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G H</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Klochkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Taufiq</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2308.05374</idno>
		<title level="m">Trustworthy LLMs: A survey and guideline for evaluating large language models&apos; alignment</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Lyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2401.05561</idno>
		<title level="m">TrustLLM: Trustworthiness in large language models</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Guide for artificial intelligence ethical requirements elicitation -RE4AI ethical guide</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A S</forename><surname>De Cerqueira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P D</forename><surname>Azevedo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A T</forename><surname>Leão</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">D</forename><surname>Canedo</surname></persName>
		</author>
		<ptr target="http://hdl.handle.net/10125/80015" />
	</analytic>
	<monogr>
		<title level="m">55th Hawaii International Conference on System Sciences, HICSS 2022, Virtual Event / Maui</title>
				<meeting><address><addrLine>, Hawaii, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ScholarSpace</publisher>
			<date type="published" when="2022">January 4-7, 2022. 2022</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">ECCOLA -A method for implementing ethically aligned AI systems</title>
		<author>
			<persName><forename type="first">V</forename><surname>Vakkuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kemell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jantunen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Halme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abrahamsson</surname></persName>
		</author>
		<idno type="DOI">10.1016/J.JSS.2021.111067</idno>
	</analytic>
	<monogr>
		<title level="j">J. Syst. Softw</title>
		<imprint>
			<biblScope unit="volume">182</biblScope>
			<biblScope unit="page">111067</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<orgName type="institution">European Commission</orgName>
		</author>
		<ptr target="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" />
		<title level="m">EU AI Act: First regulation on artificial intelligence</title>
				<imprint>
			<date type="published" when="2023-04-01">2023. 01 Apr 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K S</forename><surname>Yau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2308.00352</idno>
		<title level="m">MetaGPT: Meta programming for multi-agent collaborative framework</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">Q</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2308.08155</idno>
		<title level="m">AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Qian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Cong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.07924</idno>
		<title level="m">Communicative agents for software development</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">High-Level Expert Group on Artificial Intelligence</title>
		<author>
			<orgName type="institution">European Commission</orgName>
		</author>
		<ptr target="https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai" />
	</analytic>
	<monogr>
		<title level="m">Ethics guidelines for trustworthy AI</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<idno>ISO/IEC 42001:2024</idno>
		<ptr target="https://www.iso.org/standard/82827.html" />
		<title level="m">-Information Technology -Artificial Intelligence -Management System for Trustworthiness</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
		<respStmt>
			<orgName>International Organization for Standardization and International Electrotechnical Commission</orgName>
		</respStmt>
	</monogr>
	<note>standard jointly published by ISO and IEC</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Design science in information systems research</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Hevner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>March</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">MIS Q</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="75" to="105" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A S</forename><surname>De Cerqueira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Agbese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rousi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Xi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hamari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abrahamsson</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2411.08881</idno>
		<title level="m">Can we trust AI agents? An experimental study towards trustworthy LLM-based multi-agent systems for AI ethics</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
