<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Grounded Ethical AI: A Demonstrative Approach with RAG-Enhanced Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>José Antonio Siqueira de Cerqueira</string-name>
          <email>jose.siqueiradecerqueira@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ayman Asad Khan</string-name>
          <email>ayman.khan@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rebekah Rousi</string-name>
          <email>rebekah.rousi@uwasa.fi</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nannan Xi</string-name>
          <email>nannan.xi@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juho Hamari</string-name>
          <email>juho.hamari@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kai-Kristian Kemell</string-name>
          <email>kai-kristian.kemell@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pekka Abrahamsson</string-name>
          <email>pekka.abrahamsson@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Tampere University (TAU)</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Vaasa (UWASA)</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Large Language Models (LLMs) have become central in various fields, yet their trustworthiness remains a pressing concern, especially in developing ethically aligned AI-based systems. This paper presents a demonstration of an LLM-based multi-agent system incorporating Retrieval-Augmented Generation (RAG) to support developers in creating AI systems that align with legal and ethical guidelines. Leveraging documents like the EU AI Act, AI HLEG guidelines, and ISO/IEC 42001:2024, the prototype utilizes multiple agents with specialized roles, structured conversations, and debate rounds to enhance both ethical rigor and trustworthiness. Initial evaluations on real-world AI incidents reveal that this system can produce AI solutions adhering to specific ethical requirements, though further refinements are needed for citation accuracy and practical application. This demonstration illustrates the potential of RAG-enhanced LLMs to operationalize AI ethics and regulatory compliance within the development process, highlighting future directions for achieving more reliable and ethically robust AI solutions.</p>
      </abstract>
      <kwd-group>
        <kwd>AI ethics</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Trustworthiness</kwd>
        <kwd>AI4SE</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial Intelligence (AI) systems, particularly Large Language Models (LLMs), have become
indispensable tools across a wide range of applications. However, trustworthiness in LLMs remains a significant
concern [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], exacerbated by the probabilistic nature of LLMs and the vast amounts of data they are trained
on [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Issues such as bias, misinformation, and hallucinations in LLM outputs pose risks when
these models are employed in real-world scenarios, such as software engineering (LLM4SE) [
        <xref ref-type="bibr" rid="ref1 ref4 ref5 ref6">1, 4, 5, 6</xref>
        ].
Diverse stakeholders have produced ethical guidelines and principles to steer the development of
ethically aligned AI-based systems, but these efforts remain too abstract and high-level. As a result,
practitioners face several challenges when trying to operationalise AI ethical principles
during the software development life cycle [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. The European Union is moving forward with the EU AI
Act, a regulatory standard that companies will have to adhere to [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Applying
LLM4SE to the development of ethically aligned AI-based systems is therefore a promising research
direction, and one this study pursues. To the best of our knowledge, no existing studies in
the literature take a similar approach.
      </p>
      <p>
        Several techniques from the literature improve trustworthiness in LLMs. We use them to
implement a prototype: an LLM-based multi-agent system with Retrieval-Augmented Generation
(RAG). The techniques employed include structured conversations [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], agents with specialized
roles [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 12, 11</xref>
        ], multiple rounds of debate [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], providing human interaction [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and the use of RAG,
grounding the knowledge of the agents [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        This paper presents a demonstration of a prototype LLM-based multi-agent system with RAG,
designed to mitigate these challenges. The system incorporates Retrieval-Augmented Generation to
enhance the trustworthiness of the generated AI-based systems [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. By referencing external ethical
guidelines and standards such as the EU AI Act [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], AI HLEG [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and ISO/IEC 42001:2024 [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
documents, the prototype supports developers in building AI-based systems that align
with ethical and legal requirements.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. LLM-based Multi-Agent System with RAG</title>
      <p>
        The development and evaluation of the prototype follow the Design Science Research (DSR) method [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
This process begins with an exploration phase, where we establish research motivation, identify existing
gaps, and examine relevant literature for techniques to improve trustworthiness in LLMs. Next, we build
a prototype informed by the insights from the exploration stage. The final evaluation phase involves
assessing the prototype’s performance and analyzing the outcomes, leading to iterative refinements.
Currently, the prototype is in its second iteration, where we have incorporated feedback and findings
from the initial version to improve functionality and address previously identified limitations.
      </p>
      <p>
        The prototype, an LLM-based multi-agent system with RAG, builds on our previous study [16]. It
is developed around the trustworthiness-improving techniques discussed above:
multiple agents with specialised roles, multiple rounds of debate, and structured conversations [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref6">10, 12, 11, 6</xref>
        ].
The biggest difference from the first prototype is the inclusion of RAG and a user interface.
Retrieval-augmented LLMs can significantly outperform standard LLMs without retrieval capabilities [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The
prototype grounds the generated source code in the legal documents provided.
      </p>
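      <p>To make the interplay of these techniques concrete, the loop below sketches how specialised agents, debate rounds, and document retrieval fit together. This is a minimal illustration, not the prototype's actual code: all names (Agent, retrieve, run_debate) and guideline snippets are hypothetical, and a naive keyword-overlap ranking stands in for a real vector-store retriever.</p>
      <preformat>
```python
# Toy sketch of the structured-conversation loop: specialised agents,
# debate rounds, and per-round retrieval grounding. All names and
# snippets are illustrative, not the prototype's real API.

GUIDELINES = {
    "EU AI Act, Art. 10": "Training data shall be relevant, representative and free of bias",
    "AI HLEG, Req. 5": "Ensure diversity, non-discrimination and fairness",
    "ISO/IEC 42001": "Establish an AI management system with appropriate risk controls",
}

def retrieve(query, k=2):
    """Rank guideline snippets by word overlap with the query (toy RAG)."""
    words = set(query.lower().split())
    scored = sorted(
        GUIDELINES.items(),
        key=lambda item: len(words.intersection(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

class Agent:
    def __init__(self, role):
        self.role = role

    def speak(self, task, evidence):
        cites = ", ".join(ref for ref, _ in evidence)
        return f"[{self.role}] proposal for '{task}' grounded in: {cites}"

def run_debate(task, agents, rounds=2):
    transcript = []
    for _ in range(rounds):
        evidence = retrieve(task)   # ground every debate round in documents
        for agent in agents:
            transcript.append(agent.speak(task, evidence))
    return transcript

team = [Agent("Senior Python Developer"), Agent("AI Ethics Specialist")]
log = run_debate("mitigate bias in resume screening", team)
```
      </preformat>
      <p>In the real prototype the retriever would index the full legal documents and the agents would call an LLM, but the control flow (roles taking turns over several grounded rounds) follows this shape.</p>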
    </sec>
    <sec id="sec-3">
      <title>3. Evaluation</title>
      <p>To evaluate the prototype, we performed tests using real-world AI incident cases from the AI Incident
Database. Each incident was represented as a project description and processed by the LLM-based
multi-agent system with RAG to produce source code and ethical assessments grounded in regulatory
documents such as the EU AI Act, AI HLEG, and ISO/IEC 42001:2024. Through RAG, the agents retrieved
and applied specific legal standards, referencing sections directly relevant to each AI incident,
which helped ensure compliance and alignment with ethical requirements. The system comprises three
agents: two senior Python developers and one AI ethics specialist.</p>
      <p>A notable use case involved an AI recruitment tool project with a focus on bias mitigation, visible in
Figures 1 and 2. The project description provided is: "Develop an AI-powered recruitment tool designed to
screen resumes impartially, complying with the EU AI Act. The project aims to eliminate biases related to
gender and language, improving fair evaluation of all applicants. The AI Ethics Specialist will guide the
team in addressing ethical concerns and risk levels. The senior Python developers will utilize NLP to process
resumes, referencing relevant EU AI Act guidelines."</p>
      <p>In this instance, the system’s retrieval mechanism identified applicable sections of the EU AI Act,
improving fairness and transparency while guiding ethical decision-making. However, initial evaluations
revealed issues: some generated code segments lacked precise citation details, and certain aspects were
flagged as high-risk under the EU AI Act. Iterative refinements reduced these issues in the second
version, achieving more accurate document references and greater alignment with ethical standards.
This approach demonstrates the prototype’s potential in developing AI solutions that are ethically
grounded and contextually informed by legal frameworks.</p>
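      <p>As an illustration of the kind of risk screening involved, the sketch below flags a project description against simplified high-risk categories, loosely inspired by EU AI Act Annex III. The category keywords are placeholders rather than the Act's legal text, and the function is hypothetical, not part of the prototype.</p>
      <preformat>
```python
# Illustrative sketch only: crude keyword pre-screening of a project
# description against simplified high-risk categories, loosely inspired
# by EU AI Act Annex III. Keyword lists are placeholders, not legal text.

HIGH_RISK_CATEGORIES = {
    "employment": ["recruitment", "resume", "hiring", "promotion"],
    "education": ["exam", "admission", "grading"],
    "law enforcement": ["policing", "criminal", "surveillance"],
}

def assess_risk(description):
    """Return a toy risk report listing the matched categories."""
    text = description.lower()
    hits = [
        category
        for category, keywords in HIGH_RISK_CATEGORIES.items()
        if any(word in text for word in keywords)
    ]
    return {"risk": "high" if hits else "unclassified", "categories": hits}

project = ("Develop an AI-powered recruitment tool designed to screen "
           "resumes impartially, complying with the EU AI Act.")
report = assess_risk(project)
# report["risk"] is "high": recruitment falls under the employment category
```
      </preformat>
      <p>A production system would instead retrieve and cite the Act's actual Annex III text through the RAG pipeline, but the principle of classifying the use case before generating code is the same.</p>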
    </sec>
    <sec id="sec-4">
      <title>4. Final Remarks and Discussion</title>
      <p>This demonstration of a multi-agent LLM system enhanced by RAG highlights the potential for using
trustworthy LLM-based tools in the development of ethically aligned AI systems. Our findings suggest
that retrieval-augmented LLMs offer distinct advantages, improving both the trustworthiness and
specificity of the generated outputs when drawing from external ethical documents. By referencing
these documents, the system helps practitioners create AI solutions that meet essential ethical and legal
guidelines from the earliest stages.</p>
      <p>While this prototype advances the operationalization of AI ethics, future iterations will focus on
addressing remaining challenges such as further improving citation accuracy and enhancing the practical
usability for developers in industry settings. Our ongoing research will involve more extensive testing
scenarios and practitioner feedback, aiming to refine the tool’s ability to balance ethical rigor with
developer convenience. Additionally, we plan to open-source the prototype, contributing to the broader
AI and software engineering community. This approach will enable further refinement and validation,
bringing ethically aligned AI system development within reach for a wider audience.</p>
      <p>This research was supported by the Jane and Aatos Erkko Foundation through the CONVERGENCE of Humans
and Machines project under grant No. 220025.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors utilized ChatGPT to assist in identifying and correcting
writing errors, and enhancing clarity and conciseness. After using this tool, the authors reviewed and
edited the content as needed and take full responsibility for the content of the published article.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bommasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tsipras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Soylu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yasunaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          , et al.,
          <article-title>Holistic evaluation of language models</article-title>
          ,
          <source>arXiv preprint arXiv:2211.09110</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Lemon</surname>
          </string-name>
          ,
          <article-title>Conversational AI for multi-agent communication in natural language: Research directions at the Interaction Lab</article-title>
          ,
          <source>AI Communications</source>
          <volume>35</volume>
          (
          <year>2022</year>
          )
          <fpage>295</fpage>
          -
          <lpage>308</lpage>
          . doi:10.3233/aic-220147.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Tenenbaum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mordatch</surname>
          </string-name>
          ,
          <article-title>Improving factuality and reasoning in language models through multiagent debate</article-title>
          ,
          <source>arXiv preprint arXiv:2305.14325</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <article-title>Large language model alignment: A survey</article-title>
          ,
          <source>arXiv preprint arXiv:2309.15025</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-F.</given-names>
            <surname>Ton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , R. Guo, H. Cheng, Y. Klochkov,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Taufiq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment</article-title>
          ,
          <source>arXiv preprint arXiv:2308.05374</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          , et al.,
          <article-title>TrustLLM: Trustworthiness in large language models</article-title>
          ,
          <source>arXiv preprint arXiv:2401.05561</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J. A. S.</given-names>
            <surname>de Cerqueira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P. D.</given-names>
            <surname>Azevedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A. T.</given-names>
            <surname>Leão</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Canedo</surname>
          </string-name>
          ,
          <article-title>Guide for artificial intelligence ethical requirements elicitation - RE4AI ethical guide</article-title>
          ,
          in:
          <source>55th Hawaii International Conference on System Sciences, HICSS 2022, Virtual Event / Maui, Hawaii, USA, January 4-7, 2022</source>
          , ScholarSpace,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . URL: http://hdl.handle.net/10125/80015.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jantunen</surname>
          </string-name>
          , E. Halme, P. Abrahamsson,
          <article-title>ECCOLA - A method for implementing ethically aligned AI systems</article-title>
          ,
          <source>J. Syst. Softw.</source>
          <volume>182</volume>
          (
          <year>2021</year>
          )
          <fpage>111067</fpage>
          . doi:10.1016/j.jss.2021.111067.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          European Commission,
          <source>EU AI Act: First regulation on artificial intelligence</source>
          , https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence,
          <year>2023</year>
          . Accessed 01 Apr
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Chen</surname>
          </string-name>
          , Y. Cheng, C. Zhang,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K. S.</given-names>
            <surname>Yau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. H.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiao</surname>
          </string-name>
          , C. Wu,
          <article-title>MetaGPT: Meta programming for a multi-agent collaborative framework</article-title>
          ,
          <source>arXiv preprint arXiv:2308.00352</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          , G. Bansal,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , E. Zhu,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , C. Wang,
          <article-title>AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework</article-title>
          ,
          <source>arXiv preprint arXiv:2308.08155</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Communicative agents for software development</article-title>
          ,
          <source>arXiv preprint arXiv:2307.07924</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          European Commission High-Level Expert Group on Artificial Intelligence,
          <source>Ethics guidelines for trustworthy AI</source>
          ,
          <year>2019</year>
          . https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <source>ISO/IEC 42001:2024 - Information technology - Artificial intelligence - Management system</source>
          ,
          <year>2024</year>
          . URL: https://www.iso.org/standard/82827.html. Standard published by the International Organization for Standardization and the International Electrotechnical Commission.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Hevner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. T.</given-names>
            <surname>March</surname>
          </string-name>
          , J. Park, S. Ram,
          <article-title>Design science in information systems research</article-title>
          , MIS Q.
          <volume>28</volume>
          (
          <year>2004</year>
          )
          <fpage>75</fpage>
          -
          <lpage>105</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J. A. S.</given-names>
            <surname>de Cerqueira</surname>
          </string-name>
          , M. Agbese, R. Rousi, N. Xi, J. Hamari, P. Abrahamsson,
          <article-title>Can we trust AI agents? An experimental study towards trustworthy LLM-based multi-agent systems for AI ethics</article-title>
          ,
          <source>arXiv preprint arXiv:2411.08881</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>