<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Operationalizing Evaluation in Design Science Research: A Structured Framework for Artifact Assessment in C2-System Development</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jan Lundberg</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Stockholm University</institution>
          ,
          <addr-line>Borgarfjordsgatan 12, 164 55 Kista</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Swedish Defence University</institution>
          ,
          <addr-line>Drottning Kristinas väg 37, 114 28 Stockholm</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Emerging technologies such as artificial intelligence (AI) and autonomous systems challenge the design of military command-and-control systems (C2-systems). While architectural frameworks like NAF and DoDAF provide structural consistency, they offer limited methodological guidance for system development in evolving socio-technical environments. To address this challenge, this PhD project aims to develop a new framework for C2-system design. The framework is based on a concept and goal model and consists of Method Chunks (MCs) and method navigation guidance. The developed MCs have reached a level of maturity at which they need to be evaluated. Hence, this study proposes a structured evaluation process that targets not only the correctness and alignment of these artifacts but also their practical utility in C2-system design. The proposed evaluation design is operationalized through a case study simulating the integration of drone-based Intelligence, Surveillance, and Reconnaissance (ISR) capabilities into a legacy C2-system. This setup enables a holistic analysis of the MCs and navigation guidance in a realistic, simulated C2 context. The study contributes to the operationalization of evaluation in DSR and offers methodological insights for improving military C2-system design.</p>
      </abstract>
      <kwd-group>
        <kwd>Evaluation</kwd>
        <kwd>Design Science Research (DSR)</kwd>
        <kwd>NATO Architecture Framework (NAF)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Emerging technologies such as AI, autonomous systems, and data-driven decision support are
transforming the design of military command-and-control systems (C2-systems) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, while
architectural frameworks like the NATO Architecture Framework (NAF) and the Department of Defense
Architecture Framework (DoDAF) provide structural consistency, they offer limited methodological
support for system design in complex, evolving environments. In particular, they
lack practical method guidance and tools for addressing socio-technical challenges and technology
integration [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        This study builds on prior research that developed an initial concept- and goal-oriented framework
to support C2-system development [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This includes MCs [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and a MAP-based [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
model for guiding the method process (navigation guidance). Together, the MCs and navigation guidance
aim to help designers select and apply design strategies grounded in factors such as
stakeholder goals, project goals and requirements, and emerging technologies. These artifacts are
the product of a Design Science Research (DSR) [11] project that combines enterprise modeling [12], with
decomposition of high-level goals into requirements, the construction of MCs following Situational Method
Engineering (SME) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [13], and validation through stakeholder engagement [14].
      </p>
      <p>Despite the focus on design and evaluation cycles in DSR, scholars have emphasized that evaluation
practices often lack coherence and have called for evaluation strategies that are context-driven and
applicable to iterative, real-world scenarios [15], [16], [17], [18], [19].</p>
      <p>The framework under development has now reached a level of maturity that can be regarded as
the first iteration, and its parts have been validated with stakeholders. The next step in this development
cycle is evaluation. Hence, this study addresses that gap by operationalizing evaluation as a structured,
multi-dimensional process, using a case study-based evaluation design in which stakeholders apply
the MCs and the navigation guidance to a simulated capability development scenario. The focus lies
on assessing not just the correctness and relevance of the artifacts, but also their usability and utility in
supporting real-world design decisions. The goal of this paper is to present the evaluation design.</p>
      <p>The remainder of this article is organized as follows: Section 2 provides the research foundation and
introduces the challenge of artifact evaluation. Section 3 presents the proposed evaluation approach.
Section 4 outlines the expected contributions, and Section 5 concludes the study and outlines directions
for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Current Research and Challenge</title>
      <sec id="sec-2-1">
        <title>2.1. Current Research</title>
        <p>Existing architectural frameworks that specifically support military capability development, such
as the NATO Architecture Framework (NAF) [20] and the Department of Defense Architecture Framework
(DoDAF) [21], provide structural consistency in the design process, but they lack guidance on how to
adapt to emerging technology and evolving socio-technical and tactical realities. This PhD research
project addresses these shortcomings by proposing a conceptual framework grounded in Design Science
Research (DSR) [22].</p>
        <p>
          The research project has progressed through seven articles related to the DSR phases outlined in [23].
Figure 1 shows an overview of the research process and related publications. In the first article [24],
C2-system development is framed as a system-of-systems with socio-technical challenges, highlighting the
need to integrate new technology into legacy systems. The second article [25] clarifies that military
capability is a socio-technical phenomenon, underpinning the need for reasoning beyond technical
systems. The third article [26] presents an interview-based study identifying stakeholder needs and
challenges and outlines high-level goals that serve as inputs for later modelling activities. Building on
this, the fourth article [27] introduces a goal-oriented modeling approach using the For Enterprise Modeling
(4EM) method [12], combining a concepts model and a stakeholder-driven goal model. The fifth article
[28] deepens this reasoning through further goal decomposition and refined conceptual structures. This
lays the foundation for linking high-level design goals to corresponding MCs, enabling traceability
and support of design decisions. The sixth article [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] employs focus group-based validation to assess
coherence and practical relevance, further refining the models. Finally, the seventh article (not yet
published) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] proposes a design of reusable MCs following Situational Method Engineering (SME) [29],
[30], [13] principles, as well as MAP-based navigation guidance, further developing the framework.
        </p>
        <p>
          The envisioned artifact consists of (i) a set of Method Chunks (MCs), (ii) MAP-based
Navigation Guidance, and (iii) an underlying goal and concept model, validated in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. MCs are
constructed using SME and capture reusable, domain-specific steps to support C2-system design.
The Navigation Guidance operates at a higher abstraction level, providing process support by assisting
designers in selecting and sequencing MCs based on specific design intentions. The framework builds on 4EM by
reusing its modeling structures while specializing and tailoring them for military capability development
in a C2-system context.
        </p>
        <p>This study contributes to the overarching PhD project by proposing a structured evaluation process,
tailored to assess the usability, fit, and effectiveness of MCs in real-world or simulated C2-system design
settings. While this study focuses on designing the evaluation of the MCs and the navigation guidance, it
potentially contributes more broadly to ongoing discussions on evaluation practices in DSR.</p>
        <p>Several researchers have highlighted that evaluation in DSR occasionally lacks methodological
coherence, particularly when applied to complex or methodological artifacts [15], [16]. To address this,
recent work emphasizes the need for evaluation strategies that (i) are responsive to artifact type and
context [15], [31], (ii) consider not only utility but also practical integration [16], and (iii) can be adapted
and applied iteratively in real-world settings [18], [17]. These insights can be considered requirements for
evaluation design, and they strengthen the relevance of this study by operationalizing such principles.</p>
        <p>
          At this stage, the research has generated several insights that inform the design of the evaluation and the
continued development of the framework. Feedback from stakeholders confirmed that the goal and
concept models are relevant, comprehensible, and useful for structuring discussions about capability
development [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This feedback also indicated that the decomposition from high-level goals to sub-goals
and related requirements provides a meaningful foundation for linking goals to method components.
        </p>
        <p>
          The initial work on Method Chunks (MCs) shows that the structure of intentions, strategies, inputs,
and outputs can be expressed coherently and aligned with the underlying goal model. Although the full
set of MCs is still under development, the early versions outlined in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] suggest that the approach is
suitable for supporting design activities, and that the overall method framework becomes increasingly
traceable as the remaining MCs are constructed.
        </p>
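        <p>To make the chunk structure described above concrete, the following minimal sketch encodes an MC and a goal-traceability check. This is a hypothetical illustration only: the field names, the example goals, and the chunk content are assumptions for this sketch, not the published MC template.</p>

```python
from dataclasses import dataclass, field

@dataclass
class MethodChunk:
    """One reusable method component (field names are illustrative)."""
    name: str
    intention: str      # the design intention the chunk serves
    strategy: str       # the strategy for achieving that intention
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    linked_goals: list = field(default_factory=list)  # traceability to the goal model

def is_traceable(chunk, goal_model):
    """A chunk is traceable if every goal it links to exists in the goal model."""
    return all(goal in goal_model for goal in chunk.linked_goals)

# Hypothetical example: a chunk supporting ISR integration into a legacy C2-system
goal_model = {"G1: integrate emerging technology", "G1.1: ingest ISR data"}
mc1 = MethodChunk(
    name="MC1",
    intention="Integrate drone-based ISR capability",
    strategy="Incremental integration into the legacy C2-system",
    inputs=["legacy system architecture", "stakeholder goals"],
    outputs=["integration design decisions"],
    linked_goals=["G1.1: ingest ISR data"],
)
```

        <p>In this encoding, the traceability property discussed above reduces to a membership check between a chunk and the goal model, which is what makes the framework increasingly traceable as chunks are added.</p>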
        <p>Taken together, these preliminary results indicate that the framework is approaching a level of
maturity where a structured evaluation, focusing on practical utility, will be feasible once the remaining
MCs have been fully constructed.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Design Science Research (DSR)</title>
        <p>Design Science Research (DSR) is a research framework that focuses on the construction and
improvement of artifacts intended to solve real-world problems. Central to DSR is its iterative approach, in which
the research process cycles through the phases of (1) explicating the problem, (2) defining requirements, (3) designing
and developing the artifact, (4) demonstrating the artifact, and (5) evaluating the artifact.</p>
        <p>Evaluation should not only be understood as a quality control mechanism but also as a source of
insight for further artifact development, and the cyclical process enables continuous refinements of the
artifact. Without rigorous evaluation, DSR risks producing artifacts that may be relevant in design but
misaligned within a real-world application context.</p>
        <p>In this study, the artifact under evaluation consists of a set of MCs and the navigation guidance
designed to support architectural reasoning in military C2-system development. The DSR framework
guides not only the construction of these MCs but also the planned demonstration and evaluation
approach.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. The Challenge</title>
        <p>This study highlights the challenge of assessing design frameworks, and specifically MCs, not only for
modeling-method soundness (verification) and alignment with stakeholder understanding (validation),
but also for the framework’s practical utility in a real-world military context (evaluation). While evaluation
in DSR is often framed as a distinct phase [11], [23], the authors of [32] highlight the value of
distinguishing between verification, validation, and evaluation (VVE), as each addresses different yet
interdependent aspects of artifact quality.</p>
        <p>Verification addresses whether the artifact has been built according to its specification and whether it is
consistent with existing theories and design principles. This includes structural coherence and adherence
to design rules. Validation assesses whether the artifact leads to outputs that are relevant and meaningful
in real-world contexts, i.e., whether the output fits the stakeholders’ needs and requirements. Evaluation deals
with practical utility and explores whether the artifact is effective and worthwhile to use in practice.
This includes artifact efficiency and potential added value relative to existing artifacts.</p>
        <p>This study adopts the VVE framework as a structured lens for organizing the assessment of the MCs and
the navigation guidance. MCs are constructed to support reasoning, decision-making, and architectural
alignment in C2-system development. However, their effectiveness cannot be assessed easily: their
practical value depends on how they are experienced by stakeholders, and on whether they are understandable,
actionable, and seamlessly integrated into existing frameworks.</p>
        <p>The challenge, therefore, lies in designing a VVE-informed assessment process that addresses all
three quality dimensions. MCs must be verified for internal consistency, validated against stakeholder
expectations, and evaluated for their capacity to support an efficient design process. This is particularly
important in C2-system development, where supporting frameworks and design tools must operate
under time pressure, across socio-technical boundaries, and in environments characterized by rapidly
evolving technologies.</p>
        <p>The envisioned framework aims to (i) strengthen goal-driven architectural reasoning, (ii) ensure
traceability from high-level goals to MC selection and design outputs, (iii) support flexible,
intention-driven method enactment, and (iv), as a consequence of (i)–(iii), provide structured guidance for integrating emerging
technologies into military C2-systems.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Approach</title>
      <p>To operationalize the evaluation of the MCs, this study proposes a case study research strategy anchored
in VVE. The focus is not only on the MCs but also on the navigation guidance, i.e., the previously designed
MAP-based model that supports the selection of MCs in C2-system design scenarios. Since the navigation
guidance guides designers in selecting and sequencing MCs based on design intentions, it is
important to evaluate both its correctness and its utility. In this study, both the MCs and the navigation guidance
are treated as evaluation subjects across the three VVE dimensions: Verification, Validation, and
Evaluation.</p>
      <p>The evaluation consists of two phases. The first is an assessment targeting the navigation guidance. This
phase will engage domain experts to assess whether the navigation guidance structure is logically
complete and accurately reflects the intended design pathways, i.e., that the chunks are appropriately
linked to each other and that designers can intuitively traverse the model without confusion. This
feedback allows refinement of the Navigation Guidance before its use in the more demanding case
study setting. See the Navigation Guidance in Figure 1.</p>
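      <p>The structural properties checked in this first phase, complete linkage and confusion-free traversal, can be sketched as simple checks over a strategy-labelled graph of intentions. The encoding below is a simplified, hypothetical rendering of a MAP model; the intentions and strategies named here are invented for illustration and are not taken from the actual Navigation Guidance.</p>

```python
# A MAP model as an adjacency structure:
# (source intention, target intention) -> strategies connecting them.
# All intentions and strategies below are hypothetical.
MAP_EDGES = {
    ("Start", "Elicit stakeholder goals"): ["interview-based strategy"],
    ("Elicit stakeholder goals", "Select method chunks"): ["goal-decomposition strategy"],
    ("Select method chunks", "Stop"): ["chunk-assembly strategy"],
}

def next_steps(current_intention):
    """List reachable intentions and the strategies that lead to them."""
    return sorted(
        (target, strategy)
        for (source, target), strategies in MAP_EDGES.items()
        for strategy in strategies
        if source == current_intention
    )

def is_complete(edges):
    """Logical completeness: every intention except Stop has an outgoing edge."""
    sources = {s for (s, _) in edges}
    targets = {t for (_, t) in edges}
    return all(i in sources for i in (sources | targets) - {"Stop"})
```

      <p>Expert assessment in this phase corresponds to asking, for each intention, whether the result of a query like next_steps matches the design pathways the experts expect, and whether completeness checks of this kind hold for the whole model.</p>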
      <p>The planned evaluation will be conducted as a case study. While case studies preferably take place
in naturalistic settings, conducting one in the context of C2-system development presents notable
challenges. The integration of real-world sensors, data flows, decision-making tools, and effectors,
spanning all five military domains, is highly complex and not feasible within this research setup.
Although other fields must also integrate sensors, data flows, and decision-support tools, military
C2-system development includes distinct challenges such as multi-domain coordination (land, air,
maritime, cyber, space), adversarial and contested environments (jamming, deception, cyber-attacks),
information-classification boundaries, and the integration of lethal and non-lethal effectors. These
challenges necessitate dedicated method support.</p>
      <p>To address this, the scenario used in the case study will be simulated rather than drawn from
a real-world setting, and designed to realistically reflect relevant design challenges. The evaluation uses
a simulated integration of drone-based ISR into a legacy C2-system. While simplified, the scenario reproduces
essential characteristics such as cross-domain information flows, contested-environment conditions, time-sensitive
decision cycles, and coordination between humans and autonomous systems. This ensures that the scenario
remains realistic for capability-development activities while still feasible for controlled evaluation of the
MCs and Navigation Guidance. The scenario will center on relevant capability development challenges,
such as mission planning and execution and the integration of new and emerging technologies, allowing
the MCs and the navigation guidance to be evaluated under authentic conditions. Owing to the complexity of
actual C2-system environments, the evaluation will focus on a specific process or parts of the system
rather than simulating full system behaviour.</p>
      <p>In the scenario, the plan is to use the Navigation Guidance to identify and apply the relevant
MCs. Participants will be asked to perform activities such as analysing and refining the method chunks
(e.g., see MC1 in Figure 3).</p>
      <p>Throughout the session, empirical data are collected mainly through post-task interviews and
feedback surveys. These are analysed to assess the three dimensions: (1) Verification: are the MCs
and the Navigation Guidance accurately constructed? (2) Validation: do the MCs and the Navigation
Guidance align with stakeholder needs and design logic in the given scenario? (3) Evaluation: are
the MCs and the Navigation Guidance useful, usable, and effective in supporting designers’ reasoning
and structuring their work?</p>
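      <p>As a minimal sketch of this analysis step, collected feedback items could be tagged with one of the three VVE dimensions during coding and then grouped for qualitative analysis. The item texts and tags below are invented purely for illustration.</p>

```python
from collections import defaultdict

# Hypothetical coded feedback items: (VVE dimension, note).
feedback = [
    ("verification", "MC output did not match the declared input of the next MC"),
    ("validation", "Intention wording matched the capability-development vocabulary"),
    ("evaluation", "Navigation guidance shortened the path to a first design decision"),
]

def group_by_dimension(items):
    """Group tagged feedback items by VVE dimension for later analysis."""
    grouped = defaultdict(list)
    for dimension, note in items:
        grouped[dimension].append(note)
    return dict(grouped)
```

      <p>Grouping the material this way keeps verification, validation, and evaluation findings separable, so that each can feed back into the corresponding refinement activity.</p>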
      <p>VVE is consequently holistic, assessing not just whether the artifacts work but
whether they are worth using in a real design setting. Table 1 below summarizes how each VVE
dimension applies to both the MCs and the Navigation Guidance.</p>
      <p>Findings from these VVE activities are used to refine not only the MCs and the Navigation Guidance but,
where possible, to loop back and refine elements of the goal model and concept model. For example, if a
high-level goal, expressed as an intention in the Navigation Guidance, is misunderstood or misapplied,
this indicates a need to adjust its phrasing or content. In this way, the VVE process does not end here; rather, it
drives an iterative loop of improvement of the framework.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Contribution Vision and Next Steps</title>
      <p>On a practical level, this study presents an evaluation setup based on a relevant capability development
scenario, namely, the integration of an ISR capability into a C2-system. The scenario provides a test
bed to explore whether the MCs and the Navigation Guidance help designers reason through problems and make
informed design decisions.</p>
      <p>The findings aim not only to refine the envisioned framework but also to contribute insights into
how structured method guidance can be embedded in military C2-system design practices, particularly
in contexts aligned with capability development frameworks such as NAF.</p>
      <p>The guiding questions for the MCs are: Are the method chunks aligned with stakeholder needs and
design challenges? Do the method chunks’ outputs address real-world design intentions? Are the method
chunks understandable and usable, and are the outcomes relevant? Do they improve clarity and
decision-making in the specific case context? For the Navigation Guidance: Are the goal definitions
intuitive and semantically aligned with operational logic? Do users interpret navigation paths as
intended? Does the Navigation Guidance MAP help users navigate, i.e., find appropriate chunks? Does
it support flexible navigation and goal reassessment?</p>
      <p>This study proposes a structured approach to operationalizing evaluation within DSR, with a particular
focus on assessing the MCs and their supporting Navigation Guidance. The proposed VVE strategy
targets the quality of the MCs, aiming to evaluate their usefulness and usability in military C2-system
development.</p>
      <p>To bridge the gap between theory and practice, the evaluation extends the VVE perspective to include
whether the artifact (i.e., the MCs and Navigation Guidance) is worth using in realistic design scenarios.
The planned case study, centered on the integration of a drone-based ISR capability into an existing
C2-system, offers a relevant setting for this. It enables the use of both the MCs and Navigation Guidance
under simulated operational conditions, with active stakeholder engagement in the design process. The
evaluation aims to determine whether the artifacts help designers reason effectively and support their
design decisions.</p>
      <p>Future work will focus on executing the planned case study and refining the framework. This includes
revisiting earlier design artifacts, such as the Goal- and Concept Models, the MCs, and the Navigation
Guidance.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgments</title>
      <p>This PhD thesis project is a collaboration between Swedish Defence University and Stockholm University.
Supervisors of the project are Janis Stirna (Stockholm University) and Kent Andersson (Swedish Defence
University).</p>
    </sec>
    <sec id="sec-6">
      <title>6. Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used Avidnote (https://avidnote.com/) for grammar and
spelling checking. After using Avidnote, the author reviewed and edited the content as needed and takes
full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-7">
      <title>7. References</title>
      <p>11. Hevner, March, Park, and Ram, “Design Science in Information Systems Research,” MIS Quarterly,
vol. 28, no. 1, p. 75, 2004, doi: 10.2307/25148625.
12. K. Sandkuhl, J. Stirna, A. Persson, and M. Wißotzki, “Enterprise Modeling Tackling Business</p>
      <p>Challenges with the 4EM Method,” 2014. doi: 10.1007/978-3-662-43725-4.
13. B. Henderson-Sellers, J. Ralyté, P. J. Ågerfalk, and M. Rossi, “Situational method engineering,”</p>
      <p>Situational Method Engineering, pp. 1–310, Jan. 2014, doi: 10.1007/978-3-642-41467-1.
14. J. Lundberg, K. Andersson, and J. Stirna, “Enhancing C2-Systems: Validation of Goal and Concept
Models with Stakeholders,” in Enterprise, Business-Process and Information Systems Modeling.
BPMDS EMMSAD 2025, R. Guizzardi, A. Sturm, L. Pufahl, and H. van der Aa, Eds., Vienna:
Springer, Jun. 2025.
15. N. Prat, I. Comyn-Wattiau, and J. Akoka, “Artifact evaluation in information systems design-science
research - A holistic view,” PACIS 2014 Proceedings, Jan. 2014, Accessed: Oct. 01, 2025.
[Online]. Available: https://aisel.aisnet.org/pacis2014/23
16. J. Müller, S. Wurth, T. Schäfer, and C. Leyh, “Toward a Framework for Determining Methods of
Evaluation in Design Science Research,” Annals of Computer Science and Intelligence Systems,
no. 2024, pp. 231–236, 2024, doi: 10.15439/2024F7208.
17. S. Mdletshe, O. S. Motshweneng, M. Oliveira, and B. Twala, “Design science research application
in medical radiation science education: A case study on the evaluation of a developed artifact,” J
Med Imaging Radiat Sci, vol. 54, no. 1, pp. 206–214, Mar. 2023, doi: 10.1016/j.jmir.2022.11.007.
18. J. da A. Moutinho, G. Fernandes, and R. Rabechini, “Evaluation in design science: A framework
to support project studies in the context of University Research Centres,” Eval Program Plann,
vol. 102, p. 102366, Feb. 2024, doi: 10.1016/J.EVALPROGPLAN.2023.102366.
19. J. Venable, J. Pries-Heje, and R. Baskerville, “A comprehensive framework for evaluation in
design science research,” Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7286 LNCS, pp. 423–438, 2012,
doi: 10.1007/978-3-642-29863-9_31.
20. Nato, “Nato Architecture Framework Version 4,” 2020.
21. “DODAF - DOD Architecture Framework Version 2.02.” Accessed: Jun. 13, 2024. [Online].</p>
      <p>Available: https://dodcio.defense.gov/library/dod-architecture-framework/
22. A. Hevner and S. Chatterjee, “Design Research in Information Systems,” vol. 22, 2010, doi:
10.1007/978-1-4419-5653-8.
23. P. Johannesson and E. Perjons, “An Introduction to Design Science,” 2014. doi:
10.1007/978-3-319-10632-8.
24. J. Lundberg, “Towards a Conceptual Framework for System of Systems,” in CEUR Workshop
Proceedings, 2023. Accessed: May 01, 2024. [Online]. Available:
https://ceur-ws.org/Vol-3407/paper3.pdf
25. K. Andersson, J. Lundberg, and J. Stirna, “Emerging technology calls for a systemic view on
military capability,” G. Poels, J. Van Riel, and R. Fernandes Calhau, Eds., Vienna: CEUR-WS.org,
2023. Accessed: Apr. 04, 2024. [Online]. Available: https://ceur-ws.org/Vol-3645/facete2.pdf
26. J. Lundberg, J. Stirna, and K. Andersson, “Designing Military Command and Control Systems as
System of Systems – An Analysis of Stakeholder Needs and Challenges,” Advanced Information
Systems Engineering, pp. 336–351, Jun. 2024, doi: 10.1007/978-3-031-61057-8_20.
27. J. Lundberg, J. Stirna, J. Zdravkovic, and K. Andersson, “Beyond Technology: Goal-Oriented
Analysis for Integrating Emerging Technologies into Military Command and Control Systems,”
in 29th International Command and Control Research and Technology Symposium, London, Sep.
2024.
28. J. Lundberg, S. Hacks, and K. Andersson, “Refinement of a Conceptual Model of a military C2-system
through Low-Level Goal Decomposition,” in Companion Proceedings of the 17th IFIP WG
8.1 Working Conference on the Practice of Enterprise Modeling Forum, 2024. Accessed: Dec. 04,
2024. [Online]. Available: https://ceur-ws.org/Vol-3855/forum9.pdf
29. S. Brinkkemper, “Method engineering: engineering of information systems development
methods and tools,” Inf Softw Technol, vol. 38, no. 4, pp. 275–280, Jan. 1996, doi:
10.1016/0950-5849(95)01059-9.
30. J. Ralyté and C. Rolland, “An approach for method reengineering,” Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 2224, pp. 471–484, 2001, doi: 10.1007/3-540-45581-7_35.
31. J. Venable, J. Pries-Heje, and R. Baskerville, “A comprehensive framework for evaluation in
design science research,” Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7286 LNCS, pp. 423–438, 2012,
doi: 10.1007/978-3-642-29863-9_31.
32. J. Ralyté, G. Koutsopoulos, and J. Stirna, “Verification, validation, and evaluation of
modeling methods: experiences and recommendations,” Software and Systems Modeling, 2025, doi:
10.1007/S10270-025-01304-2.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. J. Weingarten, “Developing future capabilities: Robotics and autonomous systems,” 2023. Accessed: Jul. 22, 2024. [Online]. Available: https://www.nato-pa.int/document/2023-robotics-andautonomous-systems-report-weingarten-034-stctts</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. D. S. Lange, P. Verbancsics, R. S. Gutzwiller, J. Reeder, and C. Sarles, “Command and Control of Teams of Autonomous Systems,” Berlin, Heidelberg: Springer, 2012, pp. 81–93. doi: 10.1007/978-3-642-34059-8_4.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. P. Scharre, Army of None: Autonomous Weapons and the Future of War. W. W. Norton &amp; Company, 2018.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. S. Kotusev, “A Comparison of the Top Four Enterprise Architecture Frameworks,” 2021. Accessed: Jun. 27, 2025. [Online]. Available: https://www.bcs.org/articles-opinion-and-research/acomparison-of-the-top-four-enterprise-architecture-frameworks/</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>J.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Andersson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ralyté</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Stirna</surname>
          </string-name>
          , “
          <article-title>Supporting Future Military Command- and Control (C2) System Design: Defining Method Components to Enhance the Existing Development Framework</article-title>
          ,” in (To appear in) ICCRTS25 - International Command and Control Research &amp; Technology Symposium, Stockholm,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Andersson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Stirna</surname>
          </string-name>
          , “
          <article-title>Enhancing C2-Systems: Validation of Goal and Concept Models with Stakeholders</article-title>
          ,”
          <source>Lecture Notes in Business Information Processing</source>
          , vol.
          <volume>558</volume>
          LNBIP, pp.
          <fpage>353</fpage>
          -
          <lpage>367</lpage>
          ,
          <year>2025</year>
          , doi: 10.1007/978-3-031-95397-2_22.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J.</given-names>
            <surname>Ralyté</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Backlund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kühn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Jeusfeld</surname>
          </string-name>
          , “
          <article-title>Method chunks for interoperability</article-title>
          ,”
          <source>Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          , vol.
          <volume>4215</volume>
          LNCS, pp.
          <fpage>339</fpage>
          -
          <lpage>353</lpage>
          ,
          <year>2006</year>
          , doi: 10.1007/11901181_26.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>B.</given-names>
            <surname>Henderson-Sellers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gonzalez-Perez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ralyté</surname>
          </string-name>
          , “
          <article-title>Comparison of method chunks and method fragments for situational method engineering</article-title>
          ,”
          <source>Proceedings of the Australian Software Engineering Conference</source>
          , ASWEC, pp.
          <fpage>479</fpage>
          -
          <lpage>488</lpage>
          ,
          <year>2008</year>
          , doi: 10.1109/ASWEC.2008.4483237.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>C.</given-names>
            <surname>Rolland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Prakash</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Benjamen</surname>
          </string-name>
          , “
          <article-title>A multi-model view of process modelling</article-title>
          ,”
          <source>Requir Eng</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>169</fpage>
          -
          <lpage>187</lpage>
          ,
          <year>1999</year>
          , doi: 10.1007/S007660050018.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>C.</given-names>
            <surname>Rolland</surname>
          </string-name>
          , “
          <article-title>Capturing System Intentionality with Maps</article-title>
          ,” in
          <source>Conceptual Modelling in Information Systems Engineering</source>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>158</lpage>
          ,
          <year>2007</year>
          , Accessed: Jun. 24,
          <year>2025</year>
          . [Online]. Available: https://hal.science/hal-00706146v1
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>