Development and Adoption of SATD Detection Tools: A State-of-practice Report

Edi Sutoyo 1,2, Andrea Capiluppi 1
1 Bernoulli Institute, University of Groningen, Groningen, The Netherlands
2 Department of Information Systems, Telkom University, Bandung, Indonesia

Abstract
Self-Admitted Technical Debt (SATD) refers to instances where developers knowingly introduce suboptimal solutions into code and document them, often through textual artifacts. This paper provides a comprehensive state-of-practice report on the development and adoption of SATD detection tools. Through a systematic review of the available literature and tools, we examined their overall accessibility. Our findings reveal that, although SATD detection tools are crucial for maintaining software quality, many face challenges such as technological obsolescence, poor maintenance, and limited platform compatibility. Only a small number of tools are actively maintained, hindering their widespread adoption. This report discusses common anti-patterns in tool development, proposes corrections, and highlights the need for implementing Findable, Accessible, Interoperable, and Reusable (FAIR) principles and fostering greater collaboration between academia and industry to ensure the sustainability and efficacy of these tools. The insights presented here aim to drive more robust management of technical debt and enhance the reliability of SATD tools.

Keywords
self-admitted technical debt, SATD, detection tools, FAIR principles, software engineering practices

1. Introduction

Technical Debt (TD) broadly refers to suboptimal code or design choices that compromise long-term software maintainability [1]. Within this domain, self-admitted technical debt (SATD) refers to instances where developers knowingly document suboptimal code implementations, often through comments in the source code [2]. Detecting and managing SATD has become increasingly important due to its significant impact on long-term software maintainability [3].

Over the past decade, various tools and approaches have been proposed to automate the detection of SATD, reflecting a growing recognition of its importance to software maintainability. However, despite theoretical advancements, the transition from proposed solutions to widely accessible, practically implementable tools remains a significant challenge in the field. Several studies have developed automated approaches to identify SATD in source code comments, with tools such as SATDBailiff [4] and DebtHunter [5] enabling more effective tracking and management of SATD instances. These tools are critical for managing technical debt, as SATD is prevalent in software projects, affecting 2.4% to 31% of files. SATD can also persist for extended periods, with a median lifespan ranging from 18 to 172 days and, in some cases, surviving for over 1,000 commits [6]. The availability of reliable tools facilitates better identification and management of SATD, helping developers address technical debt and improve overall software quality.

This paper presents a state-of-practice report on the current landscape of SATD detection tools. Based on a recently completed systematic literature review [7], it provides an evaluation of the available software tools, assessing their functionality and limitations across various dimensions, including accessibility, platform compatibility, and performance in real-world applications.
In addition to examining the practical aspects of these tools, this paper identifies recurring anti-patterns—such as poor maintenance, lack of interoperability with modern platforms, and inadequate documentation—that hinder their broader adoption and effectiveness. Based on this analysis, we propose actionable corrections to these anti-patterns to improve the sustainability, usability, and future-proofing of SATD detection tools. Our findings offer insights into the current gaps in the state-of-practice and suggest practical improvements that can drive more robust and reliable SATD management in software development.

BENEVOL24: The 23rd Belgium-Netherlands Software Evolution Workshop, November 21-22, Namur, Belgium
e.sutoyo@rug.nl (E. Sutoyo); a.capiluppi@rug.nl (A. Capiluppi)
ORCID: 0000-0002-8413-5070 (E. Sutoyo); 0000-0001-9469-6050 (A. Capiluppi)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

This paper is articulated as follows: Section 2 presents the related work and reviews existing research on SATD detection tools, focusing on their development, adoption, and challenges. Section 3 summarizes the methodology and categorizes the tools based on their functionality and availability. Section 4 identifies common anti-patterns in SATD tool availability, while Section 5 proposes actionable strategies for combating these anti-patterns. Finally, Section 6 concludes the paper.

2. Related work

Research software plays a crucial role in modern science, but its unavailability or malfunction can have serious repercussions. Lack of access to software and data hinders replication of studies, wastes resources on reinventing existing tools, and limits research opportunities [8]. Many published papers fail to provide accessible data or documentation on outlier handling, impeding reproducibility [9]. At the same time, insufficient software engineering practices in research can undermine Findable, Accessible, Interoperable, and Reusable (FAIR) principles [8], and the absence of source code compromises peer review and may bias subsequent work [10]. To address these issues, experts have long recommended adopting reproducible research practices, which involve publishing both papers and their computational environments [11, 12]. This approach can serve as a minimum standard for evaluating scientific claims when full independent replication is not feasible, ultimately enhancing the reliability and transparency of computational research [12].

Researchers have shared tools to identify SATD to facilitate empirical studies and improve software maintenance. These tools, such as SATDBailiff [4] and SATD Detector [13], use text mining and machine learning techniques to automatically detect SATD in source code comments, commit messages, pull requests, and issue trackers [14]. Studies have shown that SATD is common in software projects, affecting 2.4%-31% of files [2], and can be effectively identified using automated approaches with high precision and recall [15, 16]. Researchers have also explored specific types of SATD, such as "on-hold" SATD [17], and investigated the gap between admitted and measured technical debt [18]. These tools and studies contribute to better understanding and managing SATD, improving software quality and maintenance practices.
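To make the flavor of these detectors concrete, the following minimal sketch illustrates the keyword-pattern end of the spectrum, in the spirit of the comment patterns of Potdar and Shihab [2]. The pattern list and function names are our own illustrative choices, not the implementation of any tool cited above; learning-based detectors such as SATD Detector [13] instead train classifiers on labeled comment corpora.

```python
import re

# A few textual patterns commonly associated with SATD in source code
# comments (illustrative subset only; real tools learn richer features
# from labeled corpora instead of relying on a fixed list).
SATD_PATTERNS = [
    r"\bTODO\b", r"\bFIXME\b", r"\bhack\b", r"\bworkaround\b",
    r"\bugly\b", r"\btemporary\b", r"\bquick\s+fix\b",
]
SATD_RE = re.compile("|".join(SATD_PATTERNS), re.IGNORECASE)

def is_satd(comment: str) -> bool:
    """Return True if a comment matches any SATD keyword pattern."""
    return bool(SATD_RE.search(comment))

if __name__ == "__main__":
    comments = [
        "// TODO: replace this hack once the new API lands",
        "// computes the checksum of the payload",
    ]
    for c in comments:
        print(f"{is_satd(c)!s:5}  {c}")
```

Even such a naive matcher makes the trade-off visible: fixed patterns are cheap and transparent, but they miss debt admitted in free-form language, which is precisely where the ML-based tools aim to improve.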
3. Methodology

To evaluate the current state-of-practice of software tools designed to detect SATD, we conducted a systematic literature review (SLR) of academic literature and available tools [7]. The review employed search terms such as: ("self-admitted technical debt" OR SATD) OR ("technical debt" AND NLP) AND (detect* OR identif* OR predict*) AND ("software engineering" OR "software development"). We deliberately used a broad search string that does not include the term "tool", so as to capture as many SATD detection approaches as possible, ensuring a comprehensive review and minimizing the risk of overlooking relevant studies.

This study builds upon the SLR, which identified 68 papers on SATD detection approaches. While the SLR provided a detailed analysis of these approaches, the focus of this paper is narrower, centering specifically on the tools identified through the review. We analyze their accessibility and practical utility, highlighting gaps and proposing actionable solutions to address identified anti-patterns. By presenting these contributions, this paper complements the SLR by emphasizing practical insights that can guide the development and improvement of SATD detection tools.

From these 68 studies, we carefully reviewed each paper to determine which ones not only proposed an approach but also offered a prototype or ready-to-use tool. We attempted to access each tool through the links or repositories provided in the papers. Tools were classified into three categories:

• Accessible and functional: Tools that could be successfully accessed and run.
• Inaccessible or broken link: Tools with dead or missing links, making them unavailable.
• Obsolete or non-functional: Tools that could be accessed but were incompatible with modern platforms or could not be run successfully.

Out of the 68 papers, 60 primarily focused on methodologies, frameworks, or approaches without providing prototypes or implemented tools. These papers enriched our understanding of SATD detection but did not meet the criteria for tool evaluation in this study.

Table 1
SATD detection tools proposed from 2014-2024 - I stands for 'Identification', and C for 'Categorization'

| Name | Ref. | Year | Task | Description | Category |
|------|------|------|------|-------------|----------|
| DebtViz | [14] | 2023 | C | A tool that detects, classifies, visualizes, and monitors SATD, categorizing several debt types on a single platform | Accessible |
| A browser extension | [19] | 2022 | C | A browser extension using an ML model to automatically classify SATD types in rOpenSci R packages | Inaccessible |
| SATDBailiff | [4] | 2022 | I | A tool designed to mine, identify, and track SATD | Accessible |
| FixMe | [20] | 2021 | C | A GitHub bot developed to detect, monitor, and notify developers about on-hold SATD in their repositories | Broken link |
| DebtHunter | [5] | 2021 | I, C | A machine learning-based tool for detecting SATD | Accessible |
| SATD Detector | [13] | 2018 | I | A Java library and Eclipse plug-in that automatically detects SATD in comments and integrates with an IDE for easier management | Obsolete |
| eXcomment | [21, 22] | 2015 | I | A tool designed to parse Java source code and fetch code comments to identify SATD | Broken link |

As shown in Table 1, three tools, namely DebtViz [14], SATDBailiff [4], and DebtHunter [5], are currently accessible. However, two tools, FixMe [20] and eXcomment [21, 22], have broken links. Another tool, described as "A browser extension" in its paper [19], does not provide a valid link (inaccessible). Additionally, SATD Detector [13] is obsolete due to incompatibility with newer environments.
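Part of this accessibility check can be automated. The sketch below is a simplification of our manual procedure: it probes each published link and flags dead ones. The URLs shown are hypothetical placeholders, and a reachable link still required manual inspection to separate "accessible and functional" tools from "obsolete or non-functional" ones.

```python
import urllib.error
import urllib.request

# Placeholder links; the actual review used the URLs published in
# each of the 68 papers.
TOOL_LINKS = {
    "ExampleTool A": "https://example.org/tool-a",  # hypothetical
    "ExampleTool B": "https://example.org/tool-b",  # hypothetical
}

def probe(url: str, timeout: float = 10.0) -> str:
    """Classify a published tool link as reachable or broken.

    Reachability is only a necessary condition for the 'accessible
    and functional' category; whether the tool still runs on modern
    platforms must be verified by hand.
    """
    try:
        # HEAD avoids downloading the page body; a 4xx/5xx response
        # raises HTTPError, which is a subclass of URLError.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout):
            return "reachable"
    except (urllib.error.URLError, TimeoutError):
        return "broken link"

if __name__ == "__main__":
    for name, url in TOOL_LINKS.items():
        print(f"{name}: {probe(url)}")
```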
4. Anti-patterns in SATD tool availability

The state of SATD detection tools presents significant implications for researchers and practitioners. Many tools are outdated or incompatible, limiting the reliability of empirical studies and real-world applications [23]. Below, we isolate the implications of the unavailability of SATD detection tools.

• Reliability of findings - A limited number of functional SATD tools restricts the generalizability of experimental results. Using only a few tools can fail to capture the variety of SATD identification techniques, increasing the risk of non-representative findings and affecting reproducibility across different programming languages and projects [24].

• Bias in analysis - SATD tools employ unique algorithms, which may introduce bias in analyses. If only a few tools are used, researchers risk favoring certain types of SATD while neglecting others, limiting comprehensiveness. Access to a diverse range of tools is essential for balanced detection [25].

• Incomplete coverage of SATD - SATD encompasses various debt types, including code, design, and documentation debt [4]. Each type presents unique challenges and implications for software quality and maintainability. However, many existing SATD detection tools are designed to target specific debt types, often focusing narrowly on code-level debt. This limited scope can result in critical aspects of SATD—such as architectural design flaws or insufficient documentation—being overlooked. Such gaps in detection not only skew research conclusions but also hinder effective debt management practices in real-world projects [26].

• Technological obsolescence - Outdated tools and broken links highlight issues regarding software maintenance in the SATD detection community. Obsolescence hampers usability and sustainability, making it difficult for practitioners to adopt these tools. Continuous updates and adherence to best practices are crucial for maintaining relevance and accessibility [23]. Technological obsolescence is particularly evident in tools like SATD Detector, which became incompatible with modern platforms after its initial release. To ensure long-term sustainability, it is vital to foster community collaboration in tool development, incorporating FAIR principles to enhance the usability of SATD detection tools [27].

• Sustainability and reproducibility - Maintaining SATD tools through academic-industry partnerships can align development with real-world needs. Regular benchmarking and case studies will help sustain tool accuracy and reliability [4]. By promoting open-source collaboration, the community can mitigate technological risks and enhance the longevity of SATD tools [28].

5. Combating the anti-patterns

The sustainability of SATD detection tools is crucial for both academia and industry. Many tools become unavailable or outdated shortly after publication, hindering long-term technical debt management [23]. Below we discuss some practical actions that the SATD community should consider for the sustainability of its SATD detection tools.

Promoting diverse tools

Promoting a diverse range of SATD detection tools is vital for improving technical debt identification. By leveraging multiple tools that employ varied detection techniques, it becomes possible to minimize the occurrence of false positives and false negatives, thereby improving the reliability of the results. Studies indicate that combining diverse static analysis tools enhances detection coverage while maintaining manageable false alarm rates [25].
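As a deliberately simplified illustration of this point, the sketch below combines the verdicts of several detectors by majority vote. The detector names and rules are hypothetical; a real ensemble would wrap existing tools behind a common interface rather than use toy heuristics.

```python
from typing import Callable, List

# Each detector maps a comment to True (SATD) or False. In practice
# these would wrap genuinely diverse tools (e.g., a learned classifier
# next to a keyword matcher), not the toy rules below.
Detector = Callable[[str], bool]

def majority_vote(detectors: List[Detector], comment: str) -> bool:
    """Flag a comment as SATD only when most detectors agree,
    trading a few missed instances for fewer spurious ones."""
    votes = sum(d(comment) for d in detectors)
    return votes > len(detectors) / 2

# Three toy detectors standing in for diverse detection techniques.
def keyword(c: str) -> bool:
    return "todo" in c.lower() or "fixme" in c.lower()

def hedging(c: str) -> bool:
    return any(w in c.lower() for w in ("hack", "ugly", "temporary"))

def apology(c: str) -> bool:
    return "sorry" in c.lower() or "should be" in c.lower()

print(majority_vote([keyword, hedging, apology],
                    "// TODO: ugly hack, should be rewritten"))  # True
```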
Additionally, creating comprehensive, centralized archives for SATD tools is crucial for ensuring their long-term availability and accessibility [29]. These archives serve as repositories where tools can be preserved and maintained, preventing them from becoming inaccessible or forgotten over time. Furthermore, developing adaptable tools enhances resilience against obsolescence [24]: adaptable tools should evolve alongside changing software environments, ensuring that they remain relevant and effective in detecting SATD even as programming languages, frameworks, and methodologies advance.

Implementing FAIR principles

Adherence to FAIR principles is crucial for the sustainability of SATD tools. Guidelines from other fields, such as biomedical research, offer valuable insights into how FAIR principles can be applied [26]. Tools like FAIRshare and OpenEBench can support FAIR compliance in SATD detection tool development [26, 30]. The FAIR-USE4OS guidelines extend these principles to include User-Centered, Sustainable, and Equitable aspects, ensuring tools are reusable, reproducible, and equitable [27].

Building on these principles, a key strategy involves hosting SATD tools on reliable, long-term repositories. For example, using GitHub (https://github.com) and preserving scientific research outputs (including tools) on platforms like Zenodo (https://zenodo.org) helps maintain accessibility, sustainability, and adherence to FAIR principles. These platforms not only safeguard the tools from obsolescence, but also enable widespread sharing, thereby maintaining accessibility and sustainability. Features like version control are vital for tracking changes and maintaining historical records of tool development [31]. This level of transparency encourages collaborative improvement and strengthens the reproducibility of results—both crucial aspects of research and software engineering.

Explicit versioning further enhances the traceability and clarity of tools [32], making it easier for developers and researchers to locate and identify specific releases. As a result, replicating experiments, addressing issues, or building upon existing work becomes more straightforward. Documenting how a tool has evolved over time allows users to choose the version that best fits their needs. For instance, GitHub integrates versioning seamlessly with common workflows, ensuring consistency in updates and deployments, while Zenodo assigns Digital Object Identifiers (DOIs) to each version, providing persistent and reliable references [33].
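As a small illustration of how such version-level identifiers can be consumed programmatically, the sketch below queries Zenodo's public REST API for a record and prints its DOI and version. The record id is a placeholder, and the exact response field names are an assumption based on Zenodo's documented JSON schema rather than a guarantee.

```python
import json
import urllib.request

def zenodo_record(record_id: int) -> dict:
    """Fetch a record's metadata from the public Zenodo REST API."""
    url = f"https://zenodo.org/api/records/{record_id}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    rec = zenodo_record(1234567)  # placeholder id, not a real SATD tool
    # 'doi' and 'metadata.version' follow Zenodo's documented schema;
    # treat the field names as an assumption, not a guarantee.
    print(rec.get("doi"), rec.get("metadata", {}).get("version", "n/a"))
```

Because every version keeps its own DOI, a replication package can pin the exact release it was evaluated against rather than a moving "latest".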
Open-source practices for SATD tools

Open-source practices are essential for the sustainability and continued evolution of SATD detection tools. By adopting open-source models, developers can encourage active maintenance and community involvement, significantly reducing the risk of tool obsolescence [28]. Open-source ecosystems provide a foundation for collective innovation, where developers, researchers, and practitioners can contribute to the evolution of SATD tools.

One of the core benefits of integrating open-source practices with FAIR principles is the enhancement of transparency and collaboration. This transparency facilitates peer review and validation, and accelerates innovation by enabling developers to build upon each other's work. Adopting the FAIR-USE4OS guidelines further strengthens this approach by emphasizing User-Centered, Sustainable, and Equitable aspects [27]. These guidelines ensure that SATD tools address diverse user needs, support long-term usability, and promote equitable access to software, enhancing their cross-domain relevance and societal impact.

A cornerstone of open-source best practices is straightforward and comprehensive documentation. This includes user guides, developer instructions, and metadata detailing the tool's purpose, dependencies, and functionalities. Well-maintained documentation empowers both novice and experienced users to effectively utilize and contribute to the tool's development [11, 34, 35]. Furthermore, adopting licenses that support open-source distribution, such as the MIT, GPL, or Apache licenses, is essential [36]. These licenses clarify usage rights, encourage reuse, and protect intellectual property, fostering trust among users and contributors. Encouraging the use of standards and modular architectures can also improve interoperability and integration with other tools, making SATD detection solutions more versatile and adaptable to various contexts.

Enhancing academia-industry collaboration

Fostering collaboration between academia and industry is crucial for aligning SATD tools with real-world needs and challenges. Academic research often focuses on theoretical advancements and experimental frameworks, while industry seeks practical solutions that can be seamlessly integrated into existing workflows. Bridging this gap ensures that SATD tools address academic research questions and provide tangible benefits to practitioners managing technical debt in live software systems. Tool benchmarking is a critical step in this process, offering a way to validate the effectiveness, scalability, and usability of SATD tools across diverse scenarios. By utilizing real-world datasets and conducting case studies, researchers can demonstrate the practical applicability of their tools, building trust and interest within the industry [4]. These collaborative efforts also help identify gaps between research innovations and industry requirements, enabling the iterative refinement of tools to better serve both domains [37].

Continuous evaluation through practical use cases is key to ensuring that SATD detection tools remain adaptable and valuable over time [38]. By integrating these tools into actual software development and maintenance environments, both researchers and industry stakeholders can observe how the tools perform under varying conditions, such as different programming languages, team sizes, or project complexities. This hands-on feedback allows developers to refine features, optimize performance, and improve user experience. Partnerships between academia and industry can also lead to the development of shared benchmarks, datasets, and metrics, fostering standardization and comparability across tools. These efforts promote robust and sustainable tool development practices that directly address industry pain points. Ultimately, such collaboration enhances the effectiveness of SATD detection tools and contributes to improving software quality, reducing technical debt, and fostering innovation in software engineering.
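To make the benchmarking step concrete, the sketch below computes the two metrics most commonly reported for SATD detectors, precision and recall, against a manually labeled sample. Both the labels and the predictions here are placeholders standing in for a real ground-truth dataset and a real tool's output.

```python
from typing import List, Tuple

def precision_recall(pred: List[bool], truth: List[bool]) -> Tuple[float, float]:
    """Compute precision and recall of SATD predictions vs. ground truth."""
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labeled sample: one manually assigned SATD label per comment.
truth = [True, True, False, False, True]
pred = [True, False, False, True, True]  # hypothetical tool output
p, r = precision_recall(pred, truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

Running the same harness over a shared, versioned benchmark dataset is what would make results comparable across tools and over time.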
Table 2
Mapping between anti-patterns, practical actions and tools affected

| Anti-pattern | Practical Action | Tools Affected |
|--------------|------------------|----------------|
| Reliability of findings | Promoting diverse tools | DebtViz, A browser extension, SATDBailiff, FixMe, DebtHunter, SATD Detector, eXcomment |
| Bias in analysis | Promoting diverse tools | DebtViz, A browser extension, FixMe, DebtHunter |
| Incomplete coverage of SATD | Enhancing academia-industry collaboration | DebtViz, A browser extension, SATDBailiff, FixMe, DebtHunter, SATD Detector, eXcomment |
| Technological obsolescence | Implementing FAIR principles, Open-source practices for SATD tools | SATD Detector |
| Sustainability and reproducibility | Open-source practices for SATD tools | A browser extension, FixMe, eXcomment |

Table 2 further elaborates on these challenges by mapping anti-patterns to specific practical actions and the tools affected. To combat reliability of findings and bias in analysis, the promotion of diverse tools is emphasized: tools such as DebtViz, SATDBailiff, FixMe, DebtHunter, SATD Detector, eXcomment, and the browser extension stand to benefit, as leveraging multiple tools reduces inconsistencies and minimizes false positives and negatives. For incomplete coverage of SATD, fostering academia-industry collaboration is proposed to align tool development with real-world needs and ensure comprehensive detection. Tools such as DebtViz, SATDBailiff, FixMe, DebtHunter, SATD Detector, and the browser extension can benefit from this collaboration, resulting in improved validation and effectiveness. To address technological obsolescence, the implementation of FAIR principles and open-source practices is recommended, particularly targeting tools like SATD Detector. Finally, sustainability and reproducibility can be ensured through open-source practices, which encourage community-driven maintenance and accessibility. Tools such as the browser extension, FixMe, and eXcomment are cited as examples that can benefit from these practices, promoting longevity and reproducibility in SATD detection.

6. Conclusion

The analysis of SATD detection tools reveals several challenges that hinder their practical adoption and usefulness in research and practice. Many tools suffer from accessibility issues, such as outdated or broken links, reducing their availability for developers and making empirical research more difficult. To address these issues, the SATD community should adopt strategies that ensure the long-term sustainability and usability of these tools.

Applying the FAIR principles can help maintain the relevance of SATD tools and foster stronger collaboration between academia and industry. Such cooperation ensures that tools evolve alongside real-world software development needs and remain accessible for ongoing research. In addition, open-source practices should be adopted to encourage community-driven maintenance and development, making tools publicly available and encouraging collaborative contributions. Furthermore, regular benchmarking and real-world case studies should be implemented to ensure the relevance and reliability of these tools in diverse settings.

By focusing on sustainability, accessibility, and collaboration, the SATD detection community can create a more diverse and robust ecosystem of tools, better suited to managing technical debt. This will enhance software quality in both academic research and industrial practice, ensuring that SATD remains manageable as software systems grow in complexity.

References
[1] W. Cunningham, The WyCash portfolio management system, ACM SIGPLAN OOPS Messenger 4 (1992) 29-30.
[2] A. Potdar, E. Shihab, An exploratory study on self-admitted technical debt, in: 2014 IEEE International Conference on Software Maintenance and Evolution, IEEE, 2014, pp. 91-100.
[3] E. d. S. Maldonado, E. Shihab, Detecting and quantifying different types of self-admitted technical debt, in: 2015 IEEE 7th International Workshop on Managing Technical Debt (MTD), IEEE, 2015, pp. 9-15.
[4] E. A. AlOmar, B. Christians, M. Busho, A. H. AlKhalid, A. Ouni, C. Newman, M. W. Mkaouer, SATDBailiff: Mining and tracking self-admitted technical debt, Science of Computer Programming 213 (2022) 102693.
[5] I. Sala, A. Tommasel, F. Arcelli Fontana, DebtHunter: A machine learning-based approach for detecting self-admitted technical debt, in: Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering, 2021, pp. 278-283.
[6] E. d. S. Maldonado, R. Abdalkareem, E. Shihab, A. Serebrenik, An empirical study on the removal of self-admitted technical debt, in: 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, 2017, pp. 238-248.
[7] E. Sutoyo, A. Capiluppi, Self-admitted technical debt detection approaches: A decade systematic review, arXiv preprint arXiv:2312.15020 (2024).
[8] G. Lee, S. Bacon, I. Bush, L. Fortunato, D. Gavaghan, T. Lestang, C. Morton, M. Robinson, P. Rocca-Serra, S.-A. Sansone, H. Webb, Barely sufficient practices in scientific computing, Patterns (2021).
[9] H. Larsson, E. Lindqvist, R. Torkar, Outliers and replication in software engineering, in: Asia-Pacific Software Engineering Conference, 2014.
[10] J. Brito, J. Z. Li, J. H. Moore, C. Greene, N. A. Nogoy, L. Garmire, S. Mangul, Recommendations to enhance rigor and reproducibility in biomedical research, GigaScience (2020).
[11] L. Madeyski, B. Kitchenham, Would wider adoption of reproducible research be beneficial for empirical software engineering research?, Journal of Intelligent & Fuzzy Systems (2017).
[12] R. D. Peng, Reproducible research in computational science, Science 334 (2011) 1226-1227.
[13] Z. Liu, Q. Huang, X. Xia, E. Shihab, D. Lo, S. Li, SATD Detector: A text-mining-based self-admitted technical debt detection tool, in: 2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion), 2018.
[14] Y. Li, M. Soliman, P. Avgeriou, Automatic identification of self-admitted technical debt from four different sources, Empirical Software Engineering 28 (2023).
[15] Q. Huang, E. Shihab, X. Xia, D. Lo, S. Li, Identifying self-admitted technical debt in open source projects using text mining, Empirical Software Engineering 23 (2017) 418-451.
[16] F. Zampetti, C. Noiseux, G. Antoniol, F. Khomh, M. Di Penta, Recommending when design technical debt should be self-admitted, in: 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, 2017, pp. 216-226.
[17] R. Maipradit, C. Treude, H. Hata, K. Matsumoto, Wait for it: Identifying "on-hold" self-admitted technical debt, Empirical Software Engineering 25 (2020) 3770-3798.
[18] L. Pavlič, T. Hliš, M. Heričko, T. Beranič, The gap between the admitted and the measured technical debt: An empirical study, Applied Sciences 12 (2022) 7482.
[19] J. Y. Khan, G. Uddin, Automatic detection and analysis of technical debts in peer-review documentation of R packages, in: 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE, 2022, pp. 765-776. doi:10.1109/SANER53432.2022.00094.
[20] S. Phaithoon, S. Wongnil, P. Pussawong, M. Choetkiertikul, C. Ragkhitwetsagul, T. Sunetnanta, R. Maipradit, H. Hata, K. Matsumoto, FixMe: A GitHub bot for detecting and monitoring on-hold self-admitted technical debt, in: 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), IEEE, 2021, pp. 1257-1261.
[21] M. A. D. F. Farias, M. G. D. M. Neto, A. B. D. Silva, R. O. Spinola, A contextualized vocabulary model for identifying technical debt on code comments, in: 2015 IEEE 7th International Workshop on Managing Technical Debt (MTD), IEEE, 2015, pp. 25-32. doi:10.1109/MTD.2015.7332621.
[22] M. Farias, T. S. Mendes, M. G. Mendonça, R. O. Spínola, On comment patterns that are good indicators of the presence of self-admitted technical debt and those that lead to false positive items, in: AMCIS, 2021.
[23] J. Costa, P. Meirelles, C. Chavez, On the sustainability of academic software, in: Proceedings of the XXXII Brazilian Symposium on Software Engineering, ACM, 2018, pp. 202-207.
[24] A. Deo, S. K. Dash, G. Suarez-Tangil, V. Vovk, L. Cavallaro, Prescience, in: Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, ACM, 2016, pp. 71-82.
[25] A. Algaith, P. Nunes, F. Jose, I. Gashi, M. Vieira, Finding SQL injection and cross site scripting vulnerabilities with diverse static analysis tools, in: 2018 14th European Dependable Computing Conference (EDCC), IEEE, 2018, pp. 57-64.
[26] B. Patel, S. Soundarajan, H. Ménager, Z. Hu, Making biomedical research software FAIR: Actionable step-by-step guidelines with a user-support tool (2022).
[27] R. Sonabend, H. Gruson, L. Wolansky, A. Kiragga, D. S. Katz, FAIR-USE4OS: Guidelines for creating impactful open-source software, PLOS Computational Biology 20 (2024) e1012045.
[28] W. Hasselbring, L. Carr, S. Hettrick, H. Packer, T. Tiropanis, From FAIR research data toward FAIR and open research software, it - Information Technology 62 (2020) 39-47.
[29] G. Audemard, L. Paulevé, L. Simon, SAT Heritage: A community-driven effort for archiving, building and running more than thousand SAT solvers, Springer International Publishing, 2020, pp. 107-113.
[30] E. M. del Pico, J. L. Gelpi, S. Capella-Gutiérrez, FAIRsoft: A practical implementation of FAIR principles for research software, bioRxiv (2022).
[31] D. Spinellis, Version control systems, IEEE Software 22 (2005) 108-109.
[32] V. Stirbu, T. Mikkonen, Introducing traceability in GitHub for medical software development, in: Product-Focused Software Process Improvement: 22nd International Conference, PROFES 2021, Springer, 2021, pp. 152-164.
[33] M. Klein, L. Balakireva, On the persistence of persistent identifiers of the scholarly web, in: Digital Libraries for Open Knowledge: 24th International Conference on Theory and Practice of Digital Libraries, TPDL 2020, Springer, 2020, pp. 102-115.
[34] R. C. Jiménez, M. Kuzak, M. Alhamdoosh, M. Barker, B. Batut, M. Borg, S. Capella-Gutierrez, N. Chue Hong, M. Cook, M. Corpas, M. Flannery, L. Garcia, J. L. Gelpí, S. Gladman, C. Goble, M. González Ferreiro, A. Gonzalez-Beltran, P. C. Griffin, B. Grüning, J. Hagberg, P. Holub, R. Hooft, J. Ison, D. S. Katz, B. Leskošek, F. López Gómez, L. J. Oliveira, D. Mellor, R. Mosbergen, N. Mulder, Y. Perez-Riverol, R. Pergl, H. Pichler, B. Pope, F. Sanz, M. V. Schneider, V. Stodden, R. Suchecki, R. Svobodová Vařeková, H.-A. Talvik, I. Todorov, A. Treloar, S. Tyagi, M. van Gompel, D. Vaughan, A. Via, X. Wang, N. S. Watson-Haigh, S. Crouch, Four simple recommendations to encourage best practices in research software, F1000Research 6 (2017) 876.
[35] M. Corpas, D. Mellor, Four simple recommendations to encourage best practices in research software (2019).
[36] T. Gamblin, Picking an open source license at LLNL: Guidance and recommendations from the Computing Directorate, Technical Report, Lawrence Livermore National Laboratory (LLNL), Livermore, CA, United States, 2021.
[37] M. Bikard, K. Vakili, F. Teodoridis, When collaboration bridges institutions: The impact of university-industry collaboration on academic productivity, Organization Science 30 (2019) 426-445.
[38] P. M. Duvall, S. Matyas, A. Glover, Continuous Integration: Improving Software Quality and Reducing Risk, Pearson Education, 2007.