Trustworthy AI in Medicine and Healthcare
Anastasiya Doroshenkoa
a
    Lviv Polytechnic National University, S. Bandery 12, Lviv, 79013, Ukraine


                 Abstract
                 The problem of reliability and the degree of trust in artificial intelligence (AI) systems is now
                 particularly acute. Today AI is implemented in various areas of human life: industrial
                 production, the defense industry, economics, education, and medicine. In this regard,
                 reasonable questions arise: Can we trust AI? To what extent are the decisions made by AI
                 justified? Who is responsible for the mistakes made by an AI system, which can lead not only
                 to financial but also to human losses?
                 The problem of trustworthy AI is especially relevant for medicine and healthcare. Today,
                 thanks to the evolution of AI, personalized medical applications have moved from solving
                 diagnostic problems to therapy. While the first generation of medical technologies processed
                 only structured data, today's AI-based medical systems are built on big data platforms and
                 process unstructured data. The next generation of medical technology will focus on working
                 with Edge-of-Things data – the huge amount of streaming data generated by IoT platforms,
                 cloud systems, and edge computing platforms. These systems can support personalized
                 healthcare through smart healthcare applications on edge devices such as smart sensors and
                 wearables. Interactive virtual agents will process these data to report on the patient's health
                 status and to provide specific recommendations for treatment.
                 Due to the widespread use of artificial intelligence technologies in the healthcare sector, it
                 has become necessary to establish legal requirements for such systems. This is needed to
                 accelerate the adoption of such systems by increasing the trust of both patients and doctors.
                 To sustain the trustworthiness of AI, two sets of widely adopted principles have been outlined
                 by the Organization for Economic Co-operation and Development (OECD) and the European
                 Commission’s AI High-Level Expert Group (HLEG). The OECD defines five principles for
                 implementing trustworthy AI: inclusive growth, sustainable development, and well-being;
                 human-centered values and fairness; transparency and explainability; robustness, security,
                 and safety; and accountability. The HLEG developed the Ethics Guidelines for Trustworthy
                 Artificial Intelligence (April 2019). According to the Guidelines, trustworthy AI should be
                 lawful, ethical, and robust. The Guidelines also put forward a set of seven key requirements
                 that AI systems should meet in order to be deemed trustworthy.
                 This talk discusses the current situation with trustworthy AI systems in medicine and
                 healthcare, compares the approaches and legal requirements for developing AI systems in
                 different countries, and considers the documents developed by the European Commission
                 and the HLEG on AI. Recommendations for developing medical AI systems that comply with
                 the Directive of the European Parliament and of the Council on Adapting Non-Contractual
                 Civil Liability Rules to Artificial Intelligence (September 2022) and the Assessment List for
                 Trustworthy Artificial Intelligence will also be considered.
                 Keywords
                 Ethical AI, Trustworthy AI, Digital Healthcare Systems, Smart Healthcare and Medicine,
                 Ethics Guidelines for Trustworthy Artificial Intelligence.

                 This work was realized within the framework of the Erasmus+ Jean Monnet Module
                 “Trustworthy artificial intelligence: the European approach” (101085626 – TrustAI).

IDDM-2022: 5th International Conference on Informatics & Data-Driven Medicine, November 18–20, 2022, Lyon, France
EMAIL: anastasia.doroshenko@gmail.com
ORCID: 0000-0002-7214-5108
            ©️ 2022 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)