    Ethics of AI Technologies and Organizational Roles:
       Who Is Accountable for the Ethical Conduct?

                                 Vaiste Juho[0000-0002-4353-6388]

                                  University of Turku, Finland
                                   juho.vaiste@utu.fi



       Abstract. Artificial intelligence (AI) is recognized as having the potential to
       transform many fields of our society and business environment. In addition to
       exploring the positive impacts, multidisciplinary research on the ethical
       concerns related to artificial intelligence should take place.
           The rise of AI ethics raises new questions for management studies. New ethical
       demands drive organizations to introduce new practices, routines, and roles. In
       recent years the AI ethics community has focused on principle-level work,
       resulting in manifold documentation. Converting these findings into
       organizational practices is a crucial task for management studies.
           Part of this task for management studies is to answer who should take
       responsibility for the ethical questions. Role-specific, systematic, and
       agency-based approaches offer different views on the organizational roles and
       practices suited to this ethical challenge. Arguing from the role-specific
       approach, the emerging roles of AI ethics could form at the boundaries between
       programmers, designers, and compliance personnel.
           The article presents limited interview data to support future research. The
       interview results are compared with the role-theoretical view of organizational
       roles and the earlier literature on ethics management. Additionally, the article
       discusses the three categories proposed by Wilson et al. (2017) and reflects on
       them in light of the theoretical insights and the interview data.

       Keywords: Ethics of artificial intelligence, organizational roles, corporate
       social responsibility, responsible AI


1      Introduction

Artificial intelligence (AI) is recognized as having the potential to transform many
fields in our society and business environment (Stanford, 2016). In addition to
discovering the possible positive impacts, multidisciplinary research on the ethical
concerns related to artificial intelligence should take place (Russell, Dewey &
Tegmark, 2015). Even though some of the ethical questions raised by AI are familiar
from the tradition of information technology ethics, especially from the well-known
PAPA model (Mason, 1986), all of these questions are now presented in a novel light
and with heightened importance.
   The rise of AI ethics raises new questions for management studies. New ethical
demands drive organizations to introduce new practices, routines, and roles.

Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).


In the last few years the AI ethics community has focused on principle-level work,
resulting in manifold and foundational documentation, including IEEE's Ethically
Aligned Design (2018), the ACM Code of Ethics update (2018), the AI Now Institute's
2018 report, An Ethical Framework for a Good AI Society (Floridi et al., 2018), and
the European Union High-Level Expert Group's "Ethics Guidelines for Trustworthy
AI" (2019). It is a fundamental task for management studies to convert and translate
these findings into organizational practices.
    This article presents conceptual ideas regarding organizational roles to answer the
rising demand for ethical conduct. Little literature has been published on the specific
question of the organizational roles disclosed by AI ethics. However, the long tradition
of compliance and business ethics literature, such as the organizational roles and
organizational citizenship related to environmental responsibility (Boiral & Paillé,
2012), and the realized managerial proposals of information technology ethics (Jin et
al., 2007), offer starting points for outlining ethical roles for the era of artificial
intelligence. Due to the limited literature addressing the call of AI ethics directly, the
article by Wilson et al. (2017), "The jobs that artificial intelligence will create", has
been very influential.
    The emerging roles and practices of AI ethics could form at the boundaries
between programmers, designers, and compliance personnel. This article approaches
organizational roles through role theory and presents three more detailed perspectives
for creating roles in an organization.
    Additionally, the paper analyzes the three principal categories and nine sub-roles
proposed by Wilson et al. (2017) and reflects on them against the theoretical
background and earlier literature. The analysis is supported by brief interview data
collected from ten representatives of ICT vendor companies. The paper offers
conceptual insights on questions regarding the integration of AI ethics with
organizational roles: To what extent can the ethical conduct required for AI solutions
be managed within the existing roles? To what extent can new ethical demands be
integrated into the existing roles via training and educational change? Are new
organizational roles for ethical conduct required?
    This article assumes that responsible business, corporate social responsibility
(CSR), and ethical conduct overall are profitable for businesses. This proposition of
stakeholder theory remains partly debatable (Weitzner & Deutsch, 2019) despite the
strong evidence of the positive effect of responsibility (Jin & Drozdenko, 2010). Under
this assumption, it should be clear that aligning all of an organization's AI solutions
and practices with ethical conduct should be a positive action for the organization's
outcomes. The ethics of artificial intelligence is a clear continuation of other themes of
responsibility and a driving force for changes within business ethics and responsible
business.


2      Theoretical background

2.1    AI/ML technologies and ethics
Artificial intelligence (AI) refers to a range of techniques and technologies such as
machine learning, recognition technologies, and machine reasoning. The widely used
term AI/ML refers to the dominant position of machine learning techniques in the
applications that are called AI. As a general-purpose technology, AI affects various
industries, application areas, and other fields such as robotics. AI equips robots with
the capabilities to react, learn, and act based on events in their environments.
    The definition of AI is still under debate - and often challenged by computer
scientists and engineers - but some baseline specification of AI makes the societal and
ethical discussion more valuable. The definition of AI by the European High-Level
Expert Group is a good starting point:
    “Artificial intelligence (AI) refers to systems that display intelligent behaviour by
analysing their environment and taking actions – with some degree of autonomy – to
achieve specific goals. AI-based systems can be purely software-based, acting in the
virtual world (e.g. voice assistants, image analysis software, search engines, speech
and face recognition systems) or AI can be embedded in hardware devices (e.g. ad-
vanced robots, autonomous cars, drones or Internet of Things applications).”
    The European guidelines for trustworthy AI offer a good opportunity to adapt the
presented ethical principles and requirements to a more practical level. The European
High-Level Expert Group's report "Ethics Guidelines for Trustworthy AI" describes
seven high-level principles and ethical categories as the key requirements of ethical
AI: human agency and oversight; technical robustness and safety; privacy and data
governance; transparency; diversity, non-discrimination and fairness; societal and
environmental well-being; and accountability.


2.2    Organizational roles for ensuring ethical conduct and responsible
       business
Regarding the ethics of AI, part of management studies' task is to answer who should
take responsibility for ethical questions and customs. Although the ultimate
responsibility is placed on the management board and the compliance department,
liability carriers are likely to be found - and should be found - in other roles as well.
An intriguing puzzle is whether the ascending ethical demand can be fulfilled by the
existing organizational functions via effective retraining (e.g., Furey & Martin, 2018;
Goldsmith & Burton, 2017) or whether new ethics roles will emerge. In either case,
integrating ethical thinking into an organization's AI practices is required to achieve
the full benefits of these technologies.
    In this article, organizational roles are understood in light of role theory. Role
theory describes an organization as a collection of roles, a view which most likely fits
the situation in ICT companies well. The exact roles vary based on the situation,
environment, and organization. Role theory also states that the content of roles is
socially constructed (Knight & Harland, 2005).
    Based on the earlier literature, the organizational roles related to ethical conduct
are introduced from two perspectives. First, the corporate compliance and CSR
literature presents a few specific organizational roles, especially from the field of
environmental responsibility and management, such as environmental manager,
compliance manager, and ethics officer. Second, earlier research in information
technology ethics shows a few examples of how responsible conduct has been
organized in the ICT industry; the literature from the ICT perspective is limited.
    Regarding the compliance and CSR view, it is unclear whether the requirements of
ethical AI can be fulfilled and complied with through current roles such as
sustainability manager or compliance manager. This doubt rests primarily on two
considerations:
1. The current content of the roles does not fit: fulfilling the requirements of ethical AI
   can demand an in-depth understanding of the technology and of the technological
   nature of AI ethics.
2. The current form of the roles does not fit: aligning AI solutions with ethics might
   require organizational roles of an entirely different nature compared to the
   environmental manager or compliance officer roles.
   The literature on environmental management is relatively well developed. When
categorizing the field with regard to organizational roles, three different approaches to
environmental management can be identified:
1. Role-specific approaches (i.e., environmental managers, sustainability managers)
   (Organ et al., 2006; Greenwood et al., 2012)
2. Agency-based approaches emphasizing the impact of every individual in the
   organization (Andersson & Bateman, 2000; Boiral & Paillé, 2012)
3. Systematic approaches (i.e., environmental management systems, which share the
   responsibility systematically among the organization's members) (Sroufe, 2003).
   The role-specific approach is straightforward: environmental responsibility is
managed by environmental managers or experts within the organization. The
role-theoretical view fits this approach well, as the exact role of an environmental
manager can vary considerably across organizations. The history of environmental
managers also includes a shift from compliance-based conduct to a more responsible
and leading role within organizations (Greenwood et al., 2012).
   The main categories of organizational citizenship behavior, described in general
form by Organ et al. (2006) and applied to the environmental theme by Boiral and
Paillé (2012), are the principal elements of the agency-based approach. Helping,
sportsmanship, organizational compliance, individual initiative, and self-development
are ways in which an individual can encourage environmental progress within an
organization. For the agency approach, the importance of sharing tacit knowledge has
been recognized (Boiral & Paillé, 2012), and the approach also relies on teamwork
and team effort (Remmen & Lorentzen, 2000).
   An environmental management system refers to the systematic approach to
environmental issues: all of the organization's processes and functions, management
structures and roles, and resources are mapped with regard to environmental impact
and protection (Sroufe, 2003). Relating to organizational roles, practical
implementation examples can be found in various institutions (St. Elizabeth Medical
Center, 2008; University of Gothenburg, 2016).
   Based on the earlier literature, the organizational roles and practices attached to the
ethics of information technology are far less developed than those in environmental
issues. The phenomenon is natural, as the roots of modern environmental concerns go
back to the mid-20th century, while information technology ethics did not begin until
the 1980s. The two main topics arising from the literature are the roles of data privacy
and security personnel (Chen & Zhao, 2012; Clearwater & Hughes, 2013) and
professionalism (Bynum & Simon, 2004).
   Data privacy has been one major topic of information technology ethics, starting at
the latest from Mason's article (1986). Privacy is a permanent topic, present also in all
the AI ethics guidelines and reports. In the earlier literature, data privacy has rarely
been viewed from the perspective of organizational roles: data privacy officers are the
closest equivalent to environmental managers. In other cases, privacy, security, and
product safety issues have belonged to safety officers or compliance managers.
Although these roles might sound compliance-focused, the need for a more proactive
grip on privacy issues has been recognized (Kleindienst et al., 2017).
   One crucial perspective on organizational roles arising from the tradition of ICT
ethics is professionalism. Various professions rely on robust professional codes of
ethics, and this is a discussed topic also among information technology practitioners
(Bynum & Simon, 2004). However, considering the literature, it is far from certain
that professionalism in the ICT industry is established. The professionalism of the ICT
industry relies on a few public codes (for example, those of the ACM and IEEE), but
these offer only high-level principles for developers' work.
   A question arising from professionalism is whether all the ethical issues related to
applying AI technologies are the responsibility of developers. Many AI ethics
questions come down to broader societal problems, which might require advanced
ethical knowledge. Besides, the relationship between developers and the other roles
involved in AI ethics questions remains unclear. An old example from the public
relations industry offers two main categories of roles within that industry: the
communication manager role and the communication technician role (Dozier, 1984).
A similar situation could exist in the information technology industry between project
managers and developers. Another example from public relations offers valuable
information on how roles may change due to technological development: personnel in
communication technician roles, and early practitioners in the industry, took on a
more substantial role in social media expertise when these platforms became
important (Lee et al., 2015).


2.3    Roles created by AI – Trainers, Explainers and Sustainers for
       Ethical AI
“Humans in these roles will complement the tasks performed by cognitive technol-
ogy, ensuring that the work of machines is both effective and responsible — that it is
fair, transparent, and auditable.”
Wilson et al. (2017)

Wilson et al. (2017) present three categories of jobs that the era of AI technologies
would create: trainers, explainers, and sustainers. Briefly, the categories can be
explained as follows: trainers train AI systems to behave in a desired way, including
detecting and understanding the nuances of human behaviour; explainers work as
interpreters between technology and decision-makers in situations where the
functioning or conclusions of a technological solution must be explained; and
sustainers work as maintainers of AI systems, for example ensuring the fairness,
auditability, and safety of the systems even after an extended period of run time
(Wilson et al., 2017).
   As stated in the quote above, Wilson et al. (2017) give a weighty position to the
roles related to keeping AI systems ethical and responsible. Not all the jobs introduced
take part in ensuring ethical conduct and practice, but the original text presents a few
examples: worldview trainer, transparency analyst, and automation ethicist.
   When reflecting these three example job titles against the roles related to
environmental concerns, the inter-organizational nature of these jobs seems essential.
As the ethical discussion is in a constant relationship with the surrounding society,
inter-organizational bonds are required. When analyzing the roles presented, one
should note that the responsibility for inter-organizational relationships is also usually
given to an individual within the organization (Janowicz-Panjaitan & Noorderhaven,
2009).
   All the presented roles – worldview trainer, transparency analyst, and automation
ethicist – clearly belong to the role-specific approach to solving the ethical concerns
related to AI technologies. However, there is no evidence that the skills and tasks
specified for these roles could not be utilized through agency-based or systematic
approaches. Ethics officers could have a clear impact on organizations, judging by the
evidence from environmental practices (Greenwood et al., 2012), but a strong
teamwork culture and a systematic allocation of responsibility would also help to
ensure the ethical development of AI solutions.


3      Empirical research

3.1    Method
Ten informants from IT/AI vendors were interviewed for the research. The informants
possess significant expertise in AI/ML, data science, or information technology in
general. All the informants work in Finnish IT companies which have demonstrated
active progress and participation in the development and implementation of AI/ML
technologies. The emphasis is on companies which can be called "vendors", referring
to companies which provide AI/ML development services for their clients' projects.
This definition excludes product companies which use AI/ML technology only for
their own product or service. The informants represent themselves, and the interviews
were conducted as anonymous expert interviews.
   The semi-structured interviews were conducted in spring 2018. The interviews had
a defined structure and questions, but open-form answers were accepted and
encouraged. The answers are presented as purely descriptive, and no further analysis
has been run on them, due to the exploratory and preliminary nature of the research.
As part of a larger interview structure, the three specific questions related to this
article were:
1. Defining the roles of the team in machine learning projects
2. Which role takes the biggest responsibility for ethical risks
3. The need for personnel focusing on ethics


3.2    Interview results
The roles in a team of AI/ML projects
The first question asked the informants to define the operational teams involved in
building AI/ML solutions. The answers varied, but the most defining factor was the
size of the company. The core roles in AI/ML teams are data scientists and AI
engineers. These roles work on the technical side of AI/ML projects. Their tasks start
from data collection, evaluation, and cleaning, and extend to AI/ML development,
including model planning, implementation, and the production management of the
AI/ML solutions.
   In smaller companies or teams, the roles of data scientist and AI engineer can be
combined. The smaller the company, the closer to the clients the technical experts
operate. Based on the interviews, all the AI/ML teams were relatively small,
containing at most 4-5 technical experts. The results may differ in the future as
AI/ML projects become larger.
   Some of the larger AI/ML solutions and systems can be based on the work of
several development teams. One team can focus on data analytics and finding useful
new insights from the data, while another team does only AI/ML development work.
This separation of teams can be used to distribute different parts of the project
between the teams: for example, one team can focus on building a machine vision
module for the project while another focuses on the voice recognition part.
   The technical group is still just a small part of the whole team. Based on the
informants' answers, project management in particular is essential in most projects. In
smaller companies the entire team is smaller, but someone nevertheless takes
responsibility for the project management. According to the informants, the planning,
design, and management of AI/ML projects might become a more essential part of the
projects than in earlier ICT projects.


Which role takes the biggest responsibility for ethical risks
To the second question, the informants offered identical answers: usually, every
person in the project shares the responsibility for identifying, managing, and taking
responsibility for the ethical risks. However, this might be partly a fault of the
formulation of the question, as the answer is overly obvious. Of course, it is also
crucial that every person involved in the development is aware of and influences the
ethical side of AI/ML projects.
   In the additional comments to the question, it is possible to find some diversity in
the answers. One informant emphasized the role of the person working with data over
the others, which can be a good observation. The data itself might contain many
problems causing ethical concerns. The person who is responsible for choosing,
modifying, cleaning, and inputting the data for the project can carry extensive
responsibility in the sense of ethical conduct.


   It was emphasized that in smaller companies, the final responsibility for ethical
conduct lies with the upper management, meaning the CEO or equivalent. Though
juridically the situation is the same in larger companies, the respondents from larger
companies referred more to the responsibility of the development team in their
answers. Some respondents placed the most significant responsibility on the project
managers.



The need for personnel focusing on ethics
The informants were asked whether they could see new organizational roles emerging
in response to the demand for ethical conduct in AI/ML solutions. Most of the
respondents emphasized the need to integrate ethical evaluation and thinking into all
their operations and personnel over the option of having a named expert focusing on
the issues.
   Some of the respondents saw a possible need for technology ethicists and ethics
officers in the future. These answers were tied to the size of the company: in larger
companies, such roles could be possible. The ethicists and ethics officers could be
compared or held equivalent to the lawyers or legal advisors in the companies.
   The pressure for ethics officers or equivalent roles might come with future
regulation. Already now, the GDPR requires many companies to designate a
representative who is responsible for data protection. The answers to this question did
not determine the respondents' opinions of the need for AI/ML ethics: some of the
respondents highlighted its value and urged better ethical understanding even while
not seeing a reason for separate ethics roles.


4      Discussion

Ensuring ethical conduct and aligning ethics into all AI/ML solutions is necessary for
all organizations. The current challenge is to find out how principle-level guidelines
and ethical codes should be turned into practice. This paper argues that defining
organizational roles – whether from a role-specific, agency-based, or systematic
approach – will help to put these guidelines into practice.
    The evidence for agency-based advocacy presented in the CSR literature probably
translates into the practices of ethical AI. Individuals promoting the ethics of AI
within an organization will surely have an impact, at least on the organizational
knowledge around the topic. The brief interview data supports the view that the
ICT/AI industry is still in the phase of knowledge collection and awareness creation.
The agency-based approach fits this observation, as it might be enough for many
ICT/AI companies to have a few individuals interested in ethical consideration and
active knowledge acquisition around the topic.
    When looking from the systematic perspective, the possible shortcomings are most
likely to be found at the operational level (e.g., development practices, ethical
alignment of project management). The attention that the higher levels need to pay to
AI ethics involves relatively many issues already familiar to top management. As the
work of the AI ethics community has focused on high-level principles, more empirical
analysis and evidence is needed for a systematic, organization-wide scheme for the
ethical alignment of AI solutions.
   The job titles presented by Wilson et al. (2017) give interesting insights into what
kind of organizational roles ethical AI could need. The job titles might not come true,
but the content described under the roles can be valuable when considering the roles
necessary for the ethical alignment of AI solutions. The open question is whether all
the roles (or their content) can be integrated into one single organizational role, for
example an ethics manager, or whether the responsibility for the tasks should be
divided between ethics manager and ethics technician levels.
   This paper is limited to conceptual insights for future research. More considerable
empirical evidence about the present situation in organizations would help researchers
to design better organizational structures, practices, and roles for ethical alignment.
However, the current debate on ethical AI has not paid much attention to these
organizational and managerial questions. The AI/ML ethics community should show
interest in this direction and give AI organizations a clearer pathway to ethical
alignment in practice.


      References

 1. ACM Code of Ethics and Professional Conduct (2018). Retrieved 11.2.2019.
 2. AI Now Institute: AI Now 2018 Symposium Report (2018). Retrieved 18.1.2019.
 3. Andersson, L. M., & Bateman, T. S. (2000). Individual environmental initiative: Champi-
    oning natural environmental issues in US business organizations. Academy of Management
    Journal, 43(4), 548-570.
 4. Boiral, O., & Paillé, P. (2012). Organizational citizenship behaviour for the environment:
    Measurement and validation. Journal of business ethics, 109(4), 431-445.
 5. Bynum, T. W., & Simon, R. (2004). Computer ethics and professional responsibility.
    Blackwell Publishing, 89-98.
 6. Chen, D., & Zhao, H. (2012). Data security and privacy protection issues in cloud compu-
    ting. In 2012 International Conference on Computer Science and Electronics Engineering
    (Vol. 1, pp. 647-651). IEEE.
 7. Clearwater, A., & Hughes, J. (2013). In the beginning: an early history of the privacy
    profession. Ohio State Law Journal, 74(6), 897-922.
 8. Dozier, D. M. (1984). Program evaluation and the roles of practitioners. Public Relations
    Review, 10(2), 13-21.
 9. European Union (2019). Ethics guidelines for trustworthy AI. Retrieved 22.4.2019.
10. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Schafer,
    B. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks,
    Principles, and Recommendations. Minds and Machines, 28(4), 689-707.
11. Furey, H., & Martin, F. (2018, April). Introducing ethical thinking about autonomous vehi-
    cles into an AI course. In Thirty-Second AAAI Conference on Artificial Intelligence.

12. Goldsmith, J., & Burton, E. (2017, March). Why teaching ethics to AI practitioners is im-
    portant. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
13. Greenwood, L., Rosenbeck, J., & Scott, J. (2012). The Role of the Environmental Manager
    in Advancing Environmental Sustainability and Social Responsibility in the Organization.
    Journal of Environmental Sustainability, 2(2), 5.
14. IEEE: Ethically Aligned Design, version 2 (2018). Retrieved 8.1.2019.
15. Janowicz-Panjaitan, M., & Noorderhaven, N. G. (2009). Trust, calculation, and interorgan-
    izational learning of tacit knowledge: An organizational roles perspective. Organization
    Studies, 30(10), 1021-1044.
16. Jin, K. G., Drozdenko, R., & Bassett, R. (2007). Information technology professionals’ per-
    ceived organizational values and managerial ethics: An empirical study. Journal of Business
    Ethics, 71(2), 149-159.
17. Jin, K. G., & Drozdenko, R. G. (2010). Relationships among perceived organizational core
    values, corporate social responsibility, ethics, and organizational performance outcomes: An
    empirical study of information technology professionals. Journal of Business Ethics, 92(3),
    341-359.
18. Kleindienst, D., Nüske, N., Rau, D., & Schmied, F. (2017). Beyond mere compliance -
    delighting customers by implementing data privacy measures? In Leimeister, J. M., &
    Brenner, W. (Eds.): Proceedings der 13. Internationalen Tagung Wirtschaftsinformatik
    (WI 2017), St. Gallen, pp. 807-821.
19. Knight, L., & Harland, C. (2005). Managing supply networks: Organizational roles in
    network management. European Management Journal, 23(3), 281-292.
20. Lee, N., Sha, B. L., Dozier, D., & Sargent, P. (2015). The role of new public relations prac-
    titioners as social media experts. Public Relations Review, 41(3), 411-413.
21. Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1),
    5-12.
22. Organ, D. W., Podsakoff, P. M., & MacKenzie, S. B. (2006). Organizational citizenship
    behavior: Its nature, antecedents, and consequences. Thousand Oaks, CA: Sage Publica-
    tions.
23. Remmen, A., & Lorentzen, B. (2000). Employee participation and cleaner technology:
    learning processes in environmental teams. Journal of Cleaner Production, 8(5), 365-373.
24. Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial
    artificial intelligence. Ai Magazine, 36(4), 105-114.
25. Sroufe, R. (2003). Effects of environmental management systems on environmental man-
    agement practices and operations. Production and operations management, 12(3), 416-431.
26. Stanford University (2016). One Hundred Year Study on Artificial Intelligence. Stanford
    University. Retrieved 1.9.2017.
27. St. Elizabeth Medical Center (2008). Environmental Management System. Retrieved
    8.7.2019.
28. University of Gothenburg (2016). Roles, responsibility and authority within the environ-
    mental management system. Retrieved 8.7.2019.
29. Weitzner, D., & Deutsch, Y. (2019). Why the time has come to retire instrumental stake-
    holder theory. Academy of Management Review, 44(3), 694-698.
30. Wilson, H. J., Daugherty, P., & Bianzino, N. (2017). The jobs that artificial intelligence will
    create. MIT Sloan Management Review, 58(4), 14.