<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Artificial intelligence and corporate governance</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fouad DAIDAI</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Larbi TAMNINE</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Laboratory of Research and Studies in Management, Entrepreneurship and Finance, National School of Commerce and Management of Fez, Sidi Mohamed Ben Abdellah University Fes</institution>
          ,
          <country country="MA">Morocco</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <fpage>19</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The integration of Artificial Intelligence into businesses offers significant opportunities to improve efficiency, decision making and value creation. However, for companies to fully reap the benefits of Artificial Intelligence, it is crucial to put in place strong corporate governance that incorporates ethical principles and social concerns. Such corporate governance requires specific skills, including data management, responsible Artificial Intelligence system design, system security and understanding the ethical and social implications of Artificial Intelligence. Ethical principles in Artificial Intelligence, such as transparency, justice, nonmaleficence, accountability, freedom and autonomy, trust and dignity, are essential to ensure that Artificial Intelligence is used responsibly and ethically, and to foster trust and adoption of Artificial Intelligence in society. This paper aims to explore the issues of corporate governance in AI, the skills needed to integrate Artificial Intelligence into companies, and the ethical principles that need to be taken into account to ensure responsible and ethical use of Artificial Intelligence.</p>
      </abstract>
      <kwd-group>
        <kwd>Corporate governance</kwd>
        <kwd>Artificial intelligence</kwd>
        <kwd>Ethics</kwd>
        <kwd>Decision-making</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Fundamental tracks to integrate artificial intelligence into corporate strategies</title>
      <p>Artificial intelligence (AI) is an emerging technology that offers significant benefits to businesses
in all industries. AI can help companies improve operational efficiency, decision-making, and
profitability. However, integrating AI can also pose challenges, such as the need to invest in
specialized talent and skills and to ensure ethics and transparency in its use. In this section, we
explore the fundamental paths to integrating AI into business strategies and ensuring its effective
and responsible use.</p>
      <p>
        First, it is important to understand the benefits and challenges of integrating AI into business
strategies. AI can help businesses automate processes, optimize operations, predict market
trends, and improve customer experience [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, to fully utilize the benefits of AI,
companies must invest in the skills and talent needed to implement the technology. In addition,
companies must consider the risks associated with using AI, such as bias and loss of control. Next,
companies must analyze their needs and goals to determine how AI can help achieve those goals
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For example, a retail company could use AI to improve the accuracy of demand forecasts,
which can help optimize inventory levels and reduce costs. A financial services company could
use AI to automate compliance processes and improve fraud detection. Companies need to
identify areas where AI can add value and develop a strategy to integrate this technology in an
efficient and cost-effective manner [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Therefore, companies must invest in the talent and skills
needed to fully leverage the benefits of AI. Companies can recruit AI specialists and data scientists
or train their existing staff to use the technology. Key AI skills include data analysis, machine
learning, programming, and project management. Companies must also invest in the technology
infrastructure needed to use AI effectively [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In addition, it is critical that companies establish
appropriate policies and procedures to ensure that the use of AI is ethical and transparent.
Companies should be aware of the potential risks associated with AI use, such as bias and loss of
control, and develop policies to ensure that AI use is ethical and responsible. Companies can
develop codes of conduct for AI use.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Artificial intelligence and skills needed</title>
      <p>
        The first issue to address is the professional competencies of business leaders in AI. Business
leaders need to be able to understand the challenges of AI, the opportunities it can offer, but also
the risks and challenges associated with its use. It is also critical that executive committee and
board members understand the implications of AI for their companies and the opportunities and
risks that come with it. AI skills may vary by industry and by the role of business leaders.
Nevertheless, it is important for business leaders to have a general understanding of AI
technologies, their potential applications, and the ethical and social implications of their use. AI
skills can also include the ability to develop effective AI strategies, identify relevant use cases for
AI, build teams of AI specialists, and manage their professional development. This may require
additional or specialized training for board members [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. It is important to note that the
technology itself is only part of the issue; equally important are the data needed for AI and the
organization and management of the skills and qualifications of the teams in charge of AI
techniques. Companies must be able to effectively collect and process the data needed for AI to
be effective. AI algorithms learn from data, so the richer and more varied the data, the better the
predictions and decisions made by the AI. However, collecting and processing data is not an end
in itself. Companies must also be able to understand how to use data to solve specific problems
or improve business processes. Data must be interpreted and presented in a way that is useful to
business users, who can use it to make informed decisions. In addition, companies must be able
to attract and retain the talent needed to develop and manage AI techniques [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Executive
committees and boards of directors must also be aware of the potential risks associated with the
use of AI, such as data security and privacy. They must ensure that appropriate policies and
procedures are in place to minimize these risks.
      </p>
      <p>
        Keeping up with the evolving knowledge on an increasingly strategic topic such as AI can be
challenging for a governance body, especially if the skills needed exceed the qualifications of its
members. To this end, the company can take several steps to overcome this challenge, such as
engaging external AI experts who can provide advice and recommendations on decisions to be made [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ];
or the establishment of an AI expert committee that would be composed of members with the
necessary skills and expertise to assess the implications of AI on the company and provide
recommendations to the governance body. The latter can also stay informed about the latest
trends and advances in AI by following the news, reading specialized publications, and attending
conferences on the topic [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        In sum, it is important that the governance body recognizes the importance of AI to the
company and takes steps to ensure that members are sufficiently informed and educated about
the technology. By establishing partnerships, using external experts, and creating a committee of
experts, the governance body can be better equipped to make informed decisions about AI [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Indeed, in recent years, to cope with relentless technological innovation, more and more
companies have introduced new technology- or data-centric leadership positions, such as Chief
Digital Officer or Chief Data Officer (both commonly abbreviated CDO) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These executive positions are created to help
companies focus on technology innovation and effective data management, which have become
key components of modern business success. The Chief Digital Officer is responsible for the
company's digital strategy, including digital transformation, IT and communications
management, and creating new digital offerings for customers. The Chief Data Officer, on the
other hand, is responsible for the management and exploitation of the company's data. He or she
oversees the collection, storage, protection and analysis of data, and ensures that data is used
efficiently and ethically [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        From a regulatory perspective, the debate is ongoing globally. Proposed legislation, such as
the EU's AI regulation, has been introduced to address growing concerns about the risks
associated with AI use. In addition, ethical guidelines for the use of AI have been developed by
organizations such as the Organization for Economic Co-operation and Development (OECD) and
the Association for the Advancement of Artificial Intelligence (AAAI). This proposed legislation
and these guidelines emphasize the importance of ensuring that AI is used responsibly,
transparently and ethically. Therefore, it is critical that organizations develop a thorough
understanding of these issues in order to develop appropriate policies and procedures to regulate
the use of AI [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This may include data management practices, transparency practices, human
rights impact reviews, and other measures to minimize the risks associated with AI use.
Ultimately, AI regulation and ethical guidelines should be viewed as ongoing efforts to ensure that
the benefits of AI are harnessed responsibly and the associated risks are minimized.
Organizations must be prepared to adapt as regulations evolve and risks and challenges
associated with AI use emerge.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Artificial intelligence and ethical principles</title>
      <p>
        Ethical principles are critical to ensuring that data is used responsibly and fairly. With the
advent of AI and big data technologies, companies and organizations have more data than ever
before, allowing them to create powerful predictive models that can have significant impacts on
the users of these systems [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The use of these predictive models and AI raises significant ethical
concerns, including privacy, discrimination, and transparency [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. For example, predictive
models based on big data can be used to make critical decisions, such as granting loans, recruiting,
or rating individuals, which can have significant consequences for the individuals involved. It is
therefore crucial that decisions made by AI systems are fair, transparent and justifiable. To ensure
the ethical use of data and AI, it is important to adopt clear ethical principles that guide the use of
these technologies [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Ethical principles in AI promote transparency and accountability in the
development and use of AI, which can help build public trust in the technology. This can also help
companies avoid potential negative consequences of AI use, such as bias and discrimination. To
this end, companies need to be transparent about how data is collected, used and analyzed;
predictive models and AI-based decisions must be fair and unbiased; and companies must
respect users' privacy and protect their personal data by involving stakeholders
in the design and implementation of their AI systems [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. It is also important to note that ethical
principles in AI are not static and may evolve over time as technologies change and new ethical
issues emerge. In fact, the development of ethical principles in AI can help drive innovation and
encourage R&amp;D by providing a clear framework to guide technological developments. According
to Isaac [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], there are 7 main ethical principles related to the use of data that are presented in
the following table:
      </p>
      <sec id="sec-4-2">
        <title>Seven ethical principles for the use of data and AI</title>
        <table-wrap id="tab1">
          <table>
            <thead>
              <tr>
                <th>Principle</th>
                <th>Definition</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Transparency</td>
                <td>Refers to the ability to understand how AI systems work, how they make decisions, and how they are used.</td>
              </tr>
              <tr>
                <td>Justice</td>
                <td>Refers to objectivity and non-discrimination in automated decision making.</td>
              </tr>
              <tr>
                <td>Non-maleficence</td>
                <td>Refers to the need to ensure that AI systems are not designed or used to intentionally cause harm to individuals or groups of individuals.</td>
              </tr>
              <tr>
                <td>Accountability</td>
                <td>Refers to the obligation of AI developers, providers, and users to consider the consequences of their actions and to ensure that AI is used responsibly and ethically.</td>
              </tr>
              <tr>
                <td>Freedom and autonomy</td>
                <td>The ability of individuals to make informed decisions and control information about themselves when interacting with AI systems.</td>
              </tr>
              <tr>
                <td>Trust</td>
                <td>The need to ensure that AI systems are reliable, accurate and transparent. Users must have confidence in the results provided by AI systems and be able to understand how those results were produced.</td>
              </tr>
              <tr>
                <td>Dignity</td>
                <td>The need to respect human dignity in the design, development and use of AI systems.</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p>
          To ensure that ethical principles in AI are built into products and services that use
algorithms, they must be incorporated from the earliest stages of design and development.
Principles such as privacy by design, safety by design, and inclusion by design allow privacy,
safety, and inclusion to be built in early in the AI development process. This could involve
restricting data access to employees who need it for their work, and implementing
strong authentication controls, such as the use of complex passwords and biometric recognition
systems, to ensure that only authorized employees can access the data. In addition, to ensure data
confidentiality and integrity, the company could implement measures such as data
pseudonymization, which means that it can replace personal data with anonymous identifiers to
ensure that the data is not linked to a specific individual. Finally, the company can adopt best
security practices such as using cryptography to protect stored and transmitted data, and
implementing security protocols to prevent malicious attacks [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
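        <p>As a minimal illustration of the pseudonymization step described above, the sketch below replaces a direct identifier with a keyed hash (HMAC-SHA-256), so records can still be joined and aggregated without exposing the original identifier. The field names, record layout, and key handling are hypothetical; in practice the key would be stored in a secrets manager, not in code.</p>

```python
import hashlib
import hmac
import secrets

# Illustrative secret key; in a real system this would come from a vault,
# not be generated at import time.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a personal identifier with a stable, non-reversible token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical customer record: the direct identifier is replaced before
# the record is shared with analytics teams.
record = {"customer_id": "alice@example.com", "purchase_total": 129.90}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}

# The same input always yields the same token, so analytics still work,
# but the token cannot be reversed to recover the original identifier.
assert safe_record["customer_id"] == pseudonymize("alice@example.com")
assert safe_record["customer_id"] != record["customer_id"]
```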
        <p>
          Additionally, to assess the impact of AI systems on privacy, equality, and other ethical principles,
impact assessments can be conducted. These assessments provide an understanding of the
ethical and social implications of AI systems prior to deployment, and allow for steps to be taken
to minimize risks and maximize benefits for individuals and society as a whole. Impact
assessments can include privacy impact assessments, equality impact assessments, and social
impact assessments. They help identify potential risks and impacts of AI systems, identify
appropriate mitigation measures, and ensure that AI systems are used responsibly and ethically.
Ultimately, adopting clear ethical principles is essential to ensure that data and AI are used
responsibly and fairly. This will help build user trust and promote responsible and ethical use of
these technologies. An ethics committee should be formed to establish a range of principles
related to data and AI uses across different industries, business processes, and training programs
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Burström</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parida</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lahti</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wincent</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>AI-enabled business-model innovation and transformation in industrial ecosystems: A framework, model and outline for further research</article-title>
          .
          <source>Journal of Business Research</source>
          ,
          <volume>127</volume>
          ,
          <fpage>85</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Sjödin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parida</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palmié</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wincent</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops</article-title>
          .
          <source>Journal of Business Research</source>
          ,
          <volume>134</volume>
          ,
          <fpage>574</fpage>
          -
          <lpage>587</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>P. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laurell</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ots</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sandström</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Digital innovation and the effects of artificial intelligence on firms' research and development-Automation or augmentation, exploration or exploitation?</article-title>
          .
          <source>Technological Forecasting and Social Change</source>
          ,
          <volume>179</volume>
          ,
          <fpage>121636</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Lipai</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiqiang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mengyuan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Corporate governance reform in the era of artificial intelligence: research overview and prospects based on knowledge graph</article-title>
          .
          <source>Annals of Operations Research</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Wiesmüller</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Forms and Interactions of Relational AI Governance</article-title>
          (Doctoral dissertation, Zeppelin Universität).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Tokmakov</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Artificial Intelligence in Corporate Governance</article-title>
          .
          <source>In Digital Economy and the New Labor Market: Jobs, Competences and Innovative HR Technologies</source>
          (pp.
          <fpage>667</fpage>
          -
          <lpage>674</lpage>
          ). Springer International Publishing.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Winfield</surname>
            ,
            <given-names>A. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Evers</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Machine ethics: The design and governance of ethical AI and autonomous systems [scanning the issue]</article-title>
          .
          <source>Proceedings of the IEEE</source>
          ,
          <volume>107</volume>
          (
          <issue>3</issue>
          ),
          <fpage>509</fpage>
          -
          <lpage>517</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>M. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bullock</surname>
            ,
            <given-names>J. B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lecy</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Artificial discretion as a tool of governance: a framework for understanding the impact of artificial intelligence on public administration</article-title>
          .
          <source>Perspectives on Public Management and Governance</source>
          ,
          <volume>2</volume>
          (
          <issue>4</issue>
          ),
          <fpage>301</fpage>
          -
          <lpage>313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Winfield</surname>
            ,
            <given-names>A. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jirotka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Ethical governance is essential to building trust in robotics and artificial intelligence systems</article-title>
          .
          <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
          ,
          <volume>376</volume>
          (
          <issue>2133</issue>
          ),
          <fpage>20180085</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Boddington</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Towards a code of ethics for artificial intelligence</article-title>
          (pp.
          <fpage>27</fpage>
          -
          <lpage>37</lpage>
          ). Cham: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>V. C.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Ethics of artificial intelligence and robotics</article-title>
          .
          <source>The Stanford Encyclopedia of Philosophy.</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Isaac</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Stratégie &amp; Intelligence artificielle</article-title>
          .
          <source>Annales des Mines - Enjeux Numériques</source>
          , (
          <volume>12</volume>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Kar</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Choudhary</surname>
            ,
            <given-names>S. K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>V. K.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>How can artificial intelligence impact sustainability: A systematic literature review</article-title>
          .
          <source>Journal of Cleaner Production</source>
          ,
          <volume>134120</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Van de Poel</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Embedding values in artificial intelligence (AI) systems</article-title>
          .
          <source>Minds and Machines</source>
          ,
          <volume>30</volume>
          (
          <issue>3</issue>
          ),
          <fpage>385</fpage>
          -
          <lpage>409</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>