<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Artificial Intelligence in Higher Education: Ethical Concerns for Students with Disabilities</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oriane Pierrès</string-name>
          <email>oriane.pierres@zhaw.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alireza Darvishy</string-name>
          <email>alireza.darvishy@zhaw.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Christen</string-name>
          <email>markus.christen@ibme.uzh.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Zurich</institution>
          ,
          <addr-line>IBME, Winterthurerstrasse 30, 8006 Zurich</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Zurich University of Applied Sciences</institution>
          ,
          <addr-line>Steinberggasse 13, 8400 Winterthur</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Many literature reviews on artificial intelligence (AI) in higher education or in education in general have focused on the different applications of AI in this domain, the AI techniques used, and the benefits/risks of the use of AI. One of the greatest potentials of AI is to personalise higher education to the needs of students and offer timely feedback. This could benefit students with disabilities tremendously if their needs are also considered in the development of new AI educational technologies (Edtech). However, current reviews fail to address the perspective of students with disabilities. This perspective is essential because AI is likely to bring several ethical concerns for people with disabilities. For instance, AI can treat people with disabilities as outliers in the data and end up discriminating against them. For that reason, two questions were raised: To what extent are ethical concerns relevant for students with disabilities considered in articles presenting AI applications assessing students in higher education? What are the potential risks of using AI that assess students with disabilities in higher education? This extended abstract presents summarised results of a scoping review that will be published in a journal. The goal of this article is to start a discussion within the AI ethics community to raise awareness about the issues that students with disabilities may face and to collaboratively explore solutions. Results suggest that there is a lack of ethical reflection on AI technologies and an absence of discussion and inclusion of people with disabilities. Moreover, risks associated with utilising AI for students with disabilities relate to the choice of data, reliance on simplistic classification, face monitoring, and the low involvement of students.</p>
      </abstract>
      <kwd-group>
        <kwd>AI Ethics</kwd>
        <kwd>AI Edtech</kwd>
        <kwd>Accessibility</kwd>
        <kwd>Students with Disabilities</kwd>
        <kwd>Higher Education</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Many researchers have systematically reviewed the literature on the use of artificial intelligence (AI)
in higher education or education in general [1]–[9]. According to Bartneck and others who discussed
three prominent definitions of AI, “AI involves the study, design and building of intelligent agents that
can achieve goals” (p.8) [10]. Several reviews have identified different applications of AI in education
[1]–[7]. Existing reviews also identify the benefits and risks of using AI in higher education. On the
one hand, Ouyang et al. [5] suggest that AI enables administrative staff and lecturers to make informed
decisions based on predictions of student performance, learning status, or satisfaction; that it provides
students with learning recommendations; and that it improves academic performance as well as online
engagement and participation. On the other hand, researchers point out several risks and ethical
concerns in using AI. Authors stress the lack of a pedagogical perspective, as research mostly focuses on
the technical aspects of the development of AI applications [2], [5], [6], [11]. Others also warn against
biases in AI due to a lack of data diversity [5], [6], [8], [12]. There are also concerns about the privacy
of students and security issues [6], [7], [9], [12].</p>
      <p>With AI, higher education is expected to be more flexible and personalised, and thus more accessible
to students with disabilities. In 2006, the signatories of the United Nations Convention on the Rights
of Persons with Disabilities committed to guaranteeing that people with disabilities
have access to tertiary education [13]. From a socio-medical perspective, a disability is the result of the
“interaction between individuals with a health condition […] and personal and environmental factors”
[14]. There is an increasing recognition that there is no one-size-fits-all approach to education.
According to the Universal Design for Learning approach, every learner has a unique way to understand
information, to be motivated, and to express their knowledge [15]. The flexibility and personalisation
that AI provides is precisely what could help people with disabilities study at the tertiary level. For
instance, students with impairments may benefit from a flexible course schedule with differentiated
learning pace because attending courses requires a lot of effort and concentration [16]. However, this
can only happen if the needs of people with disabilities are considered from the beginning. Heiman and
others [17] explain that the accessibility of technology should not be an afterthought: retrofitting
accessibility makes adaptations more expensive, and it takes longer for people with disabilities to get
the help they need.</p>
      <p>However, existing literature reviews fail to make a deeper analysis on the impact AI could have for
students with disabilities. Very few papers mention people with impairments in the reviews [2], [9].
Ensuring algorithmic fairness for people with disabilities is challenging because disabilities are diverse,
multiple, and diverse, and are as a result often treated as outliers in the data used to train algorithms
[18]. AI fairness means that the AI does not discriminate negatively or is not biased against a certain
group or individual [18]. Guo et al. [19] identified several risks for the fair use of AI for people with
disabilities. According to Morris [20], there are seven ethical concerns for AI and accessibility:
inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability. AI Edtech
raises issues of inclusivity as it may not work properly for people with disabilities and may thus exclude
them from using a service or product [20]. For instance, text correction is less likely to work for people
with dyslexia [19]. Despite specific concerns regarding the use of AI and disability, there is a lack of
research on AI fairness that considers the perspective of people with disabilities [18], [21].</p>
      <p>To address this research gap and ethical concerns, the authors followed the methods for a PRISMA
scoping review [22] and looked for all literature presenting AI applications in higher education that
assess students to form or inform a decision taken by lecturers or administrative staff at the university.
The following two research questions were raised:
• Research Question 1: To what extent are ethical concerns considered in articles presenting AI
applications assessing students in higher education?
• Research Question 2: What are the potential risks of using AI that assesses students with
disabilities in higher education?</p>
    </sec>
    <sec id="sec-2">
      <title>2. Results</title>
      <p>In the review, 57 articles presenting an AI-based system that had a clear application in higher
education and analysed students to inform or take decisions were selected and analysed.</p>
      <sec id="sec-2-1">
        <title>2.1. To what extent are ethical concerns considered in articles presenting AI applications assessing students in higher education?</title>
        <p>More than half of the articles did not address any ethical aspects. When ethical considerations where
mentioned, authors primarily focused on privacy concerns, biases, and transparency. Still, these
mentions were typically brief, indicating that the authors were aware of the issues and had considered
them during the design of the AI Edtech. Additionally, only three out of 57 articles mentioned students
with disabilities. Therefore, it is critical to conduct further investigations into the impact of AI usage in
higher education for students with disabilities. Failing to do so could lead to a missed opportunities to
provide greater accessibility in tertiary education and potentially to increased discrimination towards
this group.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. What are the potential risks of using AI that assesses students with disabilities in higher education?</title>
        <p>To examine the risks of using AI Edtech that assesses students with disabilities, the following
information were extracted from the articles: 1) the decision type the application informs or takes, 2)
the decision-maker, 3) the input data, and 4) how the application was evaluated. Decision type is
important because not all decisions are equally critical; some will impact individuals’ life significantly
whereas others will not [23], [24]. Regarding decision-maker, the degree of user control influences how
algorithmic bias affects users, as they can be empowered to reject or refuse the decision of the AI [24].
The type of input data was extracted because data are often a source of AI bias [21]. Finally, how
applications are evaluated matter because of the risk of evaluation bias that arises from data that is not
representative of the population [25], which is a particular concern for people with disabilities who are
often underrepresented in datasets [20], [26].</p>
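        <p>The evaluation-bias concern can be made concrete with a minimal sketch (all group sizes and
numbers below are invented for illustration and do not come from the reviewed articles): an aggregate
accuracy score computed over a dataset in which one group is underrepresented can look acceptable
even though the model fails badly for that group.</p>

```python
def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# 90 predictions for the majority group, 10 for a small subgroup
# (e.g. students with disabilities, often underrepresented in datasets).
majority = [(1, 1)] * 85 + [(1, 0)] * 5   # 85 of 90 correct
subgroup = [(0, 1)] * 8 + [(1, 1)] * 2    # only 2 of 10 correct

print(f"overall accuracy:  {accuracy(majority + subgroup):.2f}")  # 0.87
print(f"subgroup accuracy: {accuracy(subgroup):.2f}")             # 0.20
```

        <p>Reporting only the first number would hide the failure that the second reveals, which is why
disaggregated evaluation matters for underrepresented groups.</p>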
        <p>In total, eight discrimination risks were identified and were fully described in the journal article
submitted to the International Journal of Artificial Intelligence in Education. For this extended abstract,
the discrimination risks have been summarized into four categories.</p>
        <p>First, the choice of data is critical because it can correlate to disability status and can lead to issues
of bias and inclusivity. For instance, the use of log data was quite common in the literature to predict
at-risk students or to build intelligent tutoring systems. However, the level of accessibility of platforms
can affect log data of disabled students [27].</p>
        <p>Second, at-risk predictions often relied on simplistic classifications that did not provide further
information on why a student was expected to fail in a course. Students with disabilities may require
specific interventions that cannot be inferred from such simple systems. For instance, a student may be
predicted to fail because the prediction is based on log data from an inaccessible platform.
In this case, the intervention would not be about how the student learns, but about making the platform
accessible. This finding undermines the claim that AI personalises higher education.</p>
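        <p>The difference between a bare at-risk flag and an actionable one can be sketched as follows
(the feature names and thresholds are invented for illustration; none of the reviewed systems is claimed
to work this way):</p>

```python
# Hypothetical sketch: a bare "at-risk" flag gives staff nothing to act on,
# whereas returning the contributing signals can reveal, e.g., that low
# platform activity stems from inaccessibility rather than disengagement.
THRESHOLDS = {"logins_per_week": 2, "forum_posts": 1, "quiz_average": 50}

def at_risk_flag(student):
    """Simplistic classifier: a single boolean, no explanation."""
    return any(student[k] < v for k, v in THRESHOLDS.items())

def at_risk_report(student):
    """Same rule, but also names the signals that triggered the flag."""
    reasons = [k for k, v in THRESHOLDS.items() if student[k] < v]
    return {"at_risk": bool(reasons), "reasons": reasons}

student = {"logins_per_week": 1, "forum_posts": 3, "quiz_average": 72}
print(at_risk_flag(student))    # True -- but why?
print(at_risk_report(student))  # {'at_risk': True, 'reasons': ['logins_per_week']}
```

        <p>Exposing the triggering signals lets staff check whether a flag reflects how the student learns
or, as in the example, merely low activity on a platform that may be inaccessible to them.</p>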
        <p>Third, a few articles employed facial recognition to monitor students during exams or their attention
during lecture. While monitoring faces raise general privacy concerns, there are additional concerns for
students with disabilities. Students with anxiety may feel under greater pressure when subjected to
monitoring. Moreover, facial recognition may not work with students with unusual facial features or
those wearing accessories such as sunglasses [19].</p>
        <p>Finally, the presented AI-based applications rarely involved students and merely informed them of
decisions. Empowering students with decision-making power and utilising AI as an assistive tool can
be highly beneficial for the inclusion of students with disabilities.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusion</title>
      <p>This extended abstract summarises the findings from a scoping review investigating the potential
risks of using AI in higher education for students with disabilities. In essence, the scoping review
emphasises the need for increased awareness of the ethical risks associated with employing AI in higher
education for students with disabilities. In particular, the authors encourage the research community to
report their efforts to mitigate ethical concerns and to actively involve students with disabilities in the
design and evaluation stages of AI systems.</p>
    </sec>
    <sec id="sec-4">
      <title>4. References</title>
      <p>systematic review of empirical research from 2011 to 2020’, Educ. Inf. Technol., 2022, doi:
10.1007/s10639-022-10925-9.
[6] O. Zawacki-Richter, V. I. Marín, M. Bond, and F. Gouverneur, ‘Systematic review of research
on artificial intelligence applications in higher education – where are the educators?’, Int. J.</p>
      <p>Educ. Technol. High. Educ., vol. 16, no. 1, 2019, doi: 10.1186/s41239-019-0171-0.
[7] X. Zhai et al., ‘A Review of Artificial Intelligence (AI) in Education from 2010 to 2020’,</p>
      <p>COMPLEXITY, vol. 2021, Apr. 2021, doi: 10.1155/2021/8812542.
[8] E. Okewu, P. Adewole, S. Misra, R. Maskeliunas, and R. Damasevicius, ‘Artificial Neural
Networks for Educational Data Mining in Higher Education: A Systematic Literature Review’,
Appl. Artif. Intell., vol. 35, no. 13, pp. 983–1021, Nov. 2021, doi:
10.1080/08839514.2021.1922847.
[9] A. Nigam, R. Pasricha, T. Singh, and P. Churi, ‘A Systematic Review on AI-based Proctoring
Systems: Past, Present and Future’, Educ. Inf. Technol., vol. 26, no. 5, pp. 6421–6445, Sep.
2021, doi: 10.1007/s10639-021-10597-x.
[10] C. Bartneck, C. Lütge, A. Wagner, and S. Welsh, An Introduction to Ethics in Robotics and AI.
in SpringerBriefs in Ethics. Cham: Springer International Publishing, 2021. doi:
10.1007/978-3030-51110-4.
[11] Y. Cui, F. Chen, A. Shiri, and Y. Fan, ‘Predictive analytic models of student success in higher
education: A review of methodology’, Inf. Learn. Sci., vol. 120, no. 3–4, pp. 208–227, 2019,
doi: 10.1108/ILS-10-2018-0104.
[12] M. Hooda, C. Rana, O. Dahiya, A. Rizwan, and M. S. Hossain, ‘Artificial Intelligence for
Assessment and Feedback to Enhance Student Success in Higher Education’, Math. Probl. Eng.,
vol. 2022, pp. 1–19, May 2022, doi: 10.1155/2022/5215722.
[13] United Nations, ‘Conventions on the Rights of Persons with Disabilities pledged, Article 24 –</p>
      <p>Education’. Dec. 13, 2006.
[14] World Health Organisation, ‘Disability and Health’, Nov. 04, 2021.
https://www.who.int/newsroom/fact-sheets/detail/disability-andhealth#:~:text=Disability%20refers%20to%20the%20interaction,%2C%20and%20limited%20s
ocial%20supports).
[15] CAST, ‘Universal Design for Learning Guidelines version 2.2’, 2018.</p>
      <p>http://udlguidelines.cast.org/
[16] J. M. García-González, S. Gutiérrez Gómez-Calcerrada, E. Solera Hernández, and S.
RíosAguilar, ‘Barriers in higher education: perceptions and discourse analysis of students with
disabilities in Spain’, Disabil. Soc., vol. 36, no. 4, pp. 579–595, Apr. 2021, doi:
10.1080/09687599.2020.1749565.
[17] T. Heiman, T. Coughlan, H. Rangin, and M. Deimann, ‘New Designs or New Practices?
Multiple Perspectives on the ICT and Accessibility Conundrum’, in Improving Accessible
Digital Practices in Higher Education: Challenges and New Practices for Inclusion, J. Seale,
Ed., Cham: Springer International Publishing, 2020, pp. 99–115. [Online]. Available:
https://doi.org/10.1007/978-3-030-37125-8_5
[18] S. Trewin, ‘AI fairness for people with disabilities: Point of view’, ArXiv Prepr.</p>
      <p>ArXiv181110670, 2018.
[19] A. Guo, E. Kamar, J. W. Vaughan, H. Wallach, and M. R. Morris, ‘Toward fairness in AI for
people with disabilities SBG@ a research roadmap’, ACM SIGACCESS Access. Comput., no.
125, pp. 1–1, 2020.
[20] M. R. Morris, ‘AI and Accessibility’, Commun ACM, vol. 63, no. 6, pp. 35–37, May 2020, doi:
10.1145/3356727.
[21] R. S. Baker and A. Hawn, ‘Algorithmic Bias in Education’, Int. J. Artif. Intell. Educ., Nov.</p>
      <p>2021, doi: 10.1007/s40593-021-00285-9.
[22] A. C. Tricco et al., ‘PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and
Explanation’, Ann. Intern. Med., vol. 169, no. 7, pp. 467–473, Oct. 2018, doi:
10.7326/M180850.
[23] M. Brkan, ‘Do algorithms rule the world? Algorithmic decision-making and data protection in
the framework of the GDPR and beyond’, Int. J. Law Inf. Technol., vol. 27, no. 2, pp. 91–121,
Jun. 2019, doi: 10.1093/ijlit/eay017.
[24] N. Kordzadeh and M. Ghasemaghaei, ‘Algorithmic bias: review, synthesis, and future research
directions’, Eur. J. Inf. Syst., pp. 1–22, Jun. 2021, doi: 10.1080/0960085X.2021.1927212.
[25] H. Suresh and J. Guttag, ‘A Framework for Understanding Sources of Harm throughout the
Machine Learning Life Cycle’, in Equity and Access in Algorithms, Mechanisms, and
Optimization, -- NY USA: ACM, Oct. 2021, pp. 1–9. doi: 10.1145/3465416.3483305.
[26] W. Chen, ‘Learning Analytics for inclusive higher education’, presented at the Proceedings of
the 28th International Conference on Computers in Education. Asia-Pacific Society for
Computers in Education, 2020.
[27] C. Baek and S. J. Aguilar, ‘Past, present, and future directions of learning analytics research for
students with disabilities’, J. Res. Technol. Educ., pp. 1–16, May 2022, doi:
10.1080/15391523.2022.2067796.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>