<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How are Learning Analytics Considering the Societal Values of Fairness, Accountability, Transparency and Human Well-being? -- A Literature Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eyad Hakami</string-name>
          <email>eyad.hakami01@estudiant.upf.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davinia Hernández-Leo</string-name>
          <email>davinia.hernandez-leo@upf.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Pompeu Fabra University</institution>
          ,
          <addr-line>Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <fpage>121</fpage>
      <lpage>141</lpage>
      <abstract>
<p>The scientific community is currently engaged in global efforts towards a movement that promotes positive human values in the ways we formulate and apply Artificial Intelligence (AI) solutions. As the use of intelligent algorithms and analytics becomes more involved in how decisions are made in public and private life, the societal values of Fairness, Accountability and Transparency (FAT) and the multidimensional value of human Well-being are being discussed in the context of addressing the potential negative and positive impacts of AI. This research paper reviews these four values and their implications in algorithms and investigates their empirical presence in the interdisciplinary field of Learning Analytics (LA). We present and highlight the results of a literature review conducted across all editions of the Learning Analytics &amp; Knowledge (LAK) ACM conference proceedings. The findings provide different insights into how these societal and human values are being considered in LA research, tools, applications and ethical frameworks.</p>
      </abstract>
      <kwd-group>
<kwd>Learning Analytics</kwd>
<kwd>Fairness</kwd>
<kwd>Accountability</kwd>
<kwd>Transparency</kwd>
<kwd>Well-being</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>
The interdisciplinary field of Learning Analytics (LA) borrows methods from Artificial
Intelligence (AI) and goes together with several related areas of research in Educational
Technology to understand and enhance learning. Certainly, Education is one domain
where AI is having an increasingly relevant role and impact. According to the latest
Innovating Pedagogy report [36], “AI-powered learning systems are increasingly being
deployed in schools, colleges and universities, as well as in corporate training around
the world”. The emergence of the LA field has emphasized this trend and raised
discussion about the possible positive and negative futures that can be envisaged
considering the AI potential [27].</p>
      <p>Although AI systems can bring benefits, they also present inherent risks, such as
biases, reduction of human agency due to lack of transparency, decrease of
accountability, etc. Therefore, societal initiatives (e.g. policy makers) and the AI
scientific community are currently engaged in global efforts towards a movement that
promotes positive human values in the ways we formulate and apply AI solutions. As
the use of intelligent algorithms and analytics is becoming more involved in how
decisions are made in public and private life, societal values of Fairness, Accountability
and Transparency (FAT) are being discussed in AI research to address potential
negative and positive impacts of AI. In addition, there are demands and efforts for
considering AI impacts on all aspects of human wellbeing. The IEEE Global Initiative
on Ethics of Autonomous and Intelligent Systems [71] recognizes in a recent report that
prioritizing ethical and responsible AI has become a widespread goal for society, and
the design of intelligent systems should directly address important issues of
transparency, accountability, algorithmic bias, and value systems.</p>
      <p>This research paper reviews these four values and their implications in algorithms
and investigates their presence in the field of Learning Analytics (LA). First, we
introduce the main concepts this paper revolves around, which are Learning Analytics,
and the four values of FAT and Wellbeing. Then we analyze and highlight results of a
literature review that was conducted across all editions of the Learning Analytics &amp;
Knowledge (LAK) ACM conference proceedings. The findings provide different
insights on how these societal and human values are being considered in various LA
tools, applications and ethical frameworks.
</p>
    </sec>
    <sec id="sec-2">
      <title>Research context</title>
      <p>The research context of this paper is framed around a) data involvements in Education
in the form of Learning Analytics that include, but are not limited to, AI methods and
techniques, b) the problem of algorithmic bias as an active example of potential harmful
impacts of using advanced data-driven algorithms, followed by societal concepts of
fairness, accountability, and transparency, from the perspective of their relevance to
preventing bias and ensuring positive AI impacts, and c) the notion of wellbeing as a
multidimensional value, viewed from both perspectives of its theoretical background
and the global efforts to promote positive wellbeing impacts from intelligent or
autonomous systems (A/IS).
</p>
      <sec id="sec-2-1">
        <title>Data in Education</title>
        <p>As people and devices are increasingly connected online, society is generating digital
data traces at an extraordinary rate [6]. The term “Big Data” is used to reflect that a
quantitative shift of this magnitude is in fact a qualitative shift demanding new ways of
thinking, and new kinds of human and technical infrastructure [74]. Like many other
sectors, Education has been affected by what is commonly known as the data revolution.
Collecting reliable performance data for the purpose of tracking learning progress is
being considered an essential feature for improved educational systems.
Learning Analytics. Big and small data approaches are present in Education in the
form of Learning Analytics (LA). Learning Analytics are the processes of collection,
measurement, analysis and reporting of learners’ data for the purpose of understanding
and optimizing learning and the environment in which it occurs [42]. By merging data
techniques and analytics into learning technologies, data-driven tools and algorithms
(e.g. analytics dashboards, recommender systems, intelligent tutoring systems (ITS), etc.)
are being designed and developed for understanding and enhancing learning. Arguably,
the concerns of LA applications are driven not only by finding ways to enhance
learning, but also by validating the complex processes used in this direction and
evaluating their wider impacts.
</p>
      </sec>
      <sec id="sec-2-2">
        <title>Bias in Data Analytics</title>
        <p>In the case of data collection and analysis, bias is always a major threat. To be biased
means to be prejudiced for or against individuals or groups in ways considered unfair.
Bias in data analytics can occur because the data collected are biased, or the humans
who collected them are biased. The way people collect data can have significant
influence on results that they obtain by analyzing the data [51]. Whereas cognitive
socially-driven bias is an example of the human bias that can affect processes of
collecting and analyzing data, the matter of data selection and generalizability is a
typical example of how a data set can be biased. In addition, when software and AI
methods are involved in data analytics, they may reproduce different forms of bias and
impact a large scale of stakeholders: “algorithmic decision procedures can reproduce
existing patterns of discrimination, inherit the prejudice of prior decision makers, or
simply reflect the widespread biases that persist in society” [12].</p>
        <p>Algorithmic Bias. Algorithms are widely defined as sequences of problem-solving
operations conducted based on sets of rules and instructions to lead to predictable or
desirable outcomes. The term algorithm in the context of this paper refers to the
advanced computational algorithms that have capabilities from AI and machine
learning, allowing them to autonomously make decisions based on statistical models or
decision rules [39]. Even by this meaning, the limits of the term algorithm are
determined by social engagements rather than by technological or material constraints
[21]. Algorithmic bias can occur when algorithms reflect the implicit values of people
who are involved in training the algorithm. Ways that people may be affected by
algorithmic bias include being consciously or unconsciously subjected to forms of
mistreatment (e.g. discrimination, unfairness), and making different types of decisions
depending on biased algorithmic outcomes.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Fairness, Accountability and Transparency (FAT)</title>
<p>As the use of algorithms and analytics is increasing and becoming more involved in
multiple decision-making processes, social topics such as fairness, accountability, and
transparency (FAT) are receiving more attention in research from the perspective of
their relevance to preventing bias, and ensuring more ethical algorithmic practices.
Beyond issues of data agency in the deployment of algorithms and analytics, new
questions have started to arise in the direction of shaping the ethical framework of
decision-making algorithms. The ethical concerns these questions discuss go beyond the actual
work of algorithms, and mostly focus on the design and development phases of training an
algorithm: How can fair algorithms be designed and developed? [65], how can we
develop algorithms that are more transparent and accountable? [39], and how can we
produce machine-learning algorithms that autonomously avoid discriminating against
users and automatically provide transparency? [14].</p>
<p>Algorithmic Fairness. The Oxford dictionary defines fairness as the “impartial and just
treatment or behavior without favoritism or discrimination”. Just as bias is, in a sense, a
lack of fairness and an excess of discrimination, fairness can be understood as
the lack of bias. Algorithmic fairness typically means that algorithmic decisions should
not create discriminatory scenarios, but it is still a complicated topic because the
definition of fairness is largely contextual and subjective [77]. With that in mind, some
scholars and activists have been presenting a multitude of technical definitions and
solutions to substantially prevent algorithmic bias and maximize fairness and
transparency.</p>
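Such technical definitions are typically operationalized as measurable criteria over a model's decisions. As a purely hypothetical illustration (demographic parity is a widely cited criterion in the fairness literature, not one drawn from the reviewed LAK papers), a criterion of this kind can compare the rate of positive algorithmic decisions across groups of subjects:

```python
# Illustrative sketch (not from the reviewed papers): demographic parity,
# one common technical definition of algorithmic fairness. It asks that
# the rate of positive decisions be similar across groups of subjects
# (e.g. demographic groups of learners).

def positive_rate(decisions):
    """Share of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-decision rates between two groups;
    a gap of 0 means the criterion is perfectly satisfied."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Example: a decision of 1 could mean "recommended for extra support".
group_a = [1, 1, 0, 1, 0]  # 60% positive decisions
group_b = [1, 0, 0, 0, 1]  # 40% positive decisions
print(round(demographic_parity_gap(group_a, group_b), 2))  # 0.2
```

A small gap does not by itself establish fairness, which, as noted above, remains largely contextual; such criteria only make one facet of it measurable.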
        <p>Algorithmic Transparency. Transparency is generally considered a means to see the
truth and motives behind actions [4]. In data-driven models and algorithms,
transparency is understood as openness and communication of both the data being
analyzed and the mechanisms underlying the models [40]. Some researchers considered
algorithmic transparency as a way to prevent discrimination, assuming that when
people understand how a system works, they are more likely to use the system properly
and trust the designers and developers [39]. Another applicable perspective of
transparency in algorithms is about its ability to provide reasons for an autonomous
decision (e.g. demonstrating reasons behind selections made by a recommender
system). This view proposes that transparency in algorithms follows the sequence of
logic: observation produces insights that create the knowledge required to govern and
hold systems accountable [3]. Yet, full transparency can be significantly harmful.
Therefore, transparency is just one approach toward the ethics and accountability of
algorithms [20].</p>
        <p>Algorithmic Accountability. Accountability refers to processes by which actors
provide reasons to stakeholders for their actions and the actions of their organizations
[63]. While people are responsible for justifying their own actions, algorithmic
accountability concerns are driven by drawing the circle of responsibility for algorithmic
decisions. A critical question in defining algorithmic accountability is: who is responsible
for actions and decisions of an algorithm created by humans and able to make decisions
without explicit human intervention? One answer suggests that accountability
of algorithmic decisions must be derivable from the methods and data used by the
algorithm in order to generate the decision [16]. Thus, accountability in algorithms and
their application begins with the designers and developers of the system that relies on
them [15]. Subsequently, questions that are more specific might be asked in order to
hold algorithms accountable: What are the consequences of using an algorithm for
individuals and societies? How influential are these consequences, and how many
people may be affected by them? To what extent are they aware of the algorithmic
mechanism that decides for them and drives their decisions and opportunities? What
are the possibilities for algorithmic bias and discrimination to occur and lead
to negative impacts on the public? How can this be avoided from the early phases of
designing and developing an algorithm? How can it be fixed if it happens during the
implementation of the algorithm? What are the strategies of optimization and the
techniques of intervention?
</p>
      </sec>
      <sec id="sec-2-4">
        <title>Well-being</title>
        <p>For the purposes of aligning ethical considerations to intelligent systems’ design, the
term “well-being” refers to an evaluation of the general quality of life of an individual,
and encompasses the full spectrum of personal, social, and environmental factors that
enhance human life and on which human life depends [71]. Therefore, human wellbeing
should not be perceived as a value of one dimension, and evaluations of wellbeing and
the impacts of A/IS on wellbeing domains must be done with a consideration that
human wellbeing is inseparably linked to the wellbeing of society, economies, and
ecosystems.</p>
<p>Measuring Well-being. Wellbeing can be reliably measured [48, 71]. Measuring
wellbeing has become a target for several national and international institutions for the
purpose of better understanding whether, where and how people’s lives are getting better
(e.g. European Social Survey [24], OECD Better Life Index [48]). Subjective and
objective indicators are being used by such institutions to measure wellbeing of
individuals and societies. While subjective indicators are used to collect data about how
people perceive the state of their wellbeing, objective indicators are used to gather
observable data to measure wellbeing (e.g. incomes, graduation rates, etc.).</p>
<p>A question that has been recently asked is: what are the potential impacts, positive
and negative, on the various wellbeing dimensions, which include but are not limited to:
feelings, community, culture, education, economy, environment, human settlement,
health, government, psychological wellbeing, and satisfaction with life and work [34].
Value Systems. Whatever their level of autonomy and their capacity to learn and make
decisions, intelligent systems are required to incorporate societal and moral values into
their technological developments at all phases of creating the system: analysis, design,
construction, implementation and evaluation [17]. When creators of AI systems are not
aware that indicators of well-being, including traditional metrics and all other personal
and social indicators that improve quality of life, can provide guidance for their work,
they also miss innovation that can boost well-being and societal value. A representative
illustration of this concept is autonomous vehicles. The discussion is commonly
centered on how they may save lives, but less is argued about their potential to reduce
greenhouse gas emissions or to increase work-life balance or the quality of time. In
education, for example, technology-enhanced learning implies that the presence of
information and communication technologies in education has to be in a framework
distributed for educational value creation at all levels. If we only use metrics of learning
performance when designing and developing educational tools and systems, we may
lose other relevant well-being facets such as effects on socio-emotional aspects,
self-regulation, workload of teachers and learners, the inclusion dimension, etc.
</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>LAK Literature Review</title>
      <p>
In this literature review, we investigated the empirical presence of the four values of FAT
and Wellbeing in LA research. The search was conducted across all ten editions of
Learning Analytics &amp; Knowledge (LAK) conference
        <xref ref-type="bibr" rid="ref11 ref3">proceedings from 2011</xref>
        to 2020.
      </p>
      <sec id="sec-3-1">
        <title>Method</title>
        <p>This review is limited to LAK conference proceedings, as they, to a certain extent,
reflect the work and results related to the LA community. The search aimed to answer the
following questions:</p>
<p>To what extent are the concepts of FAT and Well-being present in LAK
papers?</p>
<p>How do the LAK papers present and address these concepts?</p>
        <p>A conventional search on the full texts of all LAK companion proceedings (from
LAK11 to LAK20) was conducted by using the following keywords: fairness,
accountab*, transparen*, and wellbeing/ well-being. The textual search covered every
paper published in LAK proceedings according to tables of contents in ACM digital
library. Since these conceptual keywords are relatively new to the field of LA,
everything related to the topic was read, and judgments were made based on textual
analysis aimed at identifying the context of each keyword.
</p>
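For illustration, the wildcard keyword matching described above can be sketched as a small text-matching routine. This is a hypothetical reconstruction; the review itself relied on the ACM digital library's full-text search followed by manual reading.

```python
import re

# Hypothetical sketch of the review's keyword matching: the wildcard
# patterns fairness, accountab*, transparen* and wellbeing/well-being,
# applied case-insensitively to a paper's full text.
KEYWORD_PATTERNS = [
    re.compile(r"\bfairness\b", re.IGNORECASE),
    re.compile(r"\baccountab\w*", re.IGNORECASE),
    re.compile(r"\btransparen\w*", re.IGNORECASE),
    re.compile(r"\bwell[- ]?being\b", re.IGNORECASE),
]

def matched_keywords(full_text):
    """Return the patterns that occur at least once in the paper's text."""
    return [p.pattern for p in KEYWORD_PATTERNS if p.search(full_text)]

# A paper matching any pattern would then be read in full and the context
# of each keyword judged, as described in the method above.
sample = "We discuss transparency and learner well-being in dashboards."
print(matched_keywords(sample))
```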
      </sec>
      <sec id="sec-3-2">
        <title>Quantitative Results</title>
        <p>A total of 49 papers include one or more of the keywords used in the search. As shown
in Table 1, there is a modest increase in the number of papers that tackle the four
concepts across the years (from 2-5 in LAK11-15 to 7 in LAK16-20). The table details
the evolution across years in the use of each concept in LAK papers.
In total, over 75% of the papers (37 out of 49) mention the concept of “transparency”.
22% and 18% of the papers include the terms “accountability” and “fairness”,
respectively, and only 7 papers (14%) mention the term “well-being”.
</p>
<p>[Table 1. Number of LAK papers per edition (LAK11-LAK20) mentioning each concept. Column totals across all editions: 49 papers overall; Transparency 37, Accountability 11, Fairness 9.]</p>
<p>In their endeavor to map the ethical and legal bases informing LA practices, [54] cited the
notions of transparency, accountability and fairness among other approaches aiming to
solve complex data-centered ethical problems. In the range of these ethical approaches,
legal frameworks attempt to make such complexities more palatable by reducing them
to a series of principles. According to [33], the principles of fairness, accountability and
transparency in existing international privacy frameworks can influence the whole
design cycle of LA systems.</p>
        <p>A review of eight existing LA policies for higher education was presented by [72]
and discussed how these policies had tried to address notable challenges in the adoption
of LA. The results of this review showed that all eight policies had ensured that
processes on student (and staff) data must be transparent. More insights on how data
can be handled transparently were extracted from those eight policies and were
interpreted by [72] as follows: 1) the methods used to collect data have to be disclosed
to the subjects of the data collection; 2) the information about how data will be stored
needs to be provided; 3) users need to be notified about where their data has travelled
in any integration process between multiple entities and informed about any changes
made to the analytics process.</p>
        <p>In the direction of establishing an ethical literacy for LA, [70] borrowed multiple
frameworks from the field of technical communication to guide discussion on the ethics
of LA “artifacts”: data visualization, interactive dashboards, and LA methodology
(gather, predict, act, measure, and refine). “When guided by such frameworks, an
ethical literacy for LA will answer the question: Who generates these artifacts, how,
and for what purpose, and are these artifacts produced and presented ethically?” [70].
Lack of accountability is a potential consequence of inaccurate or incomplete data that
may be used in LA models. Accordingly, the ethical literacy proposed by [70] described the
need for understanding limitations of data in LA models as a limitation of
accountability.</p>
        <p>FAT in a Personal Code of Ethics. A draft personal code of ethics for LA practitioners
was developed by [38] to consider whether such a code might determine the ethical
responsibilities for individuals within the field of LA. This code considered the
principles of fairness, accountability, and transparency as follows:
Fairness. An ethical code of fairness for individuals involved in LA practices could be:
“I will recognize that fairness and justice entitle all persons access to, and benefit from,
the contributions of education and to equal quality in the processes, procedures and
services being conducted through the use of data”.</p>
        <p>Accountability. Although this personal code of ethics included parts that may define
personal accountability, the authors concluded that there is currently no way in which
individuals can be held accountable to any code. Given the scale and complexity of
institutional LA systems, “it may be impossible to trace an individual’s actions without
substantial, possibly unrealistically sophisticated, accounting systems being
implemented”. Considering the need to distinguish between what is mandatory
(professional obligation) and what is aspirational (moral guide) when applying personal
ethical codes, [38] offered different contexts to explain to what degree individuals can
be held accountable in LA practices. An example of what might be considered a
mandatory code is: “I have a responsibility to act for the benefit of learners and to avoid
any action that would harm the learner and their educational opportunity”. The
following quote could be considered an aspirational personal code for individual
accountability in LA: “I will ensure that I understand analytic processes (algorithms,
statistics) that I employ. I will strive to promote accuracy, honesty and truthfulness in
the science, teaching and practice of learning analytics” [38].</p>
<p>Transparency. The code also encouraged LA practitioners to be more transparent: “I
will ensure that data practices are transparent to those whose data I work with” [38].
Yet, being transparent regarding LA practices does not seem to be an individual call.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Institutional Transparency</title>
        <p>Educational institutions may need to set policies that reveal information about what
data is collected, how they are used, etc., in ways that are technically and intellectually
accessible to all relevant parties [31]. As [22] agreed, providers of analytical services
have to demonstrate a transparent treatment for personal data. To make this possible,
[56] suggested that addressing the practical implementation of being transparent
regarding the collection and use of personal data could push companies and institutions
to formulate practical policies and clarify their thinking. In a later work, the authors
provided more insights on how higher education institutions should strive to be
transparent. They suggested that institutions should allow students to: (1) know what
data is collected, by whom, for what purposes, who will have access to this data
downstream and how data might be combined with other datasets (and for what
purposes); (2) be aware of the potential benefits that they may access in exchange for
their data; and (3) have access to, and feedback on, the analyses that result from the collection of
their data, as this can support LA in its goal of not only providing institutions with a
clearer understanding of how students learn, but also what students find useful [69].
</p>
      </sec>
      <sec id="sec-3-4">
        <title>Transparency and Data</title>
<p>Transparency has been considered a problematic affair since the first efforts in both research
and innovation within the LA field. While the issue of privacy was an alarm trigger for
the ethics of LA, issues of transparency and openness about tracking learners’ data have
been a cornerstone of such discussions. The main reason for this early attention to
transparency is the nature of analytics, as it derives from data. “It is not surprising that
many outstanding concerns in LA center on data” [66], and it is often said that a lack of
transparency about data collection can cause unease among data subjects [22].
Therefore, “it should always be clear to a person that she is being tracked” [23].
</p>
      </sec>
      <sec id="sec-3-5">
        <title>Implications of Transparency in LA</title>
        <p>Transparency for Understanding, Sense-making and Reflection. Investigations on
the appropriate use of data in online education asked whether the transformation of data
sets into measures and indicators is transparent and sensible [46]. Various LA
applications (dashboards, recommenders, predictors) have adopted the concept of
transparency as a method to support users’ understanding and sense-making. According
to [43], advances in visualization tools provide a great opportunity for researchers to
develop visualizations that can improve transparency and therefore increase awareness
and support reflection. An evaluation by [61] was conducted on a dashboard they had
created to “empower students to reflect on their own activity, and that of their peers, in
open learning environments” [60]. Open Learner Models (OLMs) were regarded by
[37] as powerful means to enhance transparency, increase understandability and
support reflection.</p>
        <p>In a similar vein, [76] described how the use of analytics can be framed in a
pedagogical model, where students viewed the analytics as a guideline for
sense-making that can empower them to regulate their learning process. For LA prediction
models, it was indicated that transparency related to the reasons why and how certain
predictions are made is essential in order for teachers and students to understand how
best to act upon the predictions [50]. Also, [26] showed how an LA recommendation
could make more sense when the rationale behind it is transparent to the learner.
According to a hypothesis by [47], “a more complex (i.e. black-box) model performs
better, while a transparent model, despite given less accurate results, may be more
valuable thanks to a higher degree of explainability”. Recently, a study was conducted
by [1] and aimed to investigate the impact of complementing Educational
Recommender Systems (ERSs) with transparent and understandable OLMs that
provide justification for their recommendations. The survey results indicated that
complementing an ERS with an OLM has an overall positive impact on the students’
engagement and enhances their acceptance of the system [1]. Additional work is needed
to generalize such findings by comparing the effect between a transparent
recommendation and a traditional black-box recommendation on students’ motivation
to follow the recommendation, and eventually, accept the tool [5, 49].</p>
      </sec>
      <sec id="sec-3-6">
<title>Transparency for Acceptance and Adoption</title>
        <p>It has been noticeable in the LA
research community that transparency is one effective way toward more acceptance of
LA practices among users and stakeholders. An early note of this was made by [66] in
his effort to envision LA as a research and practice domain: “A proactive stance
of transparency and recognition of potential learner and educator unease of analytics
may be helpful in preventing backlash”. This vision was supported by [10] who
suggested that transparency can effectively benefit LA in overcoming challenges
related to social acceptability. In addition, in a study aimed at understanding LA
privacy issues through students’ own perceptions, [73] found that transparency and
communication are key levers for LA adoption. As also argued by [13], transparent
modelling approaches such as decision trees allow teachers and learners to scrutinize
analytics suggestions and reflect on them, which can lead to greater agency for teachers
and learners and, therefore, easier adoption.</p>
<p>Transparency to Build Trust. One of the earliest attempts to embed transparency in LA
innovation integrated a reputation system into a participatory learning platform
with the goal of facilitating trust between users, by making actions and feedback
transparent and allowing users to track their own learning and that of others [9]. Also,
[41] found that transparency regarding what data is used, whom data is shared with, and
how algorithmic design choices are determined represent essential components for
building trustworthy educational predictive models. Another proposition by [64] goes
in line with discussions on the trustworthiness of AI, stating that providing educators
with a level of control over an LA tool can ensure that the models are transparent and do
not act as a black box for human interpretation.</p>
      </sec>
      <sec id="sec-3-7">
<title>Transparency and the Option to Opt-out</title>
        <p>In several papers, Prinsloo and Slade
presented the option to opt out of the collection of certain types of data as a potential
way to increase transparency [55, 56, 67, 68]. The review of eight LA policies by
[72] also indicated that multiple LA policies had taken such an option into consideration.
Examples of these considerations, as summarized in this review, included that users
should be given the option to opt out of the data collection processes without any
consequences, and that LA mechanisms must allow specific data to be withdrawn at
any time. However, some other policies in this review stated that such an option is not
available, because of the impossibility of delivering courses and supporting students
without having their data stored in information systems [72].</p>
        <p>Transparency to Support LA Co-design. Incorporating different resources of LA
stakeholders and users (e.g. researchers, subject experts, students and teachers) into the
design of analytical tools can improve usability and usefulness of these systems [18].
According to this argument, challenges of power-balance in such a ‘co-creation
strategy’ for LA can be reduced through a clear distribution of roles and a high level of
transparency among the different co-designers. On a practical level, [59] provided a
student-centered design that applied different methods to engage students in the design,
development and evaluation of a student facing LA dashboard. Transparency was
underlined as a core contribution of this design, which “emphasis on fully utilizing the
user-centered process, not just for initial requirements gathering, so that the design and
development process of Student Facing LA systems is fully transparent, from the initial
analysis stage all the way to final evaluation” .</p>
<p>Transparent LA Tools. Different conceptions have been proposed of when an
LA tool counts as transparent. According to [62], an analytical tool supports
transparency if users know what data about them is collected and who can see
information about them. A stricter view considers an LA tool transparent only when
users understand the whole process behind its analytical outcomes [7].</p>
<p>Transparent LA Research. A research method was presented by [29] as an approach
to conducting LA research. An important aspect of this method is transparency about
how a research work might contribute to a ‘fully complete LA’. The method states that
researchers should “articulate the extent to which their work is constituent and
contributes to an existing or future LA agenda, and/or it is aggregate and incorporates
prior LA constituent research, in order to deliver a more complete LA” [29].</p>
      </sec>
      <sec id="sec-3-8">
        <title>Institutional Accountability</title>
<p>Institutions and policy makers have to ask, “How can we use algorithmic
decision-making in higher education to ensure, on the one hand, caring, appropriate,
affordable and effective learning experiences, and on the other, ensure that we do so in a
transparent, accountable and ethical way?” [58]. A paper by [33] showed how LA
process requirements can be derived from an existing privacy framework (i.e. the GDPR)
by transforming legal requirements into systems requirements. This work provided a
list of design requirements for LA, including that “the institutions must be able to
demonstrate that they have systems in place (policies and procedures) that uphold the
protection of personal information and minimize risk of breaches” [33].</p>
      </sec>
      <sec id="sec-3-9">
        <title>Algorithmic Accountability</title>
<p>The ways in which analytic devices become effective factors in learning have led to
demands for greater algorithmic accountability, to ensure that the pedagogic goals of
analytic devices are transparent to all stakeholders [35]. Just as researchers should
demand a rigorous level of accountability from LA devices, educators and students
should also be encouraged to demand accountability to whatever level of detail they
require [30]. LA devices shape, and are shaped by, learning contexts; making them
intelligible to learners and teachers requires careful analysis of the theory behind any
given learning target [35]. Thus, the implications of LA are critical not only for human
inference and decision making, but also for algorithmic accountability [2].</p>
      </sec>
      <sec id="sec-3-10">
        <title>Accountable Learning</title>
<p>The findings of a study by [32] showed that when the design of interactive features and
analytics focuses on contextual knowledge, it can foster learning of the conceptual
knowledge that courses are typically accountable for. According to [44], “learning
analytics has the potential to shape the curriculum, through enabling new kinds of
learning practices that favor efficient and accountable ways of being over disciplinary
knowledge-building or knower-building”. For example, self-assessment can work as a
tool to make students accountable for their own learning [53].</p>
      </sec>
      <sec id="sec-3-11">
        <title>Fair LA Outcomes</title>
<p>Fair Measurement. As LA often aims to measure learning, [45] discussed issues
related to the fairness and validity of these measures. In her work toward establishing
methodological foundations for measuring learning in LA, she stated that the different
demographic and cultural backgrounds of participants can lead to biased responses to
the indicators used to measure learning. “This means that the measures may be
confounded, causing unfairness for one group or another and certainly confusing any
interpretations about what is being measured” [45].</p>
<p>Fair Instruction. Inaccurate data models about students can affect not only the
measurement of learning but learning itself. In the context of LA algorithms used to
inform intelligent tutoring systems, [19] assumed that a fair outcome is one in which
students from different demographic backgrounds reach the same level of knowledge
after receiving instruction, no matter how long it takes them to reach that level. On
that basis, they proposed that adaptive educational algorithms, such as knowledge
tracing, can help prevent inequities between different groups of students by allowing
them to go through the curriculum at their own pace. However, such adaptive
educational algorithms can still be unfair (e.g. favoring fast learners over slow
learners) when they rely on inaccurate models of student learning [19].</p>
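        <p>Knowledge tracing, as discussed by [19], is commonly implemented as Bayesian Knowledge Tracing (BKT). The minimal sketch below uses the standard BKT update rule with hypothetical parameter values to show how the slip, guess and transition parameters drive the mastery estimate, and hence the pace at which a student is advanced; a mis-specified model of student learning shifts these estimates and can pace some groups unfairly.</p>

```python
def bkt_update(p_mastery, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """One Bayesian Knowledge Tracing step: Bayesian update of the
    mastery estimate from one observed answer, then apply learning."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

# Three correct answers in a row: the mastery estimate rises quickly,
# so the student would soon be advanced past this skill. Badly
# estimated slip/guess parameters change this pace for the same answers.
p = 0.3
for correct in (True, True, True):
    p = bkt_update(p, correct)
print(round(p, 3))  # ≈ 0.983
```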
<p>Fair Prediction. Considering that predictive modelling has been one of the core
research areas in the field of LA, and that such models are deployed in a variety of
educational contexts, [28] presented a method for evaluating fairness in predictive
student models through “slicing analysis”, an approach in which model performance is
evaluated across different categories of the data. Although they argued that most prior
work to define and measure predictive fairness is still insufficient for LA research,
the researchers indicated that LA has to satisfy existing legal concepts of fairness and
should aspire to even higher standards of fairness in educational systems. While
slicing analysis, as an exploratory methodology, can only measure predictive fairness
and not correct it, they argued that measurement is a necessary condition for correcting
any detected unfairness [28]. In this context, [75] described LA dashboards as tools
that offer great promise for addressing bias-related challenges in prediction models,
“as by visualizing the data used by predictive models end-users can potentially be made
aware of underlying biases”.</p>
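        <p>Slicing analysis, as described by [28], amounts to evaluating a model's performance separately on each demographic slice of the data. The sketch below, with hypothetical labels and group assignments, computes per-group accuracy so that gaps between slices become visible; it illustrates the general idea, not the authors' implementation.</p>

```python
from collections import defaultdict

def slice_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately per demographic slice, so that
    performance gaps between groups become visible."""
    hits, counts = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / counts[g] for g in counts}

# Hypothetical at-risk predictions for students in two groups
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B"]
per_group = slice_accuracy(y_true, y_pred, groups)
print(per_group)  # group A is predicted far more accurately than B
```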
      </sec>
      <sec id="sec-3-12">
        <title>LA to Support Well-being</title>
<p>Educational institutions have legal and moral obligations to demonstrate care for the
well-being and growth of students, leading them to success in their education [22, 57].
Supporting student well-being was among the purposes that encouraged students, in a
study by [73], to welcome their university collecting and using their data. In another
study, by [25], aimed at investigating students’ and instructors’ perceptions of the
potential benefits and risks of using LA, instructors likewise considered improving the
overall learning experience and well-being of their students to be among the most
important uses of LA. It is in the interest of education providers to devote LA to
supporting students in developing social skills as well as domain knowledge [52]. An
example of this potential is a paper by [11] that explored the potential of LA for
improving the accessibility of e-learning and supporting disabled learners. This work
provided a comparative analysis of the completion rates of disabled and non-disabled
students in online courses and outlined how LA can identify accessibility challenges
and disabled students’ needs [11].</p>
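        <p>The comparative analysis in [11] can be illustrated by computing completion rates separately for students who did and did not declare a disability; the enrolment records and field names below are hypothetical.</p>

```python
def completion_rates(students):
    """Completion rate per group (disability declared vs. not)."""
    rates = {}
    for flag, label in ((True, "disabled"), (False, "non_disabled")):
        group = [s for s in students if s["disabled"] is flag]
        rates[label] = sum(s["completed"] for s in group) / len(group)
    return rates

# Hypothetical enrolment records
students = [
    {"disabled": True, "completed": True},
    {"disabled": True, "completed": False},
    {"disabled": False, "completed": True},
    {"disabled": False, "completed": True},
]
print(completion_rates(students))  # → {'disabled': 0.5, 'non_disabled': 1.0}
```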
      </sec>
      <sec id="sec-3-13">
        <title>Value-sensitive LA Design</title>
<p>A relevant paper by [8] introduced two cases of applying Value Sensitive Design (a
methodology from the field of Human–Computer Interaction) to support ethical
considerations and system integrity in LA design. Both cases demonstrated that Value
Sensitive Design can be an applicable approach for balancing a wide range of ethical
and human values in the design and development of LA. Through a conceptual
investigation of an LA tool developed to visualize online discussions in a learning
platform, the researchers found that the following values supported by the LA tool can
be in tension with other values: autonomy, utility, ease of information seeking, student
success, accountability, engagement, usability, privacy, social well-being (in the sense
of belonging and social inclusion), cognitive overload, pedagogical decisions, freedom
from bias, fairness, self-image, and sense of community [8].</p>
      </sec>
      <sec id="sec-3-14">
        <title>Summary of Qualitative Results</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Topics</title>
      <p>[Table: summary of qualitative results — topics as ordered in Section 3 (FAT in LA Ethical Frameworks, FAT in a Personal Code of Ethics, LA to Support Well-being, Value-sensitive LA Design), each mapped to the LAK papers addressing it, as numbered in the References; the table layout was not recoverable.]</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
<p>The global efforts toward positive impacts of AI-powered systems on human
well-being continue to establish societal guidelines for such systems to remain
human-centric, serving humanity’s values and ethical principles. Although the LA
community is increasingly concerned about ethics, the societal values framing the
notion of Responsible AI have been approached only to a limited extent and are
scattered across LA research, with most cases focusing on transparency. Yet research
on the positive impacts of LA should be addressed from a holistic perspective that goes
beyond transparency and considers accountability and the ways by which LA systems
contribute to diverse dimensions of human well-being in and beyond educational
scenarios. To do so, there is a need for metrics and techniques to help educational
technology stakeholders safeguard human values and well-being when they design,
develop, implement and evaluate LA tools and solutions.</p>
<p>Acknowledgment. This work has been partially funded by the EU Regional
Development Fund and the National Research Agency of the Spanish Ministry of
Science and Innovation under project grants TIN2017-85179-C3-3-R and
RED2018-102725-T. D. Hernández-Leo acknowledges the support by ICREA under the ICREA
Academia program. E. Hakami acknowledges the grant by Jazan University, Saudi
Arabia.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Abdi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khosravi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sadiq</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Gasevic</surname>
          </string-name>
          . D.:
          <article-title>Complementing Educational Recommender Systems with Open Learner Models</article-title>
          .
          <source>In Proceedings of the 10th International Conference on Learning Analytics and Knowledge (LAK '20)</source>
          , pp.
          <fpage>360</fpage>
          -
          <lpage>365</lpage>
          . ACM, New York, NY, USA (
          <year>2020</year>
          ). https://doi.org/10.1145/3375462.3375520 Alhadad,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Thompson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Knight</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            and
            <surname>Lodge</surname>
          </string-name>
          , J.:
          <article-title>Analytics-enabled teaching as design: reconceptualisation and call for research</article-title>
          .
          <source>In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18)</source>
          , pp.
          <fpage>427</fpage>
          -
          <lpage>435</lpage>
          . ACM, New York, NY, USA (
          <year>2018</year>
          ). DOI: https://doiorg.sare.upf.edu/10.1145/3170358.3170390 Ananny,
          <string-name>
            <given-names>M.</given-names>
            and
            <surname>Crawford</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          :
          <article-title>Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability</article-title>
          .
          <source>Journal of New Media Soc</source>
          <volume>20</volume>
          (
          <issue>3</issue>
          ),
          <fpage>973</fpage>
          -
          <lpage>989</lpage>
          . (
          <year>2018</year>
          )
          <article-title>Balkin</article-title>
          ,
          <string-name>
            <surname>J M.</surname>
          </string-name>
          :
          <article-title>How mass media simulate political transparency</article-title>
          ,
          <source>Cultural Values</source>
          <volume>3</volume>
          (
          <issue>4</issue>
          ).
          <fpage>393</fpage>
          -
          <lpage>413</lpage>
          . (
          <year>1999</year>
          ). DOI:
          <volume>10</volume>
          .1080/14797589909367175 Bodily,
          <string-name>
            <given-names>R.</given-names>
            and
            <surname>Verbert</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          :
          <article-title>Trends and issues in student-facing learning analytics reporting systems research</article-title>
          .
          <source>In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17)</source>
          , pp.
          <fpage>309</fpage>
          -
          <lpage>318</lpage>
          . ACM, New York, NY, USA (
          <year>2017</year>
          ). DOI: https://doi.org/10.1145/3027385.3027403 Bollier,
          <string-name>
            <surname>D.</surname>
          </string-name>
          :
          <article-title>The Promise and Peril of Big Data. The Aspen Institute</article-title>
          , CO, USA (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          https://assets.aspeninstitute.org/content/uploads/files/content/docs/pubs/The_Promise _and_Peril_of_Big_Data.pdf Buckingham Shum,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Sándor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Goldsmith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            ,
            <surname>Bass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            and
            <surname>McWilliams</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.:</surname>
          </string-name>
          <article-title>Reflecting on reflective writing analytics: assessment challenges and iterative evaluation of a prototype tool</article-title>
          .
          <source>In: Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge (LAK '16)</source>
          , pp.
          <fpage>213</fpage>
          -
          <lpage>222</lpage>
          . ACM, New York, NY, USA (
          <year>2016</year>
          ). DOI: https://doi.org/10.1145/2883851.2883955 Chen,
          <string-name>
            <given-names>B.</given-names>
            and
            <surname>Zhu</surname>
          </string-name>
          , H.:
          <article-title>Towards Value-Sensitive Learning Analytics Design</article-title>
          .
          <source>In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK19)</source>
          , pp.
          <fpage>343</fpage>
          -
          <lpage>352</lpage>
          . ACM, New York, NY, USA (
          <year>2019</year>
          ). DOI: https://doi.org/10.1145/3303772.3303798 Clow,
          <string-name>
            <given-names>D.</given-names>
            and
            <surname>Makriyannis</surname>
          </string-name>
          , E.:
          <article-title>iSpot analysed: participatory learning and reputation</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge (LAK '11)</source>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>43</lpage>
          . ACM, New York, NY, USA (
          <year>2011</year>
          ). DOI= http://dx.doi.org/10.1145/2090116.2090121 Clow,
          <string-name>
            <surname>D.</surname>
          </string-name>
          :
          <article-title>The learning analytics cycle: closing the loop effectively</article-title>
          .
          <source>In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12)</source>
          , pp.
          <fpage>134</fpage>
          -
          <lpage>138</lpage>
          . ACM, New York, NY, USA (
          <year>2012</year>
          ). DOI: https://doi.org/10.1145/2330601.2330636 Cooper,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Ferguson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            and
            <surname>Wolff</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <article-title>What can analytics contribute to accessibility in e-learning systems and to disabled students' learning?</article-title>
          .
          <source>In: Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge (LAK '16)</source>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2016</year>
          ). DOI: https://doi.org/10.1145/2883851.2883946 Crawford,
          <string-name>
            <given-names>K.</given-names>
            and
            <surname>Schultz</surname>
          </string-name>
          , J.:
          <article-title>Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms</article-title>
          . Boston College Law Review
          <volume>55</volume>
          (
          <issue>93</issue>
          ).
          <fpage>13</fpage>
          -
          <lpage>64</lpage>
          . NYU Law and Economics Research Paper No.
          <fpage>13</fpage>
          -
          <lpage>36</lpage>
          . (
          <year>2014</year>
          ). Available at SSRN: https://ssrn.com/abstract=2325784 Cukurova,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            ,
            <surname>Spikol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            and
            <surname>Landolfi</surname>
          </string-name>
          ,
          <string-name>
            <surname>L.</surname>
          </string-name>
          :
          <article-title>Modelling Collaborative Problem-solving Competence with Transparent Learning Analytics: Is Video Data Enough?</article-title>
          .
          <source>In: Proceedings of the 10th International Conference on Learning Analytics and Knowledge (LAK '20)</source>
          , pp.
          <fpage>270</fpage>
          -
          <lpage>275</lpage>
          . ACM, New York, NY, USA (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          https://doi.org/10.1145/3375462.3375484 Datta,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Tschantz</surname>
          </string-name>
          <string-name>
            <given-names>M</given-names>
            &amp;
            <surname>Datta</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <source>Automated Experiments on Ad Privacy Settings: A Tale of Opacity</source>
          , Choice, and Discrimination. (
          <year>2014</year>
          ). Available at: https://arxiv.org/abs/1408.6491 Diakopoulos,
          <string-name>
            <given-names>N.</given-names>
            and
            <surname>Koliska</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>Algorithmic Transparency in the News Media</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>Journal of Digital Journalism</source>
          . (
          <year>2017</year>
          ).
          <source>DOI: 10.1080/21670811</source>
          .
          <year>2016</year>
          .1208053 Dignum, V.:
          <article-title>Responsible autonomy</article-title>
          .
          <source>In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)</source>
          . Melbourne,
          <string-name>
            <surname>Australia</surname>
          </string-name>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          https://doi.org/10.24963/ijcai.
          <year>2017</year>
          /655 Dignum,
          <string-name>
            <surname>V.:.</surname>
          </string-name>
          <article-title>Ethics in artificial intelligence: introduction to the special issue</article-title>
          .
          <source>Journal of Ethics and Information Technology</source>
          <volume>20</volume>
          (
          <issue>1</issue>
          ). (
          <year>2018</year>
          ). https://doi.org/10.1007/s10676- 018-9450-z
          <string-name>
            <surname>Dollinger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lodge</surname>
            ,
            <given-names>J M.:</given-names>
          </string-name>
          <article-title>Co-creation strategies for learning analytics</article-title>
          .
          <source>In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18)</source>
          , pp.
          <fpage>97</fpage>
          -
          <lpage>101</lpage>
          . ACM, New York, NY, USA: (
          <year>2018</year>
          ). DOI: https://doi.org/10.1145/3170358.3170372 Doroudi,
          <string-name>
            <given-names>S.</given-names>
            and
            <surname>Brunskill</surname>
          </string-name>
          , E.:
          <article-title>Fairer but Not Fair Enough On the Equitability of Knowledge Tracing</article-title>
          .
          <source>In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK19)</source>
          , pp.
          <fpage>335</fpage>
          -
          <lpage>339</lpage>
          . ACM, New York, NY, USA (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          DOI: https://doiorg.sare.upf.edu/10.1145/3303772.3303838 Dörr,
          <string-name>
            <given-names>K N.</given-names>
            and
            <surname>Hollnbuchner</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          : Ethical Challenges of Algorithmic Journalism.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>Journal of Digital Journalism</source>
          <volume>5</volume>
          (
          <issue>4</issue>
          ),
          <fpage>404</fpage>
          -
          <lpage>419</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>DOI: 10.1080/21670811</source>
          .
          <year>2016</year>
          .1167612 Dourish,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Algorithms and their others: Algorithmic culture in context</article-title>
          .
          <source>Journal of Big data &amp; society</source>
          . (
          <year>2016</year>
          ). https://doi.org/10.1177/2053951716665128 Drachsler,
          <string-name>
            <surname>H.</surname>
          </string-name>
          <article-title>and</article-title>
          <string-name>
            <surname>Greller</surname>
          </string-name>
          , W.:
          <article-title>Privacy and analytics: it's a DELICATE issue a checklist for trusted learning analytics</article-title>
          .
          <source>In: Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge (LAK '16)</source>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>98</lpage>
          . ACM, New York, NY, USA (
          <year>2016</year>
          ). DOI: https://doi.org/10.1145/2883851.2883893 Duval, E.:
          <article-title>Attention please!: learning analytics for visualization and recommendation</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge (LAK '11)</source>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>17</lpage>
          . ACM, New York, NY, USA (
          <year>2011</year>
          ). DOI= http://dx.doi.org/10.1145/2090116.2090118 European Social Survey.: Europeans'
          <article-title>Personal and Social Well-being</article-title>
          . (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          https://www.europeansocialsurvey.org/docs/findings/ESS6_
          <article-title>toplines_issue_5_persona l_and_social_wellbeing</article-title>
          .pdf Falcão,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Mello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Diniz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            and
            <surname>Gašević</surname>
          </string-name>
          ,
          <string-name>
            <surname>D.</surname>
          </string-name>
          :
          <article-title>Perceptions and expectations about learning analytics from a brazilian higher education institution</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>In Proceedings of the Tenth International Conference on Learning Analytics &amp; Knowledge (LAK '20)</source>
          , pp.
          <fpage>240</fpage>
          -
          <lpage>249</lpage>
          . ACM, New York, NY, USA (
          <year>2020</year>
          ). DOI: https://doi.org/10.1145/3375462.3375478 Ferguson,
          <string-name>
            <surname>R.</surname>
          </string-name>
          and
          <string-name>
            <given-names>Buckingham</given-names>
            <surname>Shum</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          :
          <article-title>Social learning analytics: five approaches</article-title>
          .
          <source>In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12)</source>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>33</lpage>
          . ACM, New York, NY, USA (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          DOI: https://doi.org/10.1145/2330601.2330616
          Ferguson, R., Brasher, A., Clow, D., Griffiths, D. and Drachsler, H.: Learning Analytics: Visions of the Future. In: 6th International Learning Analytics and Knowledge (LAK) Conference, Edinburgh, Scotland (2016).
          Gardner, J., Brooks, C. and Baker, R.: Evaluating the Fairness of Predictive Student Models Through Slicing Analysis. In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK '19), pp. 225-234. ACM, New York, NY, USA (2019). DOI: https://doi.org/10.1145/3303772.3303791
          Gibson, A. and Lang, C.: The pragmatic maxim as learning analytics research method.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18), pp. 461-465. ACM, New York, NY, USA (2018). DOI: https://doi.org/10.1145/3170358.3170384
          Gibson, A., Aitken, A., Sándor, A., Buckingham Shum, S., Tsingos-Lucas, C. and Knight, S.: Reflective writing analytics for actionable feedback. In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17), pp.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          153-162. ACM, New York, NY, USA (2017). DOI: https://doi.org/10.1145/3027385.3027436
          Haythornthwaite, C.: An information policy perspective on learning analytics. In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17), pp. 253-256. ACM, New York, NY, USA (2017). DOI: https://doi.org/10.1145/3027385.3027389
          Hickey, D., Kelley, T. and Shen, X.: Small to big before massive: scaling up participatory learning analytics. In: Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (LAK '14), pp. 93-97. ACM, New York, NY, USA (2014). DOI: http://dx.doi.org/10.1145/2567574.2567626
          Hoel, T., Griffiths, D. and Chen, W.: The influence of data protection and privacy frameworks on the design of learning analytics systems. In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17), pp. 243-252.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          ACM, New York, NY, USA (2017). DOI: https://doi.org/10.1145/3027385.3027414
          IEEE P7010: Draft Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being. In: IEEE P7010/D4, pp. 1-103 (2020).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=8960682&amp;isnumber=8960681
          Knight, S., Anderson, T. and Tall, K.: Dear learner: participatory visualisation of learning data for sensemaking. In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17), pp. 532-533. ACM, New York, NY, USA (2017). DOI: https://doi.org/10.1145/3027385.3029443
          Kukulska-Hulme, A., Beirne, E., Conole, G., Costello, E., Coughlan, T., Ferguson, R., FitzGerald, E., Gaved, M., Herodotou, C., Holmes, W., Mac Lochlainn, C., Nic Giolla Mhichíl, M., Rienties, B., Sargent, J., Scanlon, E., Sharples, M. and Whitelock, D.: Innovating Pedagogy 2020: Open University Innovation Report 8. The Open University, Milton Keynes (2020). https://iet.open.ac.uk/file/innovating-pedagogy-2020.pdf
          Kump, B., Seifert, C., Beham, G., Lindstaedt, S. N. and Ley, T.: Seeing what the system thinks you know: visualizing evidence in an open learner model. In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12), pp.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          153-157. ACM, New York, NY, USA (2012). DOI: https://doi.org/10.1145/2330601.2330640
          Lang, C., Macfadyen, L. P., Slade, S., Prinsloo, P. and Sclater, N.: The complexities of developing a personal code of ethics for learning analytics practitioners: implications for institutions and the field. In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18), pp. 436-440. ACM, New York, NY, USA (2018). DOI: https://doi.org/10.1145/3170358.3170396
          Lee, M. K.: Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data &amp; Society, 1-16.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          (2018). https://doi.org/10.1177/2053951718756684
          Lepri, B., Oliver, N., Letouzé, E., Pentland, A. and Vinck, P.: Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Philosophy &amp; Technology 31(4), 611-627 (2018).
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          Li, W., Brooks, C. and Schaub, F.: The Impact of Student Opt-Out on Educational Predictive Models. In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK '19), pp. 411-420. ACM, New York, NY, USA (2019).
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          DOI: https://doi.org/10.1145/3303772.3303809
          Long, P. and Siemens, G.: Penetrating the Fog: Analytics in Learning and Education.
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          Educause Review 46(5), 30-32 (2011).
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12), pp. 102-110. ACM, New York, NY, USA (2012). DOI: https://doi.org/10.1145/2330601.2330630
          McPherson, J., Tong, H. L., Fatt, S. J. and Liu, D.: Student perspectives on data provision and use: starting to unpack disciplinary differences. In: Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge (LAK '16), pp. 158-167.
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          ACM, New York, NY, USA (2016). DOI: https://doi.org/10.1145/2883851.2883945
          Milligan, S. K.: Methodological foundations for the measurement of learning in learning analytics. In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18), pp. 466-470. ACM, New York, NY, USA (2018). DOI: https://doi.org/10.1145/3170358.3170391
          Milligan, S., He, J., Bailey, J., Zhang, R. and Rubinstein, B. I. P.: Validity: a framework for cross-disciplinary collaboration in mining indicators of learning from MOOC forums. In: Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge (LAK '16), pp. 546-547. ACM, New York, NY, USA (2016). DOI: https://doi.org/10.1145/2883851.2883956
          Niemeijer, K., Feskens, R., Krempl, G., Koops, J. and Brinkhuis, M.: Constructing and Predicting School Advice for Academic Achievement: A Comparison of Item Response Theory and Machine Learning Techniques. In: Proceedings of the 10th International Conference on Learning Analytics and Knowledge (LAK '20), pp. 462-471. ACM, New York, NY, USA (2020). https://doi.org/10.1145/3375462.3375486
          OECD: How's Life? 2017: Measuring Well-being. OECD Publishing, Paris (2017).
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>https://doi.org/10.1787/how_life-2017-en.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          Prinsloo, P.: The increasing impossibilities of justice and care in open, distance learning. Presentation in: EDEN Research Workshop, Oldenburg, Germany (2016).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          http://www.slideshare.net/prinsp/the-increasingimpossibilities-of-justice-and-care-inopen-distance-learning
          Quincey, E., Briggs, C., Kyriacou, T. and Waller, R.: Student Centred Design of a Learning Analytics System. In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK '19), pp. 353-362. ACM, New York, NY, USA (2019). DOI: https://doi.org/10.1145/3303772.3303793
          Santos, J. L., Verbert, K. and Duval, E.: Empowering students to reflect on their activity with StepUp!: two case studies with engineering students. In: Proceedings of the 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning (ARTEL '12), Vol. 931 (2012).
          Santos, J. L., Verbert, K., Govaerts, S. and Duval, E.: Addressing learner issues with StepUp!: an evaluation. In: Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK '13), pp. 14-22. ACM, New York, NY, USA (2013). DOI: http://dx.doi.org/10.1145/2460296.2460301
          Scheffel, M., Drachsler, H. and Specht, M.: Developing an evaluation framework of quality indicators for learning analytics. In: Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (LAK '15), pp. 16-20. ACM, New York, NY, USA (2015). DOI: https://doi.org/10.1145/2723576.2723629
          Shewbridge, C., Fuster, M. and Rouw, R.: Constructive accountability, transparency and trust between government and highly autonomous schools in Flanders. OECD Publishing, Paris, France (2019). https://doi.org/10.1787/c891abbf-en.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          Shibani, A., Knight, S. and Buckingham Shum, S.: Contextualizable Learning Analytics Design: A Generic Model and Writing Analytics Evaluations. In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK '19), pp. 210-219. ACM, New York, NY, USA (2019). DOI: https://doi.org/10.1145/3303772.3303785
          Shin, D. and Park, Y. J.: Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior 98, 277-284.
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          (2019). https://doi.org/10.1016/j.chb.2019.04.019
          Siemens, G.: Learning analytics: envisioning a research discipline and a domain of practice. In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK '12), pp. 4-8. ACM, New York, NY, USA (2012). DOI: https://doi.org/10.1145/2330601.2330605
          Slade, S. and Prinsloo, P.: Learning analytics: ethical issues and dilemmas. American Behavioral Scientist (2013). https://doi.org/10.1177/0002764213479366
          Slade, S. and Prinsloo, P.: Student perspectives on the use of their data: between intrusion, surveillance and care. In: 8th EDEN Research Workshop, 27-28 October, Oxford, UK (2014).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          Slade, S., Prinsloo, P. and Khalil, M.: Learning analytics at the intersections of student trust, disclosure and benefit. In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge (LAK '19), pp. 235-244. ACM, New York, NY, USA (2019). DOI: https://doi.org/10.1145/3303772.3303796
          Swenson, J.: Establishing an ethical literacy for learning analytics. In: Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (LAK '14), pp. 246-250. ACM, New York, NY, USA (2014). DOI: http://dx.doi.org/10.1145/2567574.2567613
          The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE (2019).
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          https://standards.ieee.org/content/ieee-standards/en/industryconnections/ec/autonomous-systems.html
          Tsai, Y. and Gasevic, D.: Learning analytics in higher education --- challenges and policies: a review of eight learning analytics policies. In: Proceedings of the Seventh International Learning Analytics &amp; Knowledge Conference (LAK '17), pp. 233-242.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          ACM, New York, NY, USA (2017). DOI: https://doi.org/10.1145/3027385.3027400
          Tsai, Y., Whitelock-Wainwright, A. and Gašević, D.: The privacy paradox and its implications for learning analytics. In: Proceedings of the Tenth International Conference on Learning Analytics &amp; Knowledge (LAK '20), pp. 230-239. ACM, New York, NY, USA (2020). DOI: https://doi.org/10.1145/3375462.3375536
          UNESCO Institute for Information Technologies in Education: Learning Analytics.
          ). DOI: https://doi.org/10.1145/3375462.3375536 UNESCO Institute for Information Technologies in Education.: Learning Analytics.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          (
          <year>2012</year>
          ). Retrieved from: https://iite.unesco.org/publications/3214711/ Verbert,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Ochoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            ,
            <surname>De</surname>
          </string-name>
          <string-name>
            <surname>Croon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Dourado</surname>
          </string-name>
          ,
          <string-name>
            <surname>R.</surname>
          </string-name>
          and
          <string-name>
            <given-names>De</given-names>
            <surname>Laet</surname>
          </string-name>
          . T.:
          <article-title>Learning analytics dashboards: the past, the present and the future</article-title>
          .
          <source>In: Proceedings of the Tenth International Conference on Learning Analytics &amp; Knowledge (LAK '20)</source>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2020</year>
          ). DOI: https://doi.org/10.1145/3375462.3375504 Wise,
          <string-name>
            <given-names>A F.</given-names>
            ,
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            and
            <surname>Hausknecht</surname>
          </string-name>
          ,
          <string-name>
            <surname>S N.</surname>
          </string-name>
          :
          <article-title>Learning analytics for online discussions: a pedagogical model for intervention with embedded and extracted analytics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <source>Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK '13)</source>
          , pp.
          <fpage>48</fpage>
          -
          <lpage>56</lpage>
          . ACM, New York, NY, USA (
          <year>2013</year>
          ). DOI: https://doi.org/10.1145/2460296.2460308 Yang,
          <string-name>
            <given-names>K.</given-names>
            and
            <surname>Stoyanovich</surname>
          </string-name>
          , J.:
          <article-title>Measuring Fairness in Ranked Outputs</article-title>
          .
          <source>In: Proceedings of the 29th International Conference on Scientific and Statistical Database Management (SSDBM '17). Article 22</source>
          , 6 pages. ACM, New York, NY, USA (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>DOI: https://doi.org/10.1145/3085504.3085526</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>