<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1080/10447318.2023.2241609</article-id>
      <title-group>
        <article-title>Evaluating automatically adapted public e-services: first insights from users with low vision</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Myriam Arrue</string-name>
          <email>myriam.arrue@ehu.eus</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aritz Sala</string-name>
          <email>asala020@ikasle.ehu.eus</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xabier Gardeazabal</string-name>
          <email>xabier.gardeazabal@ehu.eus</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>EGOKITUZ, Faculty of Informatics, University of the Basque Country (UPV/EHU)</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2013</year>
      </pub-date>
      <volume>2021</volume>
      <fpage>5827</fpage>
      <lpage>5844</lpage>
      <abstract>
        <p>As e-government services become increasingly fundamental to public administration, ensuring their accessibility is both a legal and ethical imperative. Web forms are a ubiquitous feature of public e-services, but they frequently present interaction challenges, particularly for users with disabilities. This paper presents an experimental study involving ten participants with low vision who interacted with four Spanish public e-services in both their original and automatically adapted versions. These adaptations were generated in real-time using an adaptation system based on a previously established model including annotation ontologies for web forms. The study focuses on subjective user feedback collected through post-task questionnaires and final interviews. Findings indicate that the adapted interfaces were well accepted. Participants reported greater ease of use and high levels of satisfaction. The results underscore the value of dynamic, personalized adaptations in improving accessibility-in-use for public digital services.</p>
      </abstract>
      <kwd-group>
        <kwd>Web accessibility</kwd>
        <kwd>public e-Services</kwd>
        <kwd>low vision</kwd>
        <kwd>adapted interfaces</kwd>
        <kwd>user experience</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>This paper presents an experimental study in which ten participants with low vision interacted
with four original public Spanish e-services and with their adapted versions. The adapted versions were
dynamically generated at run-time by an automated adaptation system, developed based on the model
introduced in [9]. The results discussed in this paper focus on subjective data collected through standard
post-task questionnaires and a final user interview. The initial analysis reveals promising outcomes
for the proposed system, with participants expressing a positive reception and high acceptance of the
adapted interfaces.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>To assess the accessibility of e-government services, numerous studies have been conducted over the
past decade based on the standards mentioned earlier. These studies consistently reveal that many
government websites still fail to comply with current accessibility regulations. Notable examples
include evaluations of government websites in Slovenia [10], Alabama [11], Italy [12], and several
African countries [13]. Although these works commonly employed automated accessibility evaluation
tools, most were limited to analyzing homepage compliance with WCAG guidelines [14], and few
incorporated users with disabilities in the evaluation process. As highlighted in [15], special attention
must be paid when assessing the accessibility of web forms. These components play a crucial role in
e-government services, yet very few studies evaluate the complete interaction flow of web forms. The
design of forms significantly impacts both accessibility and usability. Despite their importance, there is
currently no international standard for accessible form design patterns. In the absence of such standards,
national guidelines are often used—for example, the United Kingdom’s design system [16] and Norway’s
ELMER guidelines [17]. In the United States, a similar initiative seeks to standardize the design of public
e-service forms across federal agencies to ensure consistent and inclusive access [18]. These efforts aim
to simplify interaction and enhance communication between citizens and public administrations. For
users with low vision, completing online forms can pose significant challenges. When using assistive
technologies such as screen magnifiers, users often need to shift their field of vision frequently, which
can result in a loss of context. Many magnification techniques also require horizontal scrolling, which
negatively impacts usability. A study presented in [19] compared the effectiveness of screen magnifiers
with responsive web designs, concluding that responsive layouts requiring less horizontal scrolling
were more user-friendly. The Accessibility Requirements for People with Low Vision working draft
published by the W3C [20] outlines key visual presentation needs (e.g., font, size, and text style), but it
lacks guidance on interactive components commonly found in web forms, such as input fields, labels,
action buttons, and feedback messages. One promising direction to improve accessibility is the use of
adaptive interfaces. Adapted versions of web interfaces can offer more accessible and usable experiences
tailored to the needs of specific users. Several transcoding systems have been proposed in this regard
[21, 22, 23], but these approaches typically target isolated pages or specific impairments and lack a
comprehensive model for adapting full e-service systems, particularly those involving complex forms.
The present study aims to address this gap by comparing user interactions with original, officially
compliant public e-service interfaces and with adapted versions generated at runtime by an automated
system. This system builds on the authors’ previous work [9] and is designed to improve accessibility
and usability for users with low vision through dynamic interface adaptation.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Automated adaptation system</title>
      <sec id="sec-3-1">
        <title>3.1. Architecture of the system</title>
        <p>Figure 1 shows the general architecture of the automated adaptation tool implemented. It is based on a
client/server model. The model of the system was thoroughly described in [9]. The client was developed
as an add-on for the Firefox browser, whereas the server is composed of the adaptation engine and
several repositories: annotations repository and knowledge base. The annotations repository contains
the annotations of the public e-services to be adapted and the knowledge base holds the adaptation
techniques.</p>
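The client/server flow described above can be sketched as follows. This is an illustrative assumption of how the pieces fit together, not the authors' implementation: the repository contents, profile name, and technique identifiers are all hypothetical.

```python
# Hypothetical sketch of the server-side adaptation engine: given an
# e-service and a user profile, it combines the annotations repository
# (what the form contains) with the knowledge base (which adaptation
# techniques apply) into a plan the browser add-on would apply at run-time.

ANNOTATIONS = {  # annotations repository: per-e-service form metadata
    "osa": {
        "steps": ["personal-data", "appointment-date"],
        "components": {"centre": "DataSelection"},
    },
}

KNOWLEDGE_BASE = {  # knowledge base: adaptation techniques per user profile
    "low-vision": [
        "uniform-style", "step-structure", "high-contrast",
        "resize-components", "selection-to-links",
    ],
}

def adapt(service_id: str, profile: str) -> dict:
    """Return the adaptation plan for one e-service and one user profile."""
    annotations = ANNOTATIONS[service_id]
    techniques = KNOWLEDGE_BASE[profile]
    return {"steps": annotations["steps"], "techniques": techniques}

plan = adapt("osa", "low-vision")
```

In this sketch the engine only joins the two repositories; the actual transformation of the page happens on the client, which is why the paper describes the client as a Firefox add-on.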
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Annotations of e-services</title>
        <p>Each e-service to be adapted must have an associated annotation file stored in the annotations repository.
This study focuses specifically on public e-services that require users to complete online forms. Typically,
these processes involve multiple steps—many of which are common across different e-services—and
users are often required to provide similar data. These shared characteristics led us to develop an
annotation model capable of covering the overall process, individual steps, and the specific data required
from users. The primary goal of these annotations is to extract valuable information to facilitate the
automatic application of selected adaptation techniques, tailored to user features and preferences. Figure
2 illustrates the ontology designed for annotating the e-services. Currently, the annotation process is
entirely manual, as described in [9].</p>
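As a concrete illustration of this model, a manually written annotation file might look like the fragment below. The element and attribute names here are hypothetical, chosen only to reflect the three levels the text describes (process, steps, required data); the actual vocabulary is the one defined by the ontology in Figure 2 and in [9].

```xml
<!-- Hypothetical annotation for an appointment-scheduling e-service.
     Covers the overall process, its individual steps, and the data
     required from the user, as described in the text above. -->
<eservice id="osa" task="appointment">
  <process steps="3">
    <step order="1" label="Personal data">
      <data ref="common:fullName" component="TextInput" required="true"/>
      <data ref="common:idNumber" component="TextInput" required="true"/>
    </step>
    <step order="2" label="Appointment date">
      <data ref="osa:healthCentre" component="DataSelection"/>
      <data ref="common:date" component="DataSelection"/>
    </step>
    <step order="3" label="Confirmation"/>
  </process>
</eservice>
```

Data references shared across e-services (the `common:` prefix in this sketch) are what allow the same adaptation techniques to be reused between services.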
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Adaptation techniques</title>
        <p>The adaptation techniques are stored in the knowledge base. In the case of this experimental study,
they are oriented to improve the navigation experience of people with low vision by:</p>
        <p>
          1. Providing uniform style for e-services
2. Structuring the process in steps
3. Ordering steps and grouping components in steps
4. Identifying steps and overall progress of the process
5. Changing the presentation to high contrast colours
6. Resizing components
7. Replacing the selection components (DataSelection)
These adaptation techniques were selected based on previous work. The first four techniques (1–4) are
aimed at improving user orientation during the fulfillment process and were developed as a result of the
ELMER project [17]. Adaptations 5 and 6 are defined according to the Accessibility Requirements for
People with Low Vision [20]. The final adaptation (7) was proposed based on the authors’ earlier studies
involving people with low vision [6, 24, 25], which indicated that participants felt more comfortable
when options were presented as links rather than through radio buttons or select components.
        </p>
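Technique 7 (replacing a DataSelection component with links) can be sketched as a small DOM rewrite. This is an assumption-laden illustration, not the add-on's code: it operates on a well-formed form fragment with `xml.etree`, whereas the real system rewrites the live page inside Firefox.

```python
# Sketch of adaptation technique 7: replace each <select> with a list of
# links, one per option, so users can pick an option by following a link.
import xml.etree.ElementTree as ET

def select_to_links(form_html: str) -> str:
    root = ET.fromstring(form_html)
    # Snapshot the tree first so we can modify it while walking it.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == "select":
                nav = ET.Element("nav")
                for opt in child.findall("option"):
                    a = ET.SubElement(
                        nav, "a",
                        href=f"?{child.get('name')}={opt.get('value')}")
                    a.text = opt.text
                parent.remove(child)
                parent.append(nav)
    return ET.tostring(root, encoding="unicode")

html = ('<form><select name="centre">'
        '<option value="1">Donostia</option>'
        '<option value="2">Gasteiz</option>'
        '</select></form>')
adapted = select_to_links(html)
```

Each link encodes the selected value in its URL, so choosing an option becomes a simple navigation action rather than an interaction with a widget that behaves poorly under magnification.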
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Example of adapted e-service</title>
        <p>The adaptation system automatically generates adapted versions of e-services at run-time. It segments
the form into its components and presents the user with a step-by-step process, showing simplified
interfaces that facilitate data entry. Figure 3 illustrates the original interface of the OSA e-service initial
form page alongside the adapted versions created by the system, based on the defined annotations and
adaptation techniques.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental study</title>
      <sec id="sec-4-1">
        <title>4.1. Participants</title>
        <p>Participants were recruited with the help of two associations for people with visual impairments: Begiris
Elkartea in Gipuzkoa and Itxaropena Elkartea in Araba. The inclusion criteria required participants
to be adult users with visual disabilities who regularly use screen magnifiers and have experience
browsing the web. A total of ten participants (P1–P10) took part in the study, with an average age of
41 years (SD = 12.2). Among them, three were men (P3, P5, and P10) and seven were women. One
participant (P9) was legally blind, while the remaining participants had low vision. The experiment
was conducted in a single session lasting approximately one and a half hours. Participants used their
usual assistive technologies during the session, with browser zoom being the most common tool. All
participants reported high levels of computer literacy and web browsing experience (more than twice a
week). The experimental sessions were held at two locations: a laboratory at the Faculty of Informatics
in Donostia and a laboratory at the School of Engineering in Gasteiz (both locations within University
of the Basque Country). Table 1 presents detailed information about the participants.</p>
        <p>The same desktop computer was used in all sessions, a Dell computer running a 64-bit version of
Windows 10 Enterprise LTSC. A widescreen LCD monitor (aspect ratio 16:10) with a 24-inch diagonal and
a resolution of 1920×1200 pixels was employed to display the public e-services. Assistive technologies
used varied among participants: Windows Magnifier (P1, P3, P7), standard browser zoom (P2, P4, P8,
P9, P10), and Dragnifier (P5, P6). All sessions were video-recorded, and each participant received
compensation in the form of a €50 gift ticket for a department store. The study received ethical approval
from the Ethics Committee for Human Research at the University of the Basque Country (CEISH-UPV/
EHU, M10_2019_037).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Stimuli</title>
        <p>Four public Spanish administration websites were selected as stimuli for this exploratory study: OSA,
the public website of the public health service in the Basque Country; SGS, the public website of Social
Security; DNI, the public website of the Ministry of Home Afairs; and DON the public website of the
City Hall of Donostia-San Sebastián. All of these websites comply with web accessibility standards.
Although each website features distinct content and design, they all ofer users the service of scheduling
an appointment, which is provided through a multi-step web form. Users are required to enter personal
data and select an appointment date. The process involves interacting with various elements such
as text inputs, radio buttons, selection areas, captchas, and buttons. To streamline the experiment,
we developed an initial web page (INIw) that contains direct links to the first step of each web form.
Participants began all tasks on this page, selecting the corresponding link to complete the tasks outlined
in the study.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Methodology</title>
        <p>A within-subjects design was used in this study, where each participant completed the same
tasks—scheduling an appointment through a web form—using two different interface versions: the
original and an adapted version.</p>
        <p>The independent variable was the interface version (original vs. adapted). The dependent variables
were post-task satisfaction, measured using the After Scenario Questionnaire (ASQ) and user emotions,
assessed after each task using the Self-Assessment Manikin (SAM).</p>
        <p>The defined null hypothesis was the following: “There are no significant differences in user satisfaction
(ASQ scores) or emotional responses (SAM dimensions) between the original and adapted interface.”</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Procedure and tasks</title>
        <p>
          First, participants were briefed on the purpose of the experiment and signed a consent form.
Demographic and expertise information was collected through a brief pre-session interview. Prior to starting
the session, participants were encouraged and assisted to adjust the system to meet their preferences.
The required assistive technology for each participant had been pre-configured and installed correctly on
the system. All sessions were video-recorded, and interaction data—including page loads, task start and
end times, visited links, scroll actions, and clicks—was captured via a locally implemented application
developed by the research group. Next, participants were asked to complete a set of tasks, consisting
of two tasks on each website: one using the original design and one using the adapted interface. The
order of presentation was counterbalanced. For each task, participants were required to complete the
web form to schedule an appointment with the service. All tasks began from the initial webpage (INIw),
where the experimenter instructed participants to select the corresponding direct link based on the
assigned task. At the start of each task, participants were provided with both an oral explanation and a
printed explanation. For example, in the DNI task the instructions were: “Please get an appointment to
renew your ID card by selecting this link,” with the experimenter clearly indicating the correct link. To
address concerns about entering personal data, detailed sample information was provided—printed in
large format—for each task, which participants were instructed to use when filling out the forms. Each
task had a completion time limit of 10 minutes. Upon task completion, participants took part in an
interview and completed the After-Scenario Questionnaire (ASQ) [26] to rate their satisfaction with
the e-service. The ASQ included three closed-ended Likert-scale (1–7) questions assessing satisfaction
regarding the ease of task completion, the time required, and the support information provided by
the system. Additionally, participants rated their subjective experience of using the e-service via the
Self-Assessment Manikin (SAM) [27] and answered two extra questions related specifically to their
experience inputting the required information into the forms. Comments and feedback regarding the
experience and any barriers encountered were annotated by the experimenter throughout the sessions.
Finally, at the end of the experiment, participants were interviewed to evaluate the adaptation system,
the automatically generated adapted interfaces, and to discuss their overall preferences between the
original and adapted versions.
        </p>
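A counterbalanced order over the 4 services × 2 versions can be generated mechanically. The scheme below (alternating which version a participant sees first) is one simple possibility, assumed for illustration; the paper does not specify the exact scheme used.

```python
# One possible counterbalancing scheme for the 8 tasks per participant
# (4 services x 2 versions): even-numbered participants see the original
# version of each service first, odd-numbered ones the adapted version.
SERVICES = ["OSA", "SGS", "DNI", "DON"]

def task_order(participant: int) -> list:
    """Return the (service, version) sequence for one participant."""
    if participant % 2 == 0:
        first, second = "original", "adapted"
    else:
        first, second = "adapted", "original"
    return [(s, v) for s in SERVICES for v in (first, second)]
```

Across an even number of participants this balances first exposure to each version, reducing order effects on the within-subjects comparison.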
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <sec id="sec-5-1">
        <title>5.1. ASQ questionnaire results</title>
        <p>Participants were required to complete the After-Scenario Questionnaire (ASQ) at the end of each task
to gather their subjective perceptions regarding their experience with the public e-service. This
questionnaire includes three items, each rated on a 7-point Likert scale (1 indicating the lowest satisfaction
and 7 the highest): the first one rating the ease of completing the task (ASQ1), the second one
rating the appropriateness of the required time for completing the task (ASQ2) and the third one rating
the convenience of the feedback provided by the e-service (ASQ3). Figure 4 presents the values provided
by participants for each e-service, comparing the original version (ASQ1, ASQ2, ASQ3) and the adapted
version (ASQ1_A, ASQ2_A, ASQ3_A). The overall average score for ASQ1 across all e-services was 5.02
(SD = 1.59). The DON e-service received the highest average score (5.9, SD = 0.87), while SGS and DNI
had the lowest (4.5 on average, with SD = 1.9 for SGS and SD = 1.35 for DNI). In the adapted versions,
the scores for ASQ1_A improved for all e-services, with an overall average of 6.57 (SD = 0.63). DON
again received the highest rating (6.8, SD = 0.42), and SGS showed the greatest improvement, with a 2.2
point increase (from 4.5 to 6.7, SD = 0.48). All e-services obtained higher ratings in the adapted version,
with the smallest improvement being 0.9 points in the case of DON. The average ASQ2 score for all
e-services in the original version was 4.7 (SD = 1.85). DON scored the highest (5.8, SD = 1.39), while
DNI scored the lowest (3.9, SD = 2.02). For the adapted versions, the average ASQ2_A score increased
by 1.72 points, reaching 6.42 (SD = 1.05). Again, DON scored the highest (6.8, SD = 0.63), while DNI,
although having the lowest among the adapted versions, improved by 1.5 points (from 3.9 to 5.6, SD
= 1.57). The original version of the e-services had an average ASQ3 score of 4.77 (SD = 1.73). DON
received the highest score (5.8, SD = 1.31), while SGS had the lowest (4.1, SD = 1.59). In the adapted
versions, the overall average for ASQ3_A increased to 6.6 (SD = 0.7). Notably, OSA showed a 1.9-point
improvement, rising from 5.0 (SD = 1.82) to 6.9 (SD = 0.31). SGS also saw a significant increase, reaching
6.5 (SD = 0.7), and DNI had the lowest ASQ3_A score, though still relatively high at 6.2 (SD = 1.03). We
performed the Wilcoxon signed-rank test for analyzing the differences between values gathered in both
versions, the original and the adapted interfaces. The results showed the differences were statistically
significant for all ASQ questions (V = 552.5, p-value &lt; 0.01 for ASQ1 and ASQ1_A; V = 488, p-value &lt;
0.01 for ASQ2 and ASQ2_A; V = 521.5, p-value &lt; 0.01 for ASQ3 and ASQ3_A).</p>
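The V statistic reported above is the sum of positive ranks from the Wilcoxon signed-rank test (the statistic R's `wilcox.test` returns). As a self-contained illustration of how it is computed from paired scores (not the authors' analysis code, which presumably used a statistics package), with the usual conventions of dropping zero differences and averaging tied ranks:

```python
# Wilcoxon signed-rank V statistic for paired scores (e.g. ASQ ratings on
# the original vs. adapted interface): rank the absolute differences,
# then sum the ranks of the positive differences.
def wilcoxon_v(original, adapted):
    # Paired differences; zero differences are dropped by convention.
    diffs = [a - o for o, a in zip(original, adapted) if a != o]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # Find the run of tied absolute differences and average their ranks.
        j = i
        while (j + 1 < len(order)
               and abs(diffs[order[j + 1]]) == abs(diffs[order[i]])):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```

With all participants improving, V approaches its maximum n(n+1)/2, which is why the reported values are large; the p-value then follows from the null distribution of V.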
      </sec>
      <sec id="sec-5-2">
        <title>5.2. SAM questionnaire result</title>
        <p>Participants completed a questionnaire based on the Self-Assessment Manikin (SAM) after each task to
assess their emotional responses during their interaction with the public e-service. This instrument
includes three items rated on a Likert scale ranging from -4 to 4 (where -4 indicates the lowest and 4
the highest intensity): the first item (SAM1) assesses the level of pleasure experienced during the task,
the second (SAM2) evaluates the motivation to engage in the task, and the third (SAM3) measures the
perceived sense of control or dominance over the situation. Figure 5 presents the values reported by
participants for each e-service, both for the original version (SAM1, SAM2, SAM3) and for the adapted
version (SAM1_A, SAM2_A, SAM3_A). For SAM1, the overall average score across all e-services was
1.02 (SD = 2.53). The highest value was obtained by DON (2.1, SD = 1.59), while OSA and SGS showed
the lowest scores (0.6, SD = 2.75 for OSA; 0.6, SD = 2.71 for SGS). In the adapted versions, all e-services
reported higher average values for SAM1_A, with an overall mean of 2.77 (SD = 1.94). The OSA e-service
stood out with the highest increase, reaching 3.2 (SD = 1.68), which represents a 2.6 point improvement.
Even in the case of DON, which initially had the highest score, an increase to 3.1 (SD = 1.66) was
observed. Regarding SAM2, the overall average score was 1.3 (SD = 2.4), with DON again obtaining
the highest mean (2.2, SD = 1.47) and DNI the lowest (0.7, SD = 2.86). The adapted versions improved
considerably, with an overall average score of 2.95 (SD = 1.69). DON saw the smallest increase (from 2.2
to 3.3, SD = 1.33), while OSA, SGS, and DNI showed more substantial improvements: 2.0, 2.2, and 1.3
points respectively. For SAM3, the overall average score was 1.45 (SD = 2.65). DON once again reported
the highest score (2.7, SD = 1.76), and SGS the lowest (0.7, SD = 2.83). In the adapted interfaces, the
overall mean for SAM3_A rose to 3.3 (SD = 1.47). SGS improved by 2.3 points (to 3.0, SD = 1.88), and
OSA experienced the greatest increase of all e-services (from 1.4 to 3.8). DON, which already scored
relatively high, still showed a notable improvement to 3.9 (SD = 0.31), representing a 1.2 point increase.
Statistical analysis using the Wilcoxon signed-rank test confirmed that the differences between the
original and adapted interfaces were significant for all SAM dimensions (V = 338, p-value &lt; 0.01 for
SAM1 and SAM1_A; V = 439.5, p-value &lt; 0.01 for SAM2 and SAM2_A; V = 370, p-value &lt; 0.01 for SAM3
and SAM3_A).</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Acceptance of adapted versions</title>
        <p>At the end of the experiment, participants were interviewed regarding their acceptance of the adaptation
system. Specifically, two questions were asked: QUE1, concerning their preference between the original
and the adapted versions of the e-services, and QUE2, regarding their willingness to use the adapted
versions on a daily basis. All participants expressed a preference for the adapted versions and stated
they would use them in their daily life, except for participant P6. This participant was reluctant to trust
the adapted versions, expressing concern about potentially missing information due to the simplified
nature of the interfaces and the fact that they were not the official ones. As a result, she was uncertain
about her preference and her willingness to use the adapted versions in her daily routine. Several
participants highlighted positive aspects of the adapted versions during the interview. These are some
of the comments they shared: P1: “I liked the contrast and the bigger format of text. I did not need
to use the magnifier with the adapted versions.” P4: “The adapted versions are simplified containing
only the main content and avoiding distractors. I liked them and I was more efficient.” P7: “I liked the
step by step format of the adapted version of DNI to select the location.” P8: “I do not like long forms
and prefer the reduced step by step forms of the adapted versions.” P9: “I like the colors, contrast and
simplified interfaces.” P10: “The adapted version is better. I appreciate that it’s simpler.”</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>All the evaluated e-services are legally accessible according to WCAG guidelines. Although participants
were able to complete the tasks using the original versions, the ASQ questionnaire results showed a clear
preference for the adapted versions across all aspects. Even the DON e-service, which already obtained
the highest ratings among the original versions, saw an improvement in average scores in terms of ease
of use, perceived efficiency, and quality of feedback. From an emotional perspective, participants also
responded more positively to the adapted interfaces. According to the SAM questionnaire results, these
versions elicited higher levels of pleasure, motivation, and a greater sense of control. The adaptation
system was well accepted. Most participants preferred the adapted interfaces and appreciated the
applied modifications, such as simplified layouts, step-by-step data entry, improved contrast, and larger
font sizes. Several participants explicitly stated they would like to use the adapted versions in their
daily interactions with public e-services. Notably, participant P1 mentioned that she did not need to use
a screen magnifier when using the adapted interfaces. The adaptations focused primarily on enhancing
the usability of interactive form components. However, some official elements, such as institutional
logos, were not retained. Based on participant P6’s concerns regarding trust and perceived legitimacy,
future versions could consider including visual cues or annotations to reinforce the official appearance
of the adapted interfaces. These findings are consistent with previous research that emphasizes the
challenges users with low vision encounter when interacting with web forms. For instance, studies
such as [6] and [24] have highlighted the limitations of standard accessibility approaches and the need
for interface simplification and enhanced visual contrast. The improvements observed in participant
satisfaction and emotional response in the present study align with earlier work that advocates for
adaptive and personalized interfaces [21, 23]. Furthermore, the positive reception of the step-by-step
structure and high-contrast designs reflects principles recommended by national guidelines, such as
ELMER [17] and the UK Government Design System [16]. Additionally, the e-services that applied
the adaptation system achieved a consistent structure and design across interfaces, in line with the
standardized templates described in [18], which provide uniform layouts for home pages, identification
pages, and web forms to facilitate user navigation. Therefore, this study not only confirms previous
knowledge but also extends it by demonstrating the effectiveness of real-time adaptation systems in
public e-services.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Limitations of the study</title>
      <p>Despite the promising findings, this study presents several limitations. First, the number of e-services
analyzed was limited to only four, all focused on appointment-scheduling tasks within Spanish public
administration platforms. However, the studied e-services included a variety of interaction components
that may be representative of a broader range of public e-services. Second, the study involved a
relatively small group of participants—ten individuals with low vision. While participants reported
similar experiences and perceptions, the small sample size limits the statistical power and generalizability
of the findings.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusions and future work</title>
      <p>This study analysed the role of dynamically adapted interfaces in enhancing the accessibility and usability of web
forms within public e-services for individuals with low vision. The experimental evaluation, conducted
with ten participants, revealed that the adapted versions were not only well received but also led to
a noticeable improvement in ease of use and overall user satisfaction. These findings underscore the
potential of real-time, personalized adaptations to mitigate interaction challenges and promote digital
inclusivity in public e-service platforms. Despite these promising results, further research is needed
to verify the scalability and long-term benefits of such adaptive systems across a broader spectrum of
public digital services and user groups. Future studies should consider larger sample sizes and include
users with diverse accessibility needs to develop a more comprehensive understanding of adaptive
interventions. Moreover, exploring objective performance metrics could provide deeper insights into
user experience and system efficiency.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>The authors thank all the individuals who participated in this study as well as the support given by
Begiris and Begisare associations. This work was funded by the Department of Economic Development
and Competitiveness of the Basque Government (ADIAN, IT1437-22).</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4 to check grammar and spelling and
to improve the writing style. After using this tool, the authors reviewed and edited the content as needed
and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] United Nations E-Government Survey 2024: Accelerating Digital Transformation for Sustainable Development, with the addendum on artificial intelligence, 2024. URL: https://digitallibrary.un.org/record/4061269.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Directive (EU) 2016/2102 of the European Parliament and of the Council of 26 October 2016 on the accessibility of the websites and mobile applications of public sector bodies, 2016.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services, 2019. URL: https://eur-lex.europa.eu/eli/dir/2019/882/oj.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] A. Money, L. Lines, S. Fernando, A. Elliman, E-government online forms: design guidelines for older adults in Europe, Universal Access in the Information Society 10 (2010) 1–16. doi:10.1007/s10209-010-0191-y.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] A. Sala, M. Arrue, J. Pérez, S. Espín-Tello, Towards less complex e-government services for people with low vision, ACM SIGACCESS Accessibility and Computing 128 (2020) 1–10. doi:10.1145/3441497.3441499.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] J. Lewis, A psychometric evaluation of an after-scenario questionnaire for computer usability studies: The ASQ, ACM SIGCHI Bulletin 23 (1991) 78–81. doi:10.1145/122672.122692.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] M. Bradley, P. Lang, Measuring emotion: The self-assessment manikin and the semantic differential, Journal of Behavior Therapy and Experimental Psychiatry 25 (1994) 49–59. doi:10.1016/0005-7916(94)90063-9.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>