<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Validation of the Digital Teaching Competence Questionnaire (COMDID-A) in the Mexican context</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oscar Daniel Gómez-Cruz</string-name>
          <email>oscargomez@uninnova.mx</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>María Concepción Villatoro-Cruz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ricardo Miguel Maldonado-Domínguez</string-name>
          <email>ricardommd.rmd@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eliana Gallardo-Echenique</string-name>
          <email>eliana.gallardo@upc.edu.pe</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Tecnológico Nacional de México/Instituto Tecnológico de Minatitlán</institution>
          ,
          <addr-line>Minatitlán, Veracruz, 96848</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universidad Autónoma de Chiapas</institution>
          ,
          <addr-line>Tuxtla Gutiérrez, Chiapas, 29050</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universidad Peruana de Ciencias Aplicadas</institution>
          ,
          <addr-line>Lima 15023</addr-line>
          ,
          <country country="PE">Perú</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Universidad del País Uninnova</institution>
          ,
          <addr-line>Tuxtla Gutiérrez, Chiapas, 29060</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
<p>We present a validation of the Digital Teaching Competence Questionnaire (COMDID-A), used to measure the level of digital competencies of teachers at the Autonomous University of Chiapas. A documentary review of the existing instruments for measuring teachers' digital competencies was conducted. The COMDID-A instrument was selected for its focus on the evaluation of teachers' digital competencies in the university context, as well as its adaptability to different cultural and linguistic contexts. Subsequently, a thorough analysis of the instrument was conducted, based on theoretical references and the experience of experts in the area of education. Adjustments and modifications were made to the original instrument to adapt it to the Mexican context and improve its relevance and reliability. The results obtained indicate that the COMDID-A instrument is dependable and relevant for use in the Mexican context. Quantitative and qualitative analyses show that the instrument is capable of effectively measuring digital competencies.</p>
      </abstract>
      <kwd-group>
        <kwd>Cross-cultural adaptation</kwd>
        <kwd>expert judgment</kwd>
        <kwd>content validity</kwd>
<kwd>digital competence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        After the outbreak of the COVID-19 pandemic, the prevailing need to train teachers in digital skills
became evident [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Although the pandemic took many higher education institutions by surprise,
the incorporation of digital technologies into classrooms was already underway, although
insufficiently [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The emerging use of technologies to mitigate the effects of the global shutdown
revealed that, in many cases, the assessments conducted were not adequate to determine the
level of intervention required and the areas where the need for training is most critical [
        <xref ref-type="bibr" rid="ref3">3</xref>
]. This scenario has resulted in numerous training programs that do not meet the specific demands related to the incorporation of technologies in students' education [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In addition, the lack of a strategic
focus on teacher training has produced an unequal adoption of digital tools, which in turn affects
the quality of education [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Therefore, it is key not only to identify areas for improvement, but
also to develop a teacher training model that is comprehensive, flexible and adapted to the
specific context of each educational institution [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        In this sense, training in digital skills must go beyond mere technical instruction; it must
incorporate pedagogical elements that allow teachers to effectively apply technologies in their
educational practice [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This holistic approach will not only improve the quality of teaching, but
will also contribute to a more inclusive and equitable education, preparing students for the
challenges of the 21st century [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To realize this comprehensive approach in teacher training, it
is key to have assessment tools that reflect these complexities [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In this regard, there are
different proposals to evaluate the level of digital competence, as well as various reference
models or performance standards [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15">10–15</xref>
        ] adopted by some Latin American countries.
      </p>
      <p>
        In this context, we have chosen to use the Digital Teaching Competence Questionnaire
(COMDID-A) prepared by Lázaro-Cantabrana et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] to evaluate the self-perception of Spanish
teachers [
        <xref ref-type="bibr" rid="ref17">17</xref>
]. This selection was based on its multidimensional approach, which encompasses four key dimensions of teaching [18]. It encourages teacher self-reflection and autonomy and provides instant feedback, in contrast with other instruments [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15">10–15</xref>
        ]. In addition, it has been
adapted in other countries, including Chile [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This provides an added value for the Latin
American context [19]; however, its specific adaptability to the Mexican environment had not yet been evaluated. The purpose of this study is to evaluate digital teaching competencies and identify the current level of the teachers at the Autonomous University of Chiapas (UNACH). Therefore, the instrument was subjected to various tests, such as content, concurrent, and construct validity, as well as idiomatic and linguistic validity. By submitting COMDID-A to this process, we intend to provide it with greater solidity and validity, so that it is a suitable tool to evaluate the digital competencies of UNACH teachers [20,21].
      </p>
      <sec id="sec-1-1">
        <title>1.1. Digital competences and COMDID-A</title>
<p>Most authors define digital competencies as a combination of knowledge and skills that enable individuals to employ digital technologies effectively and ethically. In the academic field, these competencies are key for teachers to efficiently incorporate digital technologies into their pedagogy, thereby contributing to raising the educational level [22]. To do this, it is necessary to promote favorable attitudes toward the use of digital technologies and the ability to adapt to constantly changing technological innovations.</p>
        <p>
In order to assess the breadth and depth of digital competencies, multiple approaches to measurement have emerged [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15">10–15</xref>
          ]. Notably, self-perception stands out as an essential
instrument, since it allows teachers to consciously identify their own level of competence in this
field [
          <xref ref-type="bibr" rid="ref11">11,23,24</xref>
]. For this case study, responding to the need to evaluate the self-perception of digital teaching competence (CDD, for its Spanish acronym) of teachers of the Universidad Autónoma de Chiapas, COMDID-A becomes a reference [24]. The instrument is organized around four dimensions: D1. Didactic, curricular, and methodological approach (6 items); D2. Planning, organization, and administration of digital technology resources and spaces (5 items); D3. Relations, ethics, and security (5 items); D4. Personal and professional development (6 items). Altogether, the questionnaire includes 22 items that use a five-point Likert scale to determine different degrees of CDD (non-initiated, beginner, intermediate, expert, and transformer) [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
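<p>As a minimal illustration, the questionnaire layout just described can be sketched in code; the dimension names and item counts come from the text, while the variable names are our own:</p>

```python
# Sketch of the COMDID-A structure (item counts per dimension as
# described in the text; item wordings are not reproduced here).
COMDID_A = {
    "D1. Didactic, curricular and methodological approach": 6,
    "D2. Planning, organization and administration of digital "
    "technology resources and spaces": 5,
    "D3. Relations, ethics and security": 5,
    "D4. Personal and professional development": 6,
}
# Five-point Likert scale of CDD development levels.
CDD_LEVELS = ["non-initiated", "beginner", "intermediate", "expert", "transformer"]

print(sum(COMDID_A.values()), len(CDD_LEVELS))  # 22 items, 5 levels
```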
        <p>
This questionnaire has been applied in multiple contexts, and several studies and publications address its implementation and validation [
          <xref ref-type="bibr" rid="ref16 ref17">16,17,25</xref>
], specifically to validate its factorial structure and construct validity. Palau et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
] conducted a principal component analysis to simplify the dataset and identify the four dimensions; however, this was intended for the European context. For the Latin American environment, there is a study in Chile that adapted and applied the instrument through focus groups [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. To address the need for a contextualized
evaluation of CDD and to have versions matching the language and sociocultural characteristics
of the Mexican population, this study focuses on the process of cross-cultural adaptation of the
COMDID-A instrument, developed in Spain.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>
        This study was conducted at Universidad Autónoma de Chiapas (UNACH) as a result of the need
to evaluate the CDD level, to develop a model of professional training in the institution. Initially,
a documentary review was conducted to identify the most appropriate instruments to measure
these competencies. Within this framework, COMDID-A was chosen as the evaluative instrument; it was developed by specialists in the field of educational technology at the Universitat Rovira i Virgili in Tarragona, Spain [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This instrument stands out for its ability to
measure teachers’ self-perception in four key areas through 22 descriptors and four levels of
development, and is especially applicable for self-assessments in academic contexts [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The
goal is to evaluate digital skills to determine the current level among UNACH teachers.
      </p>
      <p>[Figure 1. Validation stages. Stage 1, cross-cultural adaptation: opinion of three experts in the area; identification of confusing questions; proposal of linguistic and idiomatic adaptations; qualitative review of assessments; organization and classification according to COMDID; integration of results in a visual format (cross table); terminology consistency and accuracy. Stage 2, content validity: implementation of the practical guide adapted from Escobar-Pérez and Cuervo-Martínez; addition of an observations section; integration in a matrix; summary for each dimension and screening for item adaptation; digitization of results and website for distribution of the instrument; selection of 16 experts and instrument implementation; assessment of judge consistency; data collection; quantitative validity using Fleiss' Kappa; identification of COMDID items for change.]</p>
      <p>
        The choice of this instrument was based on its multidimensional approach, which covers four
key aspects of teaching [26]. In addition, its adaptation in other countries, including Chile [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ],
adds value for its use in Latin American contexts [19]. However, its adaptability to the Mexican environment had not yet been evaluated. Since the COMDID-A instrument originated in a European context, its adaptation to the Mexican environment was imperative, both culturally and linguistically. To achieve this, the study was divided into two interconnected validation phases (see Figure 1), taking the COMDID-A rubric proposed by Lázaro-Cantabrana et al. (2018) as the starting point. The first phase focused on cross-cultural adaptation (linguistic, cultural, and idiomatic) and involved the participation of three experts in the educational and linguistic fields. The second consisted of content validity through the judgment of 16 experts, following the proposal of Escobar-Pérez and Cuervo-Martínez [27]. This process made it possible to identify the dimensions that required contextual adaptation, and the results were verified with the Fleiss Kappa coefficient.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Phase 1 Cross-cultural adaptation</title>
        <p>Ensuring the validity of an instrument is a constant concern among researchers. Over time,
validity has been interpreted in various ways and from different fields of study [28,29]. However,
it remains crucial for the choice and use of an instrument. Specifically, the question is whether
the instrument really evaluates what it intends to measure [30]. When an instrument is to be used in different cultural contexts, adequate linguistic and cultural equivalence must be achieved. There are theoretical proposals that serve as a guide and highlight the importance of the transition of an instrument from one culture to another [30–32]. This transition is called transcultural adaptation of the instrument [33]. While in Europe and in English-speaking countries this process is widely valued, in Latin America it is sometimes not given the necessary importance [30]. The lack of adequate procedures for translating and adapting the instruments
has led to some research being considered invalid. For COMDID-A to adapt properly to the
Mexican context, it was essential to have the opinion and experience of experts in the linguistic
and educational field [34]. It was decided to consult three recognized UNACH teachers with wide
experience in areas relevant to the study. Table 1 shows a brief description of each expert’s
experience and specialty:</p>
        <table-wrap id="table1">
          <label>Table 1</label>
          <caption>
            <p>Profile of the experts consulted for the cross-cultural adaptation.</p>
          </caption>
          <table>
            <thead>
              <tr>
                <th>Degree and institution</th>
                <th>Experience</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Ph.D. in Philosophy and Educational Sciences, Universidad Complutense de Madrid</td>
                <td>Professor of the Faculty of Humanities, Campus VI, Universidad Autónoma de Chiapas. Expert teacher in psychopedagogy and the development of qualitative instruments.</td>
              </tr>
              <tr>
                <td>Ph.D. in Contemporary Philosophy, Benemérita Universidad Autónoma de Puebla (BUAP), México</td>
                <td>Professor of the Faculty of Humanities, Campus VI, Universidad Autónoma de Chiapas. 16 years in teaching; expert in the phenomenology of education.</td>
              </tr>
              <tr>
                <td>Ph.D. in Education, Campus Tuxtla, Universidad Autónoma de Chiapas</td>
                <td>Faculty member of the School of Language. 32 years in teaching; expert in linguistics and languages.</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p>An e-mail was sent to each of the specialists with a formal invitation to participate. The COMDID-A instrument was attached to this e-mail, requesting its analysis and possible proposals for modification. The guidelines provided required them to focus on the linguistic and idiomatic aspects based on two criteria: the first focused on identifying questions that could generate confusion in the Mexican context, and the second on proposing adjustments for them. After receiving their assessments, a meticulous review was conducted to discern which modifications to incorporate and how to do so, ensuring the relevance and clarity of the instrument in Mexico. The contribution of these experts not only strengthens the linguistic validity of COMDID-A, but also guarantees its cultural adaptation and specificity in the Mexican educational context. The reflections of each expert were integrated into a cross table. It is important to mention that, although observations were scarce, they were essential. For example, the experts coincided in flagging the same word, which could produce misinterpretations in the context where the instrument would be applied.</p>
        <p>The proposal of Beaton et al. [35] was taken as a reference; they established phases to ensure that the adapted questionnaire is conceptually, idiomatically, semantically, and operationally equivalent to the original. Having a Spanish version helped to ensure that the translation process did not pose major problems; however, there were still discrepancies in the language of the culture itself, so the contribution of the experts helped to compare the versions and to identify and resolve those discrepancies. Consistency and precision were sought in the terminology used, with respect to the original content of COMDID-A [35,36].</p>
        <p>To ensure fidelity in both language and concepts, a stage of adaptation to Mexican Spanish was implemented through the judgment of experts who did not have prior knowledge of the instrument in its original version. This made it possible to detect and correct possible deviations in interpretation or conceptual meaning in the consolidated version of the instrument. The participation of these experts not only reinforces the linguistic validity of COMDID-A, but also ensures its adaptation to the culture and particularities of the educational field in Mexico.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Phase 2: Content Validity</title>
        <p>Every instrument must go through a validation process to ensure that it is valid and reliable and that it accurately measures what it is intended to measure [37,38]. A validated instrument allows the generalization of findings [37,38] and improves the quality of the study: it not only increases the study's credibility, but also facilitates efficient data collection and lays the foundation for future studies [39]. To carry out the second phase of the investigation, we adapted the proposal of Escobar-Pérez and Cuervo-Martínez [27], who establish a method for expert judgment and, through a practical guide, carry out a process that includes:
1. Prepare instructions and spreadsheets
2. Select the experts and train them
3. Explain the context
4. Enable discussion
5. Establish agreement between experts by calculating consistency.</p>
        <p>These steps are recommended by several authors [40,41] and are considered essential for conducting an expert judgment effectively. A space for general observations was added to this guide; this space ensured that the instrument was applicable to the Mexican context, enriching what was already established in Phase 1. Subsequently, the instrument was digitized using LimeSurvey software, which facilitated its distribution and data collection. This process ended with the creation of the www.competenciadigital.com.mx website, which served as a platform for managing the tool and the database.</p>
        <p>The next step was the thorough selection of expert judges, an essential component to ensure
the validity and reliability of the study. Following the best practices in the selection of judges
methodologically, it was decided to invite specialists with a solid academic background and a vast
experience in the field of educational technology [27,40,42,43]. In total, 16 experts from various
states of the Mexican Republic were added (see Table 2). The inclusion of experts from different
geographical areas and universities allowed for a more complete evaluation, addressing various
aspects of the subject in question. This selection strategy was based on rigorous methodologies
previously established by various authors [27,40,42,43], thus ensuring that the process was
aligned with high quality academic standards. This holistic approach not only strengthened the
validity of the study, but also set a precedent for future research in the field. Each expert was
contacted by email, being mostly members of the Inter-Institutional Committee for the Evaluation
of Higher Education (CIEES) with a specialty in Educational Technology. In the mail, a link to the
digitized instrument was provided, accompanied by a detailed protocol, the theory supporting
the instrument, a specific timeframe to complete the task, as well as clear definitions of the
evaluation criteria.</p>
        <p>After the end of the evaluation period, which lasted five months from the first contact, the data obtained were thoroughly collected. To guarantee the quantitative validity of the COMDID-A instrument, a mathematical analysis of the collected data was implemented; in this framework, the Fleiss Kappa coefficient emerged as a crucial statistical indicator. Fleiss Kappa is a metric that evaluates the degree of agreement between multiple judges or evaluators [44–46]. It is used specifically to measure the consistency of the classifications awarded by different judges to the same subjects [47,48]. A high value of the Fleiss Kappa coefficient indicates higher concordance among the judges, which, in turn, reinforces the reliability of the instrument in question [46].</p>
        <p>To carry out this comprehensive quantitative analysis, a team of mathematics experts was formed. This team was led by an academic recognized at level II in the National System of Researchers (SNI), together with an advanced student of the Bachelor's degree in Mathematics. The analysis focused on the evaluation of four key dimensions: Didactics, Curriculum and Methodology; Planning, Organization and Management of Digital Technological Spaces and Resources; Relational, Ethics and Security; and Personal and Professional. Each dimension was examined under four aspects: clarity, sufficiency, coherence, and relevance. Each dimension-aspect pair included between 5 and 7 questions, whose answers could be: High level, Moderate level, Low level, or Does not meet the criteria.</p>
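<p>The organization of the judges' ratings described above can be sketched as follows; this is a hypothetical illustration (the function and variable names are ours, not the authors' actual tooling):</p>

```python
# Hypothetical tally structure: one matrix per dimension-aspect pair
# (4 dimensions x 4 aspects = 16 matrices); each cell counts how many
# judges assigned a given rating level to a given item.
DIMENSIONS = [
    "Didactics, Curriculum and Methodology",
    "Planning, Organization and Management of Digital Technological Spaces and Resources",
    "Relational, Ethics and Security",
    "Personal and Professional",
]
ASPECTS = ["Clarity", "Sufficiency", "Coherence", "Relevance"]
LEVELS = ["High level", "Moderate level", "Low level", "Does not meet the criteria"]

def empty_matrix(n_items, n_levels=len(LEVELS)):
    return [[0] * n_levels for _ in range(n_items)]

# Each pair has 5-7 items; 6 is used here only as a placeholder.
matrices = {(d, a): empty_matrix(6) for d in DIMENSIONS for a in ASPECTS}

def record_rating(matrices, dim, aspect, item, level):
    """Tally one judge's rating of one item."""
    matrices[(dim, aspect)][item][LEVELS.index(level)] += 1

record_rating(matrices, DIMENSIONS[0], "Clarity", 0, "High level")
print(len(matrices))  # 16 dimension-aspect matrices
```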
        <p>This evaluation process gave rise to 16 matrices containing all the evaluations made by the experts. The results obtained for the first matrix are presented below.</p>
        <p>The objective was to assess the level of concordance in the evaluations of the 16 judges. For this purpose, two statistical coefficients were considered: Kendall's W coefficient and the Fleiss Kappa coefficient. Since Kendall's W was designed for ordinal trials, its adjustment for repeated trials was used; however, the results were mostly non-significant, which led to its dismissal.</p>
        <p>The Fleiss Kappa coefficient was identified as the most suitable option for this study, especially
since the collected valuations are presented in nominal form. Two hypotheses were formulated:
The null hypothesis ($H_0$), which holds that there is no significant real agreement beyond
chance, and the alternative hypothesis ($H_1$), which states that the observed agreement is
statistically significant. The p-value was used to evaluate the evidence against the null hypothesis
and to determine the statistical significance of the agreement observed among the judges. In this
way, a summary matrix was constructed using the calculation of the Fleiss Kappa for each
dimension-aspect pair, providing an integral view of the results obtained.</p>
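<p>The Fleiss Kappa statistic itself can be computed from a subjects-by-categories count matrix, as in the following sketch; the ratings below are invented for illustration, and only the formula is standard:</p>

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix, where
    counts[i, j] is the number of raters assigning subject i to
    category j; every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()                         # raters per subject
    p_j = counts.sum(axis=0) / (N * n)          # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Invented example: 5 items rated by 16 judges into the four levels
# (High, Moderate, Low, Does not meet the criteria).
ratings = [
    [10, 4, 2, 0],
    [ 8, 6, 2, 0],
    [12, 3, 1, 0],
    [ 6, 7, 2, 1],
    [ 9, 5, 2, 0],
]
print(round(fleiss_kappa(ratings), 3))
```

<p>The p-value for $H_0$ is then obtained from the large-sample variance of the statistic, omitted here for brevity.</p>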
        <p>It is important to note that, with one exception, the results obtained are statistically significant, since the corresponding p-values do not exceed the significance level established at 0.05. To better understand the level of agreement between the judges, we resort to the interpretation of the Fleiss Kappa proposed by Altman (1991). Noting that the median of the $K$ values is 0.352, Altman's scale suggests that this reflects a generally weak level of agreement among judges; in other words, there is a certain discrepancy in the evaluations conducted by the different judges. To illustrate the level of agreement, we created Table 5, which we have called the "Frequency Table of Agreement Levels according to the Fleiss Kappa":</p>
        <p>This table allows for a quick and effective visualization of how agreement levels are
distributed among judges. For example, it can be observed that most judges (9 out of 16) have a
"Weak" level of agreement, while only one reaches a "Good" level of agreement. This highlights
the need to review and possibly adjust the evaluation tool to improve consistency among judges.
Finally, after this rigorous process of evaluation and analysis, the final measurements were
conducted, and the corresponding results were obtained. These results will serve as a basis for
future research and methodological adjustments.</p>
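<p>The mapping from kappa values to the agreement levels in Table 5 can be sketched as follows; the 16 kappa values are hypothetical, chosen only to mirror the frequency distribution described in the text, and the thresholds follow Altman's (1991) benchmarks (with "Weak" used, as in the text, for Altman's "Fair" band):</p>

```python
from collections import Counter

def altman_level(kappa):
    """Classify a Fleiss' kappa value using Altman's (1991) benchmarks."""
    if kappa < 0.20:
        return "Poor"
    if kappa <= 0.40:
        return "Weak"      # Altman's "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Good"
    return "Very good"

# Hypothetical kappa values for the 16 dimension-aspect pairs.
kappas = [0.15, 0.25, 0.30, 0.33, 0.35, 0.35, 0.36, 0.38,
          0.39, 0.40, 0.45, 0.50, 0.52, 0.55, 0.58, 0.65]
print(Counter(altman_level(k) for k in kappas))
```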
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and Discussion</title>
      <p>The research was able to adapt and validate the COMDID-A instrument for its application in Mexico. The results show that the instrument is dependable and relevant for evaluating the digital competencies of teachers at the Universidad Autónoma de Chiapas. The method proposed by Escobar-Pérez and Cuervo-Martínez [27] was used to validate and adapt COMDID-A, and a comment section was added to collect specific observations from the experts. This approach made it possible to obtain qualitative data that enriched the qualitative phase of the research. The comments of the experts were classified according to COMDID-A items and dimensions, which helped to identify patterns and relevant coincidences in specific items.</p>
      <table-wrap id="table4">
        <label>Table 4</label>
        <caption>
          <p>Examples of COMDID-A items adapted for the Mexican context.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Original item</th>
              <th>Adapted item</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1.2 Processing of information and creation of knowledge</td>
              <td>1.3 Management, analysis of information and creation of knowledge</td>
            </tr>
            <tr>
              <td>Use digital technologies to increase motivation and facilitate learning for students with NEE</td>
              <td>Use digital technologies to increase motivation and facilitate learning for students with NEI</td>
            </tr>
            <tr>
              <td>Design EA activities which involve the use of digital technologies</td>
              <td>Design teaching-learning (EA) activities involving the use of digital technologies</td>
            </tr>
            <tr>
              <td>Serve as a model for other professionals on the responsible and safe use of digital technologies</td>
              <td>Be a reference for other professionals on the responsible and safe use of digital technologies</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Categories such as "Meaning" and "Structure" were established to organize the data. Within "Structure," subcategories such as "Abbreviation" and "Grammar" were created. Items that required changes were moved to "Change Formats," placing them in the corresponding categories and subcategories. This meticulous process made it possible to build a visual map of the items to be modified. A relevant change was the adaptation of the term "Special Educational Needs (NEE, for the Spanish acronym)" to "Inclusive Education Needs (NEI, for the Spanish acronym)" for the Mexican context (see Table 4). This detailed approach ensured that the COMDID-A instrument was well-founded and adapted to the Mexican context, ensuring its validity and reliability. A concordance was observed between the qualitative responses and the dimensions evaluated. In addition, tables were developed to improve the understanding of the changes made, highlighting the adaptation of terms and parameters to measure teaching strategies.</p>
      <p>The qualitative results meet the objective of adapting and validating the instrument, since final actions were determined for item changes based on the qualitative and quantitative analyses (Fleiss Kappa). The next task is to update the instrument for its application in Mexico. See Figure 2 for a visual overview of the instrument items proposed for change.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>
        The research focused on evaluating the digital competencies of teachers at Universidad
Autónoma de Chiapas. It began with a documentary review to identify appropriate instruments
to measure these competencies. The COMDID-A instrument developed by Lázaro-Cantabrana et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
] in Spain was considered the most appropriate; therefore, a cultural and linguistic adaptation was required before it could be implemented in Mexico. To validate the instrument in the Mexican context, two types of validation were conducted: idiomatic and linguistic, and quantitative through expert judgment. In the idiomatic validation, three experts were consulted to adapt the instrument to the linguistic conditions of Mexico. For content validity, 16 experts in educational technology were consulted, and statistical methods such as Kendall's W coefficient and the Fleiss Kappa coefficient were applied.
      </p>
      <p>It is essential to understand that the validation of an instrument is not an isolated process but
must consider its applicability in a specific context. This thorough approach ensures that the
instrument is both applicable and dependable in the Mexican context. The research not only seeks
to evaluate the digital competencies of teachers, but also to contribute to the body of knowledge
in the field of educational technology and teacher training in Mexico. This Mexican version of
COMDID-A can be considered equivalent to the original; it is linguistically, semantically, and
culturally adapted to the Mexican context. The authors are aware that validation is an ongoing process that requires testing other types of validity to further strengthen the instrument.</p>
      <p>This research lays the foundations for future studies and for the implementation of teacher training strategies in digital competencies, aligned with the needs and context of UNACH and potentially applicable in other educational institutions in the country. In addition, the method improves efficiency in data collection and serves as a basis for future research, underscoring the importance of its application across fields of study.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>The authors thank the experts who participated voluntarily and anonymously in this study. This
study was partially funded by the Research Direction of the Universidad Peruana de Ciencias
Aplicadas (UPC).
</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[18] Lázaro Cantabrana JL, Gisbert Cervera M. Elaboración de una rúbrica para evaluar la
competencia digital del docente. Rev Ciències de l'Educació 2015:19.</p>
      <p>[19] Cisneros-Barahona AS, Marqués-Molias L, Samaniego-Erazo N, Mejía-Granizo CM. La
Competencia Digital Docente. Diseño y validación de una propuesta formativa. Pixel-Bit
2023;68:7–41.</p>
      <p>[20] Creswell JW. Research design: Qualitative, quantitative, and mixed methods approaches.
4th ed. Thousand Oaks, CA: SAGE Publications, Inc.; 2014.</p>
      <p>[21] Sireci SG. The construct of content validity. Soc Indic Res 1998;45:83–117.
https://doi.org/10.1023/a:1006985528729.</p>
      <p>[22] Paz Saavedra LE, Gisbert Cervera M, Usart Rodríguez M. Competencia digital docente,
actitud y uso de tecnologías digitales por parte de profesores universitarios. Pixel-Bit, Rev
Medios y Educ 2022:91–130. https://doi.org/10.12795/pixelbit.91652.</p>
      <p>[23] Janssen J, Stoyanov S. Online Consultation on Experts' Views on Digital Competence.
2012.</p>
      <p>[24] Lázaro Cantabrana JL, Gisbert Cervera M. Elaboració d'una rúbrica per avaluar la
competència digital del docent. Univ Tarraconensis Rev Ciències l'Educació 2015;1:48.
https://doi.org/10.17345/ute.2015.1.648.</p>
      <p>[25] Silva J, Usart M, Lázaro-Cantabrana J-L. Competencia digital docente en estudiantes de
último año de Pedagogía de Chile y Uruguay [Teacher's digital competence among final year Pedagogy
students in Chile and Uruguay]. Comunicar 2019;61:33–43.</p>
      <p>[26] Lázaro Cantabrana JL. La competència digital docent com a eina per garantir la
qualitat en l'ús de les TIC en un centre escolar. vol. 1. 2015.
https://doi.org/10.17345/ute.2015.1.667.</p>
      <p>[27] Escobar-Pérez J, Cuervo-Martínez Á. Validez de contenido y juicio de expertos: Una
aproximación a su utilización [Content validity and expert judgement: An approach to their use]. Av
En Medición 2008;6:27–36.</p>
      <p>[28] Aiken LR, Yang W, Soto M, Segovia L, Binomial P, Miller JM, et al. Diseño y validación
de un cuestionario para analizar la calidad en empleados de servicios deportivos públicos de las
mancomunidades de municipios extremeñas. Educ Psychol Meas 2011;7:181–92.
https://doi.org/10.1177/0013164412473825.</p>
      <p>[29] Cho J. Validity in qualitative research revisited. Qual Res 2006;6:319–40.
https://doi.org/10.1177/1468794106065006.</p>
      <p>[30] Gallardo-Echenique E, Marqués Molias L, Gomez Cruz OD, De Lira Cruz R. Cross-cultural
adaptation and validation of the "Student Communication &amp; Study Habits" questionnaire to the
Mexican context. Proc - 14th Lat Am Conf Learn Technol (LACLO) 2019:104–9.
https://doi.org/10.1109/LACLO49268.2019.00027.</p>
      <p>[31] Arribas A. Adaptación Transcultural de Instrumentos. Guía para el Proceso de
Validación de Instrumentos Tipo Encuestas. Rev Científica La Asoc Médica Bahía Blanca
2006;16:74–82.</p>
      <p>[32] Lira MT, Caballero E. Cross-Cultural Adaptation of Evaluation Instruments in Health:
History and Reflections of Why, How and When. Rev Medica Clin Las Condes 2020;31:85–94.
https://doi.org/10.1016/j.rmclc.2019.08.003.</p>
      <p>[33] International Test Commission (ITC). ITC Guidelines for Translating and Adapting
Tests. 2nd ed. [www.InTestCom.org]: ITC; 2016.</p>
      <p>[34] Cardoso Ribeiro C, Gómez-Conesa A, Hidalgo Montesinos MD. Metodología para la
adaptación de instrumentos de evaluación. Fisioterapia 2010;32:264–70.
https://doi.org/10.1016/j.ft.2010.05.001.</p>
      <p>[35] Beaton D, Bombardier C, Guillemin F, Ferraz MB. Recommendations for the Cross-Cultural
Adaptation of the DASH &amp; QuickDASH Outcome Measures. Toronto: 2007.</p>
      <p>[36] Beaton D, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of
cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976) 2000;25:3186–91.
https://doi.org/10.1097/00007632-200012150-00014.</p>
      <p>[37] Sireci SG. The Construct of Content Validity. Soc Indic Res 1998;45:83–117.
https://doi.org/10.1023/a:1006985528729.</p>
      <p>[38] Almanasreh E, Moles R, Chen TF. Evaluation of methods used for estimating content
validity. Res Soc Adm Pharm 2019;15:214–21. https://doi.org/10.1016/j.sapharm.2018.03.066.</p>
      <p>[39] Hambleton RK, Patsula L. Adapting tests for use in multiple languages and cultures.
Soc Indic Res 1998;45:153–71. https://doi.org/10.1023/A:1006941729637.</p>
      <p>[40] Skjong R, Wentworth BH. Expert Judgment and Risk Perception. Proc. Elev. Int. Offshore
Polar Eng. Conf., vol. IV, Stavanger, Norway: International Society of Offshore and Polar
Engineers; 2001, p. 537–44.</p>
      <p>[41] de Arquer MI. Fiabilidad Humana: métodos de cuantificación, juicio de expertos.
1995.</p>
      <p>[42] Cabero J, Llorente M del C. La aplicación del juicio de experto como técnica de
evaluación de las tecnologías de la información y comunicación (TIC) [The expert's judgment
application as a technique to evaluate information and communication technology (ICT)]. Eduweb Rev
Tecnol Inf y Comun En Educ 2013;7:11–22.</p>
      <p>[43] Urrutia Egaña M, Barrios Araya S, Gutiérrez Núñez M, Mayorga Camus M. Métodos óptimos
para determinar validez de contenido. Rev Cuba Educ Medica Super 2015;28:547–58.</p>
      <p>[44] Cerda Lorca J, Villarroel Del P. L. Evaluación de la concordancia inter-observador en
investigación pediátrica: Coeficiente de Kappa. Rev Chil Pediatr 2008;79:54–8.
https://doi.org/10.4067/s0370-41062008000100008.</p>
      <p>[45] Torres J, Perera V. Cálculo de la fiabilidad y concordancia entre codificadores de un
sistema de categorías para el estudio del foro online en e-learning. Rev Investig Educ
2009;27:89–103.</p>
      <p>[46] Falotico R, Quatto P. Fleiss' kappa statistic without paradoxes. Qual Quant
2015;49:463–70. https://doi.org/10.1007/s11135-014-0003-1.</p>
      <p>[47] López A, Galparsoro DU, Fernández P. Medidas de concordancia: el índice de Kappa. Cad
Aten Primaria 2001:2–6.</p>
      <p>[48] Gwet KL. Large-Sample Variance of Fleiss Generalized Kappa. Educ Psychol Meas
2021;81:781–90. https://doi.org/10.1177/0013164420973080.</p>
      <p>[49] Altman DG. Practical Statistics for Medical Research. 1991.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Molina Montalvo</surname>
            <given-names>HI</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macías Villareal</surname>
            <given-names>JC</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cepeda Hernández</surname>
            <given-names>AA</given-names>
          </string-name>
          .
          <article-title>Educación en tiempos de COVID-19: Una aproximación a la realidad en México: experiencias y aportaciones</article-title>
          .
          <source>Comunicaci</source>
          . Ciudad de México:
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Fernández</given-names>
            <surname>Escárzaga</surname>
          </string-name>
          <string-name>
            <given-names>J</given-names>
            ,
            <surname>Gabriela</surname>
          </string-name>
          <string-name>
            <given-names>J</given-names>
            ,
            <surname>Varela</surname>
          </string-name>
          <string-name>
            <given-names>D</given-names>
            ,
            <surname>Lorena</surname>
          </string-name>
          <string-name>
            <given-names>P</given-names>
            ,
            <surname>Martínez</surname>
          </string-name>
          <string-name>
            <surname>M</surname>
          </string-name>
          .
          <article-title>De la educación presencial a la educación a distancia en época de pandemia por Covid 19</article-title>
          . Experiencias de los docentes.
          <source>Rev Electrónica Sobre Cuerpos Académicos y Grup Investig</source>
          <year>2020</year>
          ;
          <volume>7</volume>
          :
          <fpage>87</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Cruz-Aguayo</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hincapé</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodríguez</surname>
            <given-names>C</given-names>
          </string-name>
          .
          <article-title>Profesores a prueba: Claves para una evaluación docente exitosa</article-title>
          .
          <year>2020</year>
          . https://doi.org/10.18235/0002149.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Guerrero</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalman</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>La inserción de la tecnología en el aula: Estabilidad y procesos instituyentes en la práctica docente</article-title>
          .
          <source>Rev Bras Educ</source>
          <year>2010</year>
          ;
          <volume>15</volume>
          :
          <fpage>213</fpage>
          -
          <lpage>29</lpage>
          . https://doi.org/10.1590/S1413-24782010000200002.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Mendoza</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellodas</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ortiz</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Puelles</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asnate</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zambrano</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Desafíos interdisciplinarios para los docentes en el aprendizaje virtual</article-title>
          .
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Balladares-Burgos</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valverde-Berrocoso</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>El modelo tecnopedagógico TPACK y su incidencia en la formación docente: una revisión de la literatura</article-title>
          .
          <source>RECIE Rev Caribeña Investig Educ</source>
          <year>2022</year>
          ;
          <volume>6</volume>
          :
          <fpage>63</fpage>
          -
          <lpage>72</lpage>
          . https://doi.org/10.32541/recie.2022.v6i1.pp63-72.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Díaz Chamorro</surname>
            <given-names>CM</given-names>
          </string-name>
          .
          <article-title>El Modelo Tpack Como Método Pedagógico Para El Desarrollo De Competencias Digitales En Los Docentes De La Unidad Educativa “Víctor Mideros”</article-title>
          .
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Esquerre Ramos</surname>
            <given-names>LA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pérez Azahuanche</surname>
            <given-names>MÁ</given-names>
          </string-name>
          .
          <article-title>Retos del desempeño docente en el siglo XXI: una visión del caso peruano</article-title>
          .
          <source>Rev Educ</source>
          <year>2021</year>
          ;
          <volume>45</volume>
          :
          <fpage>0</fpage>
          -
          <lpage>21</lpage>
          . https://doi.org/10.15517/revedu.v45i1.43846.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Castañeda</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Esteve</surname>
            <given-names>F</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adell</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Why rethinking teaching competence for the digital world?</article-title>
          <source>Rev Educ a Distancia</source>
          <year>2018</year>
          :
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          . https://doi.org/10.6018/red/56/6.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Agreda</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinojo</surname>
            <given-names>MA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sola</surname>
            <given-names>JM</given-names>
          </string-name>
          .
          <article-title>Diseño y validación de un instrumento de evaluación de competencia digital docente</article-title>
          .
          <source>Pixel-Bit, Rev Medios y Educ</source>
          <year>2016</year>
          :
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Ferrari</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>DIGCOMP: A Framework for Developing and Understanding Digital Competence in Europe</article-title>
          . Luxembourg:
          <year>2013</year>
          . https://doi.org/10.2788/52966.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Reixach</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrés</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ribes</surname>
            <given-names>JS</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gea-Sánchez</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>López</surname>
            <given-names>AÀ</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cruañas</surname>
            <given-names>B</given-names>
          </string-name>
          , et al.
          <article-title>Measuring the Digital Skills of Catalan Health Care Professionals as a Key Step Toward a Strategic Training Plan: Digital Competence Test Validation Study</article-title>
          .
          <source>J Med Internet Res</source>
          <year>2022</year>
          ;
          <volume>24</volume>
          . https://doi.org/10.2196/38347.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Restrepo-Palacio</surname>
            <given-names>S</given-names>
          </string-name>
          , de María Segovia Cifuentes Y.
          <article-title>Design and validation of an instrument for the evaluation of digital competence in Higher Education</article-title>
          .
          <source>Ensaio</source>
          <year>2020</year>
          ;
          <volume>28</volume>
          :
          <fpage>932</fpage>
          -
          <lpage>61</lpage>
          . https://doi.org/10.1590/S0104-40362020002801877.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Zempoalteca Durán</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barragán López</surname>
            <given-names>JF</given-names>
          </string-name>
          ,
          <string-name>
            <surname>González Martínez</surname>
            <given-names>J</given-names>
          </string-name>
          , Guzmán Flores T.
          <article-title>Teaching training in ICT and digital competences in Higher Education System</article-title>
          .
          <source>Apertura</source>
          <year>2017</year>
          ;
          <volume>9</volume>
          :
          <fpage>80</fpage>
          -
          <lpage>96</lpage>
          . https://doi.org/10.32870/ap.v9n1.922.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Zubieta</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bautista</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Quijano</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Aceptación de las TIC en la docencia: una tipología de los académicos de la UNAM</article-title>
          .
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Lázaro-Cantabrana</surname>
            <given-names>JL</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gisbert-Cervera</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Silva-Quiroz</surname>
            <given-names>JE</given-names>
          </string-name>
          .
          <article-title>Una rúbrica para evaluar la competencia digital del profesor universitario en el contexto latinoamericano</article-title>
          .
          <source>Edutec Rev Electrónica Tecnol Educ</source>
          <year>2018</year>
          :
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . https://doi.org/10.21556/edutec.2018.63.1091.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Palau</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Usart</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ucar Carnicero</surname>
            <given-names>MJ</given-names>
          </string-name>
          .
          <article-title>La competencia digital de los docentes de los conservatorios. Estudio de autopercepción en España</article-title>
          .
          <source>Rev Electron LEEME</source>
          <year>2019</year>
          :
          <fpage>24</fpage>
          -
          <lpage>41</lpage>
          . https://doi.org/10.7203/LEEME.44.15709.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>