=Paper=
{{Paper
|id=Vol-3691/paper8
|storemode=property
|title=Validation of the Digital Teaching Competence Questionnaire (COMDID-A) in the Mexican context
|pdfUrl=https://ceur-ws.org/Vol-3691/paper8.pdf
|volume=Vol-3691
|authors=Oscar Daniel Gómez-Cruz,María Concepción Villatoro-Cruz,Ricardo Miguel Maldonado-Domínguez,Eliana Gallardo-Echenique
|dblpUrl=https://dblp.org/rec/conf/cisetc/Gomez-CruzVMG23
}}
==Validation of the Digital Teaching Competence Questionnaire (COMDID-A) in the Mexican context==
Oscar Daniel Gómez-Cruz1, María Concepción Villatoro-Cruz2, Ricardo Miguel
Maldonado-Domínguez3 and Eliana Gallardo-Echenique4
1 Universidad del País Uninnova, Tuxtla Gutiérrez, Chiapas, 29060, México
2 Tecnológico Nacional de México/Instituto Tecnológico de Minatitlán, Minatitlán, Veracruz, 96848, México
3 Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, 29050, México
4 Universidad Peruana de Ciencias Aplicadas, Lima 15023, Perú.
CISETC 2023: International Congress on Education and Technology in Sciences 2023, December 04–06, 2023, Zacatecas, Mexico
oscargomez@uninnova.mx (O. D. Gómez-Cruz); maria.vc@minatitlan.tecnm.mx (M. C. Villatoro-Cruz); ricardommd.rmd@gmail.com (R. M. Maldonado-Domínguez); eliana.gallardo@upc.edu.pe (E. Gallardo-Echenique)
https://orcid.org/0000-0002-5991-1306 (O. D. Gómez-Cruz); https://orcid.org/0000-0002-8986-6219 (M. C. Villatoro-Cruz); https://orcid.org/0000-0002-8601-8425 (R. M. Maldonado-Domínguez); https://orcid.org/0000-0002-8524-8595 (E. Gallardo-Echenique)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
Abstract
We present a validation of the Digital Teaching Competence Questionnaire (COMDID-A), used to
measure the level of digital competencies of teachers at the Autonomous University of Chiapas. A
documentary review of the existing instruments to measure the digital competencies of teachers was
conducted. The COMDID-A instrument was selected due to its focus on the evaluation of teachers’ digital
competences in the university context, as well as its adaptability to different cultural and linguistic
contexts. Subsequently, a thorough analysis of the instrument was conducted, based on theoretical
references and the experience of experts in the area of education. Adjustments and modifications were
made to the original instrument to adapt it to the Mexican context and improve its relevance and
reliability. The results obtained indicate that the COMDID-A instrument is dependable and relevant for
its use in the Mexican context. Quantitative and qualitative analyses show that the instrument
effectively measures digital competencies.
Keywords
Cross-cultural adaptation, expert judgment, content validity, digital competence
1. Introduction
After the outbreak of the COVID-19 pandemic, the prevailing need to train teachers in digital skills
became evident [1]. Although the pandemic took many higher education institutions by surprise,
the incorporation of digital technologies into classrooms was already underway, although
insufficiently [2]. The emerging use of technologies to mitigate the effects of the global shutdown
revealed that, in many cases, the assessments conducted were not adequate to determine the
level of intervention required and the areas where the need for training is most critical [3]. This
scenario has resulted in numerous training programs that do not meet the specific demands related to the
incorporation of technologies in the education of students [4]. In addition, the lack of a strategic
focus on teacher training has produced an unequal adoption of digital tools, which in turn affects
the quality of education [5]. Therefore, it is key not only to identify areas for improvement, but
also to develop a teacher training model that is comprehensive, flexible and adapted to the
specific context of each educational institution [6].
In this sense, training in digital skills must go beyond mere technical instruction; it must
incorporate pedagogical elements that allow teachers to effectively apply technologies in their
educational practice [7]. This holistic approach will not only improve the quality of teaching, but
will also contribute to a more inclusive and equitable education, preparing students for the
challenges of the 21st century [8]. To realize this comprehensive approach in teacher training, it
is key to have assessment tools that reflect these complexities [9]. In this regard, there are
different proposals to evaluate the level of digital competence, as well as various reference
models or performance standards [10–15] adopted by some Latin American countries.
In this context, we have chosen to use the Digital Teaching Competence Questionnaire
(COMDID-A) prepared by Lazaro-Cantabrana et al. [16] to evaluate the self-perception of Spanish
teachers [17]. This selection was based on its multidimensional approach, which encompasses
four key dimensions of teaching [18]. In contrast to other instruments [10–15], it encourages
teacher self-reflection and autonomy and provides instant feedback. In addition, it has been
adapted in other countries, including Chile [16]. This provides an added value for the Latin
American context [19]; however, its specific adaptability to the Mexican environment had not yet
been evaluated. The purpose is to evaluate digital teaching competencies and identify the
condition of the teachers at the Autonomous University of Chiapas (UNACH). Therefore, the
instrument was subjected to various tests such as content, concurrent and construct validity, as well as
idiomatic and linguistic validity. By submitting COMDID-A to this process, we intend to provide it
with greater solidity and validity, making it a suitable tool to evaluate the digital competencies of
UNACH teachers [20,21].
1.1. Digital competences and COMDID-A
Most authors define digital competencies as an amalgam of knowledge, skills, and attitudes that
enable individuals to employ digital technologies effectively and ethically. In the academic field,
these competencies are key for teachers to efficiently incorporate digital technologies into their
pedagogy, contributing in this way to raising the educational level [22]. To do this, it is necessary
to promote favorable attitudes toward the use of digital technologies and the ability to adapt to
constantly changing technological innovations.
In order to assess the breadth and depth of digital competencies, multiple approaches to
measurement have emerged [10–15]. Notably, self-perception stands out as an essential
instrument, since it allows teachers to consciously identify their own level of competence in this
field [11,23,24]. For this case study, and responding to the need to evaluate the self-perception of
digital teaching competence (CDD) of teachers of Universidad Autónoma de Chiapas, COMDID-A
becomes a reference [24]. This instrument is organized around four dimensions: D1. Didactic,
curricular, and methodological approach (6 items); D2. Planning, Organization and
Administration of Digital Technology Resources and Spaces (5 items); D3. Relations, Ethics and
Security (5 items); D4. Personal and Professional Development (6 items). Altogether, the
questionnaire includes 22 items that use a five-point Likert scale to determine different degrees
of CDD (non-initiated, beginner, intermediate, expert, and transformer) [16].
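For reference, the structure just described can be summarized in a small data sketch; the dimension names and item counts are taken from the paper, while the variable names are illustrative and ours:

```python
# Illustrative summary of the COMDID-A structure described above; dimension
# names and item counts follow the paper, everything else is an assumption.
COMDID_A_DIMENSIONS = {
    "D1. Didactic, curricular, and methodological approach": 6,
    "D2. Planning, organization and administration of digital technology "
    "resources and spaces": 5,
    "D3. Relations, ethics and security": 5,
    "D4. Personal and professional development": 6,
}

# Five-point Likert scale used to grade each item's degree of CDD.
CDD_LEVELS = ["non-initiated", "beginner", "intermediate", "expert", "transformer"]

assert sum(COMDID_A_DIMENSIONS.values()) == 22  # total items in the questionnaire
```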
This questionnaire has been applied in multiple contexts, and several studies and
publications address its implementation and validation [16,17,25], specifically to validate the
factorial structure and construct validity. Palau et al. [17] conducted a principal component
analysis to simplify the dataset and identify the four dimensions; however, this was intended for
the European context. For the Latin American environment, there is a study in Chile, which
adapted and applied the instrument through focus groups [16]. To address the need for a contextualized
evaluation of CDD and to have versions matching the language and sociocultural characteristics
of the Mexican population, this study focuses on the process of cross-cultural adaptation of the
COMDID-A instrument, developed in Spain.
2. Methodology
This study was conducted at Universidad Autónoma de Chiapas (UNACH) as a result of the need
to evaluate the CDD level, to develop a model of professional training in the institution. Initially,
a documentary review was conducted to identify the most appropriate instruments to measure
these competencies. Within this framework, COMDID-A has been chosen as an essential
evaluative instrument; developed by specialists in the field of educational technology at the
Universitat Rovira i Virgili in Tarragona, Spain [16]. This instrument stands out for its ability to
measure teachers’ self-perception in four key areas through 22 descriptors and four levels of
development, and is especially applicable for self-assessments in academic contexts [17]. The
goal is to evaluate digital skills to determine the current level among UNACH teachers.
[Figure 1 summarizes the two validation stages. Stage 1, Transcultural Adaptation: opinion of three experts in the linguistic and idiomatic area; identification of confusing questions and proposed adaptations; review of terminology consistency and accuracy; organization and classification according to the COMDID table; integration into a summary matrix for each dimension; integration of the results in a visual format (table); identification of the COMDID items to change. Stage 2, Content Validity: practical guide adapted from Escobar-Pérez and Cuervo-Martínez; digitalization of the instrument and implementation of a website for its distribution; selection of 16 experts and instrument assessment, with an added observation section; instrument implementation; assessment of judge consistency with a cross table; data collection; quantitative validity screening using Fleiss' Kappa; item adaptation.]
Figure 1: Validation phases
The choice of this instrument was based on its multidimensional approach, which covers four
key aspects of teaching [26]. In addition, its adaptation in other countries, including Chile [16],
adds additional value for its use in Latin American contexts [19]. However, its suitability for the
Mexican environment had not yet been assessed. Since the COMDID-A instrument originated in a
European context, its adaptation to the Mexican environment was imperative, culturally as well
as linguistically. To achieve this, the study was divided into two interconnected phases (see
Figure 1). The first stage focused on the adaptation and validation of the instrument, using the
COMDID-A rubric as proposed by Lázaro-Cantabrana et al. [16]. The adaptation of the
questionnaire to the Mexican environment was key. Two validation phases were conducted: The
first focused on cross-cultural adaptation (linguistic, cultural, and idiomatic) which involved the
participation of three experts in the educational and linguistic field. The second consisted of
content validity through the judgment of 16 experts, using the proposal of Escobar-Pérez and
Cuervo-Martínez [27]. This process made it possible to identify the dimensions that required
contextual adaptation, and the results were verified with Fleiss' Kappa.
2.1. Phase 1 Cross-cultural adaptation
Ensuring the validity of an instrument is a constant concern among researchers. Over time,
validity has been interpreted in various ways and from different fields of study [28,29]. However,
it remains crucial for the choice and use of an instrument. Specifically, the question is whether
the instrument really evaluates what it intends to measure [30]. When an instrument is to be used
in different cultural contexts, adequate linguistic, idiomatic, and cultural equivalence must be
achieved. There are theoretical proposals that serve as a guide and highlight the importance of
the transition of an instrument from one culture to another [30–32]. This transition is called
transcultural adaptation of the instrument [33]. While in Europe and in English-speaking
countries this process is widely valued, in Latin America it is sometimes not given the necessary
importance [30]. The lack of adequate procedures for translating and adapting the instruments
has led to some research being considered invalid. For COMDID-A to adapt properly to the
Mexican context, it was essential to have the opinion and experience of experts in the linguistic
and educational field [34]. It was decided to consult three recognized UNACH teachers with wide
experience in areas relevant to the study. Table 1 shows a brief description of each expert’s
experience and specialty:
Table 1
Participating experts
Expert Experience
E01 Ph.D. in Philosophy and Educational Sciences
Universidad Complutense De Madrid
Professor of the Faculty of Humanities Campus VI, Universidad Autónoma de
Chiapas. Expert teacher in Psychopedagogy and development of qualitative
instruments.
E02 Ph.D. in Contemporary Philosophy
Benemérita Universidad Autónoma de Puebla (BUAP), México. Professor of the
Faculty of Humanities, Campus VI, Universidad Autónoma de Chiapas. 16 years in
teaching, expert in the phenomenology of education.
E03 Ph.D. in Education
Campus Tuxtla, Universidad Autónoma de Chiapas.
32 years in teaching, expert in Linguistics and Languages. Faculty member of the
School of Language.
An e-mail was sent to each of the specialists with a formal invitation to participate. The
COMDID-A instrument was attached to this e-mail requesting its analysis and possible proposals
for modification. The guidelines provided required that they focused on the linguistic and
idiomatic parts based on the following criteria: The first focused on identifying questions that
could generate confusion in the Mexican context, and the second on proposing adjustments for
them. After receiving their assessments, a meticulous review was conducted to discern which
modifications to incorporate and how to do so, ensuring the relevance and clarity of the
instrument in Mexico. The contribution of these experts not only strengthens the linguistic
validity of COMDID-A, but also guarantees its cultural adaptation and specificity in the Mexican
educational context. The reflections of each expert were integrated into a cross table. It is
important to mention that, although observations were scarce, they were essential. For example,
it was observed that the experts coincided in flagging the same word, which could produce
misinterpretations in the context where the instrument would be applied.
The proposal of Beaton et al. [35] was taken as a reference; they established phases to
ensure that the adapted questionnaire is conceptually, idiomatically, semantically, and
operationally equivalent to the original. Having a Spanish version helped to ensure that the
translation process did not have any major problems; however, there were still discrepancies in
the language of the culture itself; therefore, the contribution of the experts helped to confront and
compare the versions to identify and resolve discrepancies. Consistency and precision were
sought in the terminology used, with respect to the original content of COMDID-A [35,36].
To ensure fidelity in both language and concepts, a stage of adaptation to Mexican Spanish was
implemented through the judgment of experts who did not have prior knowledge of the
instrument in its original version. This made it possible to detect and correct possible deviations
in interpretation or conceptual meaning in the consolidated version of the instrument. The
participation of these experts not only reinforces the linguistic validity of COMDID-A, but also
ensures its adaptation to the culture and particularities of the educational field in Mexico.
2.2. Phase 2: Content Validity
Every instrument must go through a validation process to ensure that it is valid, reliable, and
accurately measures what it is intended to measure [37,38]. A validated instrument allows the
generalization of findings [37,38] and improves the quality of the study; it not only increases the
study's credibility, but also facilitates efficient data collection and lays the foundation for
future studies [39]. To carry out the second phase of the investigation, we adapted the proposal
of Escobar-Pérez and Cuervo-Martínez [27], who establish a method for expert judgment
through a practical guide that includes the following steps:
1. Prepare instructions and spreadsheets
2. Select the experts and train them
3. Explain the context
4. Enable discussion
5. Establish agreement between experts by calculating consistency.
These steps are recommended by several authors [40,41], and are considered essential for
conducting an expert judgment effectively. A space for general observations was added to this
guide; this space ensured that the instrument was applicable to the given Mexican context. This
enriched what was already established in Phase 1. Subsequently, the instrument was digitized
using LimeSurvey, software that facilitated its distribution and data collection. This process
ended with the creation of the www.competenciadigital.com.mx website, which served as a
platform for managing the tool and the database.
The next step was the thorough selection of expert judges, an essential component to ensure
the validity and reliability of the study. Following methodological best practices in the selection
of judges, specialists with a solid academic background and vast experience in the field of
educational technology were invited [27,40,42,43]. In total, 16 experts from various
states of the Mexican Republic were recruited (see Table 2). The inclusion of experts from different
geographical areas and universities allowed for a more complete evaluation, addressing various
aspects of the subject in question. This selection strategy was based on rigorous methodologies
previously established by various authors [27,40,42,43], thus ensuring that the process was
aligned with high quality academic standards. This holistic approach not only strengthened the
validity of the study, but also set a precedent for future research in the field. Each expert was
contacted by email; most were members of the Inter-Institutional Committee for the Evaluation
of Higher Education (CIEES) with a specialty in Educational Technology. The email provided a link to the
digitized instrument, accompanied by a detailed protocol, the theory supporting
the instrument, a specific timeframe to complete the task, as well as clear definitions of the
evaluation criteria.
Table 2
Expert judges according to their university and home state
State              University                                                      Number
Guerrero           Universidad Autónoma de Guerrero                                1
Ciudad de México   UNAM/UNITEC                                                     1
Veracruz           Universidad Veracruzana                                         1
Tamaulipas         Regional Center for Teacher Training and Educational Research   1
                   Universidad Autónoma de Tamaulipas                              1
Sonora             Universidad de Sonora                                           1
Chihuahua          Universidad Tecnológica de Ciudad Juárez                        1
Chiapas            Universidad Autónoma de Chiapas                                 6
                   Universidad Pablo Guardado Chávez                               2
                   Universidad del País INNOVA                                     1
After the end of the evaluation period, which lasted five months from the first contact, the data
obtained were thoroughly compiled. To guarantee the quantitative validity of the COMDID-A
instrument, a mathematical analysis process of the collected data was implemented, and in this
framework, the Fleiss K coefficient emerged as a crucial statistical indicator. Fleiss K is a metric
that evaluates the degree of agreement between multiple judges or evaluators [44–46]. It is used
specifically to measure the consistency of the classifications awarded by different judges to the
same subjects [47,48]. A high value of the Fleiss K coefficient indicates a higher concordance
among the judges, which, in turn, reinforces the reliability of the instrument in question [46].
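The paper does not reproduce the formula, but the standard definition (see [44–48]) for $N$ subjects rated by $n$ judges into $k$ categories, with $n_{ij}$ the number of judges assigning subject $i$ to category $j$, is:

\[
P_i = \frac{1}{n(n-1)}\left(\sum_{j=1}^{k} n_{ij}^{2} - n\right), \qquad
p_j = \frac{1}{N n}\sum_{i=1}^{N} n_{ij},
\]
\[
\bar{P} = \frac{1}{N}\sum_{i=1}^{N} P_i, \qquad
\bar{P}_e = \sum_{j=1}^{k} p_j^{2}, \qquad
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}.
\]

A value of $\kappa = 1$ indicates perfect agreement, while $\kappa \approx 0$ indicates agreement no better than chance.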
To carry out this comprehensive quantitative analysis, a team of mathematics experts was
formed. This team was led by an academic recognized in the National System of Researchers
(SNI), level II, and an advanced student of the Bachelor's program in Mathematics. The analysis focused on
the evaluation of four key dimensions: Didactics, Curriculum and Methodology; Planning,
Organization and Management of Digital Technological Spaces and Resources; Relational, Ethics
and Security; and Personal and Professional. Each dimension was examined under four aspects:
Clarity, sufficiency, coherence, and relevance. Each dimension-aspect pair included between 5
and 7 questions whose answers could be: High level, Moderate level, Low level or Does not meet
the criteria.
This evaluation process gave rise to 16 matrices, which contain all the evaluations made by
the experts. The results obtained in the first matrix are presented below.
Table 3
Assessments made by experts
Aspect: Competence. Assessment questions G2Q00001[SQ005] to G2Q00001[SQ010]; each cell shows the level assigned by a judge.
Judge ID   SQ005        SQ006        SQ007        SQ008        SQ009        SQ010
4 2. Low 3. Moderate 3. Moderate 3. Moderate 3. Moderate 4. High
5 3. Moderate 3. Moderate 3. Moderate 3. Moderate 4. High 4. High
12 4. High 4. High 4. High 4. High 4. High 4. High
13 3. Moderate 4. High 4. High 4. High 4. High 3. Moderate
17 3. Moderate 4. High 3. Moderate 4. High 4. High 3. Moderate
18 4. High 4. High 4. High 4. High 4. High 2. Low
20 4. High 4. High 4. High 2. Low 3. Moderate 4. High
22 2. Low 3. Moderate 3. Moderate 3. Moderate 3. Moderate 2. Low
27 3. Moderate 4. High 4. High 2. Low 4. High 3. Moderate
31 3. Moderate 3. Moderate 3. Moderate 3. Moderate 3. Moderate 2. Low
32 3. Moderate 3. Moderate 3. Moderate 3. Moderate 4. High 3. Moderate
38 4. High 4. High 4. High 4. High 4. High 3. Moderate
40 4. High 4. High 3. Moderate 3. Moderate 3. Moderate 3. Moderate
42 4. High 4. High 4. High 4. High 4. High 4. High
44 4. High 3. Moderate 4. High 4. High 4. High 3. Moderate
45 2. Low 2. Low 2. Low 4. High 4. High 4. High
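To illustrate how such a matrix can be analyzed, the following Python sketch computes the Fleiss Kappa for the ratings transcribed from Table 3 using the statsmodels library; this is an illustrative reconstruction, not the authors' actual analysis code:

```python
# Illustrative sketch: Fleiss Kappa for the "Competence" matrix in Table 3.
# For the Kappa, the subjects are the six questionnaire items
# (G2Q00001[SQ005] to [SQ010]) and the raters are the 16 judges.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Ratings transcribed from Table 3: one row per judge, one column per item.
# Coding: 1 = Does not meet the criteria, 2 = Low, 3 = Moderate, 4 = High.
ratings_by_judge = np.array([
    [2, 3, 3, 3, 3, 4],  # judge 4
    [3, 3, 3, 3, 4, 4],  # judge 5
    [4, 4, 4, 4, 4, 4],  # judge 12
    [3, 4, 4, 4, 4, 3],  # judge 13
    [3, 4, 3, 4, 4, 3],  # judge 17
    [4, 4, 4, 4, 4, 2],  # judge 18
    [4, 4, 4, 2, 3, 4],  # judge 20
    [2, 3, 3, 3, 3, 2],  # judge 22
    [3, 4, 4, 2, 4, 3],  # judge 27
    [3, 3, 3, 3, 3, 2],  # judge 31
    [3, 3, 3, 3, 4, 3],  # judge 32
    [4, 4, 4, 4, 4, 3],  # judge 38
    [4, 4, 3, 3, 3, 3],  # judge 40
    [4, 4, 4, 4, 4, 4],  # judge 42
    [4, 3, 4, 4, 4, 3],  # judge 44
    [2, 2, 2, 4, 4, 4],  # judge 45
])

# Transpose to (subjects x raters), then build the (subjects x categories)
# count table that fleiss_kappa expects.
counts, _ = aggregate_raters(ratings_by_judge.T)
print(f"Fleiss Kappa for this dimension-aspect pair: {fleiss_kappa(counts):.3f}")
```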
The objective was to assess the level of concordance in the evaluations of the 16 judges. For
this purpose, two statistical coefficients were used: Kendall's W coefficient and the Fleiss Kappa
coefficient. Considering that Kendall's W was designed for ordinal data, its adjustment for tied
ranks was used; however, the results were mostly non-significant, which led to its dismissal.
The Fleiss Kappa coefficient was identified as the most suitable option for this study, especially
since the collected ratings are presented in nominal form. Two hypotheses were formulated:
The null hypothesis ($H_0$), which holds that there is no significant real agreement beyond
chance, and the alternative hypothesis ($H_1$), which states that the observed agreement is
statistically significant. The p-value was used to evaluate the evidence against the null hypothesis
and to determine the statistical significance of the agreement observed among the judges. In this
way, a summary matrix was constructed using the calculation of the Fleiss Kappa for each
dimension-aspect pair, providing an integral view of the results obtained.
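The paper does not specify which test produced its p-values; one straightforward option, sketched below as a hypothetical approach under that assumption, is a Monte Carlo permutation test of $H_0$:

```python
# Hypothetical sketch of one way to obtain a p-value for the Fleiss Kappa.
# Under H0, a judge's ratings carry no item-specific information, so
# independently shuffling each judge's ratings across the items simulates
# the null distribution of Kappa.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def kappa_of(ratings_by_judge: np.ndarray) -> float:
    """Fleiss Kappa for a (judges x items) matrix of category codes."""
    counts, _ = aggregate_raters(ratings_by_judge.T)  # items x categories
    return fleiss_kappa(counts)

def permutation_p_value(ratings_by_judge: np.ndarray,
                        n_perm: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(kappa >= observed | H0)."""
    rng = np.random.default_rng(seed)
    observed = kappa_of(ratings_by_judge)
    exceed = sum(
        kappa_of(np.array([rng.permutation(row) for row in ratings_by_judge]))
        >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)  # add-one correction avoids p = 0
```

Applied to a matrix like the one transcribed above, this would yield an estimate playing the role of the p-values reported in Table 4.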
Table 4
Summary matrix of the results of the Fleiss Kappa for each dimension-aspect

Dimension                              Clarity         Competence      Coherence       Relevance
                                       p-value  K      p-value  K      p-value  K      p-value  K
D1. Didactic, curricular, and          0.000    0.227  0.000    0.305  0.000    0.223  0.000    0.352
methodological
D2. Planning, organization and         0.000    0.446  0.000    0.433  0.008    0.229  0.000    0.563
management of digital technological
spaces and resources
D3. Relational, ethics and security    0.000    0.425  0.633    0.032  0.000    0.352  0.000    0.304
D4. Personal and professional          0.000    0.446  0.000    0.289  0.000    0.269  0.000    0.671
It is important to note that, with one exception, the results obtained are statistically significant,
since the corresponding p-values do not exceed the significance level established at 0.05. To
better understand the level of agreement between the judges, we resorted to the interpretation of
the Fleiss Kappa proposed by Altman [49]. The median of the K values is 0.352, which on
Altman's scale reflects a generally weak level of agreement among judges. In other
words, there is a certain discrepancy in the evaluations conducted by the different judges. To
illustrate the level of agreement, we created Table 5, which we have called the "Frequency Table
of Agreement Levels according to the Fleiss Kappa":
Table 5
Frequency Table of Agreement Levels according to Fleiss Kappa
Agreement Level Frequency
Poor 0
Weak 9
Moderate 5
Good 1
Very Good 0
This table allows for a quick and effective visualization of how the agreement levels are
distributed. For example, most of the statistically significant dimension-aspect pairs (9 of 15)
show a "Weak" level of agreement, while only one reaches a "Good" level. This highlights
the need to review and possibly adjust the evaluation tool to improve consistency among judges.
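As an illustration of how Table 5 follows from Table 4, this sketch classifies the 15 significant Kappa values into Altman's bands; the values are transcribed from Table 4, and the band labels follow the paper's wording:

```python
# Illustrative sketch: deriving Table 5 from Table 4 by classifying the 15
# statistically significant Kappa values into Altman's (1991) agreement bands.
from collections import Counter

# Kappa values transcribed from Table 4 (D3/Competence, p = 0.633, excluded).
kappas = [0.227, 0.305, 0.223, 0.352,   # D1
          0.446, 0.433, 0.229, 0.563,   # D2
          0.425, 0.352, 0.304,          # D3 without the non-significant pair
          0.446, 0.289, 0.269, 0.671]   # D4

def altman_band(k: float) -> str:
    """Altman's interpretation scale; the paper labels the 0.21-0.40 band 'Weak'."""
    if k < 0.21:
        return "Poor"
    if k <= 0.40:
        return "Weak"
    if k <= 0.60:
        return "Moderate"
    if k <= 0.80:
        return "Good"
    return "Very Good"

print(Counter(altman_band(k) for k in kappas))
# Counter({'Weak': 9, 'Moderate': 5, 'Good': 1}), matching Table 5
```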
Finally, after this rigorous process of evaluation and analysis, the final measurements were
conducted, and the corresponding results were obtained. These results will serve as a basis for
future research and methodological adjustments.
3. Results and Discussion
The research was able to adapt and validate the COMDID-A instrument for its application in
Mexico. The results show that the instrument is dependable and relevant to evaluate the digital
competencies of teachers at Universidad Autónoma de Chiapas. The method proposed by
Escobar-Pérez and Cuervo-Martínez [27] was used to validate and adapt COMDID-A. A comment
section was added to collect specific observations from the experts. This approach made it
possible to obtain qualitative data that enriched the qualitative phase of the research. The
comments of the experts were classified according to COMDID-A items and dimensions, which
helped to identify patterns and relevant coincidences in specific items.
Table 6
Sample items proposed for change

New category   New subcategory   COMDID Subcategory   Item with observation              Adapted Item
Meaning        N/A               1.3                  1.2 Processing of information      1.3 Management, analysis of
                                                      and creation of knowledge          information and creation of
                                                                                         knowledge
                                 1.4 (item 1)         Use digital technologies to        Use digital technologies to
                                                      increase motivation and            increase motivation and
                                                      facilitate learning for            facilitate learning for
                                                      students with NEE                  students with NEI
Structure      Abbreviation      1.1 (item 1)         Design EA activities which         Design teaching-learning (EA)
                                                      involve the use of digital         activities involving the use
                                                      technologies                       of digital technologies
               Grammar           3.1 (item 3)         Serve as a model for other         Be a reference for other
                                                      professionals on the               professionals on the
                                                      responsible and safe use of        responsible and safe use of
                                                      digital technologies               digital technologies
Categories such as "Meaning" and "Structure" were established to organize the data. Within
"Structure," subcategories such as "Abbreviation" and "Grammar” were created. Items that
required changes were moved to "Change Formats," placing them in the corresponding categories
and subcategories. This meticulous process allowed to have a visual map of the items to be
modified. A relevant change was the adaptation of the term "Special Educational Needs (NEE, for
the Spanish acronyms)" to "Inclusive Education Needs (IND, for the Spanish acronyms)" for the
Mexican context. See table 4. This detailed approach ensured that the COMDID-A instrument was
well-founded and adapted to the Mexican context, ensuring its validity and reliability. A
concordance was observed between the qualitative responses and the dimensions evaluated. In
addition, tables were developed to improve the understanding of the changes made, highlighting
the adaptation of terms and parameters to measure teaching strategies.
The qualitative results meet the objective of adapting and validating the instrument, since final
actions were determined for item changes based on qualitative and quantitative analysis (Fleiss
Kappa). The next task is to update the instrument for its application in Mexico. Figure 2
shows a visual overview of the instrument items proposed for change.
Figure 2: Visual overview of items proposed for change
4. Conclusions
The research focused on evaluating the digital competencies of teachers at Universidad
Autónoma de Chiapas. It began with a documentary review to identify appropriate instruments
to measure these competencies. The COMDID-A instrument developed by Lazaro-Cantabrana et
al. [16] in Spain was considered the most appropriate; therefore, a cultural and linguistic
adaptation was required to be implemented in Mexico. To validate the instrument in the Mexican
context, two types of validations were conducted: Idiomatic and linguistic, and quantitative
through expert judgment. In the idiomatic validation, three experts were consulted to adapt the
instrument to the linguistic conditions of Mexico. For content validity, 16 experts in educational
technology were consulted, and statistical methods such as Kendall's W coefficient and Fleiss's
Kappa coefficient were applied.
It is essential to understand that the validation of an instrument is not an isolated process but
must consider its applicability in a specific context. This thorough approach ensures that the
instrument is both applicable and dependable in the Mexican context. The research not only seeks
to evaluate the digital competencies of teachers, but also to contribute to the body of knowledge
in the field of educational technology and teacher training in Mexico. This Mexican version of
COMDID-A can be considered equivalent to the original; it is linguistically, semantically, and
culturally adapted to the Mexican context. The authors are aware that validation is an ongoing
process that requires testing other types of validity to further strengthen the instrument.
This research lays the foundations for future studies and the implementation of teacher
training strategies in digital competencies, aligned with the needs and context of the UNACH and
potentially applicable in other educational institutions in the country. That is to say, this method
improves efficiency in data collection and serves as a basis for future research, highlighting the
importance of its application across the various fields of study.
Acknowledgements
The authors thank the experts who participated voluntarily and anonymously in this study. This
study was partially funded by the Research Direction of the Universidad Peruana de Ciencias
Aplicadas (UPC).
References
[1] Molina Montalvo HI, Macías Villareal JC, Cepeda Hernández AA. Educación en tiempos de
COVID-19: Una aproximación a la realidad en México: experiencias y aportaciones.
Comunicaci. Ciudad de México: 2022.
[2] Fernández Escárzaga J, Gabriela J, Varela D, Lorena P, Martínez M. De la educación presencial
a la educación a distancia en época de pandemia por Covid 19. Experiencias de los docentes.
Rev Electrónica Sobre Cuerpos Académicos y Grup Investig 2020;7:87–110.
[3] Cruz-Aguayo Y, Hincapé D, Rodríguez C. Profesores a prueba: Claves para una evaluación
docente exitosa. 2020. https://doi.org/10.18235/0002149.
[4] Guerrero I, Kalman J. La inserción de la tecnología en el aula: Estabilidad y procesos
instituyentes en la práctica docente. Rev Bras Educ 2010;15:213–29.
https://doi.org/10.1590/S1413-24782010000200002.
[5] Mendoza R, Bellodas M, Ortiz C, Puelles L, Asnate E, Zambrano J. Desafíos interdisciplinarios
para los docentes en el aprendizaje virtual. 2023.
[6] Balladares-Burgos J, Valverde-Berrocoso J. El modelo tecnopedagógico TPACK y su
incidencia en la formación docente: una revisión de la literatura. RECIE Rev Caribeña Investig
Educ 2022;6:63–72. https://doi.org/10.32541/recie.2022.v6i1.pp63-72.
[7] Díaz Chamorro CM. El Modelo Tpack Como Método Pedagógico Para El Desarrollo De
Competencias Digitales En Los Docentes De La Unidad Educativa “Víctor Mideros.” 2023.
[8] Esquerre Ramos LA, Pérez Azahuanche MÁ. Retos del desempeño docente en el siglo XXI: una
visión del caso peruano. Rev Educ 2021;45:0–21.
https://doi.org/10.15517/revedu.v45i1.43846.
[9] Castañeda L, Esteve F, Adell J. Why rethinking teaching competence for the digital world? Rev
Educ a Distancia 2018:1–20. https://doi.org/10.6018/red/56/6.
[10] Agreda M, Hinojo MA, Sola JM. Diseño y validación de un instrumento de evaluación de
competencia digital docente. Pixel-Bit, Rev Medios y Educ 2016:39–46.
[11] Ferrari A. DIGCOMP : A Framework for Developing and Understanding Digital Competence
in Europe. Luxembourg: 2013. https://doi.org/10.2788/52966.
[12] Reixach E, Andrés E, Ribes JS, Gea-Sánchez M, López AÀ, Cruañas B, et al. Measuring the
Digital Skills of Catalan Health Care Professionals as a Key Step Toward a Strategic Training
Plan: Digital Competence Test Validation Study. J Med Internet Res 2022;24.
https://doi.org/10.2196/38347.
[13] Restrepo-Palacio S, de María Segovia Cifuentes Y. Design and validation of an instrument for
the evaluation of digital competence in Higher Education. Ensaio 2020;28:932–61.
https://doi.org/10.1590/S0104-40362020002801877.
[14] Zempoalteca Durán B, Barragán López JF, González Martínez J, Guzmán Flores T. Teaching
training in ICT and digital competences in Higher Education System. Apertura 2017;9:80–96.
https://doi.org/10.32870/ap.v9n1.922.
[15] Zubieta J, Bautista T, Quijano A. Aceptación de las TIC en la docencia: una tipología de los
académicos de la UNAM. 2012.
[16] Lázaro-Cantabrana JL, Gisbert-Cervera M, Silva-Quiroz JE. Una rúbrica para evaluar la
competencia digital del profesor universitario en el contexto latinoamericano. Edutec Rev
Electrónica Tecnol Educ 2018:1–14. https://doi.org/10.21556/edutec.2018.63.1091.
[17] Palau R, Usart M, Ucar Carnicero MJ. La competencia digital de los docentes de los
conservatorios. Estudio de autopercepción en España. Rev Electron LEEME 2019:24–41.
https://doi.org/10.7203/LEEME.44.15709.
[18] Lazáro Cantabrana JL, Gisbert Cervera M. Elaboración de una rúbrica para evaluar la
competencia digital del docente. Rev Ciéncies LÉducació 2015:19.
[19] Cisneros-Barahona AS, Marqués-Molias L, Samaniego-Erazo N, Mejía-Granizo CM. La
Competencia Digital Docente. Diseño y validación de una propuesta formativa. Pixel-Bit
2023;68:7–41.
[20] Creswell JW. Research design: Qualitative, quantitative, and mixed methods approaches. 4th
ed. Thousands Oaks, CA: SAGE Publications, Inc.; 2014.
[21] Sireci SG. The construct of content validity. Soc Indic Res 1998;45:83–117.
https://doi.org/10.1023/a:1006985528729.
[22] Paz Saavedra LE, Gisbert Cervera M, Usart Rodríguez M. Competencia digital docente, actitud
y uso de tecnologías digitales por parte de profesores universitarios. Pixel-Bit, Rev Medios y
Educ 2022:91–130. https://doi.org/10.12795/pixelbit.91652.
[23] Janssen J, Stoyanov S. Online Consultation on Experts' Views on Digital Competence. 2012.
[24] Lázaro Cantabrana JL, Gisbert Cervera M. Elaboració d’una rúbrica per avaluar la
competència digital del docent. Univ Tarraconensis Rev Ciències l’Educació 2015;1:48.
https://doi.org/10.17345/ute.2015.1.648.
[25] Silva J, Usart M, Lázaro-Cantabrana J-L. Competencia digital docente en estudiantes de último
año de Pedagogía de Chile y Uruguay [Teacher's digital competence among final year
Pedagogy students in Chile and Uruguay]. Comunicar 2019;61:33–43.
[26] Lázaro Cantabrana JL. La competència digital docent com a eina per garantir la qualitat en
l’ús de les TIC en un centre escolar. vol. 1. 2015. https://doi.org/10.17345/ute.2015.1.667.
[27] Escobar-Pérez J, Cuervo-Martínez Á. Validez de contenido y juicio de expertos: Una
aproximación a su utilización [Content validity and expert judgement: An approach to their
use]. Av En Medición 2008;6:27–36.
[28] Aiken LR, Yang W, Soto M, Segovia L, Binomial P, Miller JM, et al. Diseño y validación de un
cuestionario para analizar la calidad en empleados de servicios deportivos públicos de las
mancomunidades de municipios extremeñas. Educ Psychol Meas 2011;7:181–92.
https://doi.org/10.1177/0013164412473825.
[29] Cho J. Validity in qualitative research revisited. Qual Res 2006;6:319–40.
https://doi.org/10.1177/1468794106065006.
[30] Gallardo-Echenique E, Marqués Molias L, Gomez Cruz OD, De Lira Cruz R. Cross-cultural
adaptation and validation of the “student communication & study habits” questionnaire to
the mexican context. Proc - 14th Lat Am Conf Learn Technol LACLO 2019 2019:104–9.
https://doi.org/10.1109/LACLO49268.2019.00027.
[31] Arribas A. Adaptación Transcultural de Instrumentos. Guía para el Proceso de Validación de
Instrumentos Tipo Encuestas. Rev Científica La Asoc Médica Bahía Blanca 2006;16:74–82.
[32] Lira MT, Caballero E. Cross-Cultural Adaptation of Evaluation Instruments in Health: History
and Reflections of Why, How and When. Rev Medica Clin Las Condes 2020;31:85–94.
https://doi.org/10.1016/j.rmclc.2019.08.003.
[33] International Test Commission (ITC). ITC Guidelines for Translating and Adapting Tests. 2nd
ed. [Www.InTestCom.Org]: ITC; 2016.
[34] Cardoso Ribeiro C, Gómez-Conesa A, Hidalgo Montesinos MD. Metodología para la
adaptación de instrumentos de evaluación. Fisioterapia 2010;32:264–70.
https://doi.org/10.1016/j.ft.2010.05.001.
[35] Beaton D, Bombardier C, Guillemin F, Ferraz MB. Recommendations for the Cross-Cultural
Adaptation of the DASH & QuickDASH Outcome Measures. Toronto: 2007.
[36] Beaton D, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural
adaptation of self-report measures. Spine (Phila Pa 1976) 2000;25:3186–91.
https://doi.org/10.1097/00007632-200012150-00014.
[37] Sireci SG. The Construct of Content Validity. Soc Indic Res 1998;45:83–117.
https://doi.org/10.1007/sl.
[38] Almanasreh E, Moles R, Chen TF. Evaluation of methods used for estimating content validity.
Res Soc Adm Pharm 2019;15:214–21. https://doi.org/10.1016/J.SAPHARM.2018.03.066.
[39] Hambleton RK, Patsula L. Adapting tests for use in multiple languages and cultures. Soc Indic
Res 1998;45:153–71. https://doi.org/10.1023/A:1006941729637.
[40] Skjong R, Wentworth BH. Expert Judgment and Risk Perception. Proc. Elev. Int. Offshore
Polar Eng. Conf., vol. IV, Stavanger, Norway: International Society of Offshore and Polar
Engineers; 2001, p. 537–44.
[41] de Arquer MI. Fiabilidad Humana: métodos de cuantificación, juicio de expertos. 1995.
[42] Cabero J, Llorente M del C. La aplicación del juicio de experto como técnica de evaluación de
las tecnologías de la información y comunicación (TIC) [The expert’s judgment application
as a technic evaluate information and communication technology (ICT)]. Eduweb Rev Tecnol
Inf y Comun En Educ 2013;7:11–22.
[43] Urrutia Egaña M, Barrios Araya S, Gutiérrez Núñez M, Mayorga Camus M. Métodos óptimos
para determinar validez de contenido. Rev Cuba Educ Medica Super 2015;28:547–58.
[44] Cerda Lorca J, Villarroel Del P. L. Evaluación de la concordancia inter-observador en
investigación pediátrica: Coeficiente de Kappa. Rev Chil Pediatr 2008;79:54–8.
https://doi.org/10.4067/s0370-41062008000100008.
[45] Torres J, Perera V. Cálculo de la fiabilidad y concordancia entre codificadores de un sistema
de categorías para el estudio del foro online en e-learning. Rev Investig Educ 2009;27:89–
103.
[46] Falotico R, Quatto P. Fleiss’ kappa statistic without paradoxes. Qual Quant 2015;49:463–70.
https://doi.org/10.1007/s11135-014-0003-1.
[47] López A, Galparsoro DU, Fernández P. Medidas de concordancia: el índice de Kappa. Cad Aten
Primaria 2001:2–6.
[48] Gwet KL. Large-Sample Variance of Fleiss Generalized Kappa. Educ Psychol Meas
2021;81:781–90. https://doi.org/10.1177/0013164420973080.
[49] Altman DG. Practical Statistics for Medical Research. 1991.