=Paper=
{{Paper
|id=Vol-2797/paper6
|storemode=property
|title=Regional E-Participation Portals Evaluation: Preliminary Results from Russia
|pdfUrl=https://ceur-ws.org/Vol-2797/paper6.pdf
|volume=Vol-2797
|authors=Andrei Chugunov,Yury Kabanov,Georgy Panfilov
|dblpUrl=https://dblp.org/rec/conf/egov/ChugunovKP20
}}
==Regional E-Participation Portals Evaluation: Preliminary Results from Russia==
Andrei Chugunov*, Yury Kabanov**, Georgy Panfilov***
*ITMO University, St. Petersburg, Russia, chugunov@itmo.ru
**(1) National Research University Higher School of Economics; (2) ITMO University, St. Petersburg,
Russia, ykabanov@hse.ru
***ITMO University, St. Petersburg, Russia, panfilovgeorg@mail.ru
Abstract: This ongoing research presents the framework and preliminary results of the evaluation of e-participation portals in the regions of Russia. Based on the system approach in Political Science and taking the Russian context into account, we have developed a framework that allows the evaluation of various e-participation tools, as well as cross-regional and cross-platform comparisons along the key stages of the e-participation process: the input, the "black box", the output and the feedback. The framework was applied to evaluate 205 e-participation portals in 85 Russian regions, representing six different types of e-participation options. The findings suggest substantial discrepancies in the development of e-participation in Russia. The advantages and implications of this framework for further analysis are discussed.
Keywords: E-Participation, Russian regions, Evaluation, Ranking, System approach
Acknowledgement: The research has been supported by the Russian Science Foundation (RSF) as part of project -18-00360 «E-participation as Politics and Public Policy Dynamic Factor».
1. Introduction
Evaluation is an important, though challenging, aspect of e-participation research. Many methodologies have been developed for the assessment of e-participation (e.g. see Garcia et al. 2005; Tambouris et al. 2007; Fedotova et al. 2012). Yet the techniques vary significantly due to different theoretical underpinnings (Panopoulou et al. 2008) and usually lack validity because of their inability to consider contextual factors (Sundberg 2018) and, as argued by Kubicek and Aichholzer (2016), the complexity "of multichannel participation processes" (p. 23). This "evaluation gap" is further widened by the variety of e-tools, as the taxonomy of e-participation is continuously expanding (Bohman 2014).
This ongoing paper attempts to address these challenges, providing an evaluation framework
that deals with the variety of e-participation tools. Based on the system approach in Political Science,
founded by Easton (1957), as well as previous research on e-participation evaluation in Russia
Copyright ©2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
72 Ongoing Research
(Chugunov & Kabanov 2018; Vidiasova et al. 2016), we present a new methodology that stresses the process of e-participation. The evaluation methodology is designed so that it can be adapted to other national and regional contexts, and hence can be of interest to both scholars and practitioners. At the same time, it takes the Russian context into account to give a more nuanced view of the development of e-participation in the regions of Russia. In this ongoing paper we first give an overview of the methodology and evaluation framework, then present some of the findings, followed by a discussion of the results, shortcomings, and future steps.
2. Background: Challenges of E-Participation Evaluation
There are plenty of evaluation techniques for e-government and e-participation applications, yet there are substantial discrepancies between them (Berntzen & Olsen 2009). The first challenge here relates to the methodology used. For instance, Macintosh and Whyte (2008) proposed a coherent model including democratic, project and socio-technical layers. Yet this set of layers is not exhaustive, and a variety of other criteria are in use (Panopoulou et al. 2008), further expanded by the diversity of methods, ranging from quantitative analysis (Fedotova et al. 2012; Vidiasova et al. 2016) to expert surveys and qualitative assessment (Sá et al. 2017).
The second challenge comes from the multiple channels citizens may use in public policy (Kubicek and Aichholzer 2016), with each of these tools requiring its own criteria for assessment. Scholars usually pay attention to particular e-participation technologies, such as government websites (Xenakis & Loukis 2010; Feeney & Brown 2017) or social media accounts (Elsherif & Azab 2019). As the variety of e-tools is growing (Bohman 2014), it is crucial to provide unified criteria with which to evaluate them.
The third challenge is that the evaluation of e-participation initiatives should not concentrate on the readiness of the websites that host them as a goal in its own right; rather, the assessment should focus on the substantive outcomes of e-participation for the public and for policy-making. For example, Vidiasova et al. (2018) stress the social efficiency of e-participation in their evaluation methodology, while Chugunov & Kabanov (2018) highlight the importance of e-participation institutionalization and institutional design.
Finally, there is the challenge of including country-specific contextual factors. On the one hand, the evaluation framework should be general and universal enough to allow cross-country comparisons (Berntzen & Olsen 2009); on the other, more emphasis on a particular social and political context can provide a more nuanced view of a particular e-participation practice and thus be more relevant for decision-makers. Scholars emphasize the impact of various institutional and policy variables on the development of e-government and e-participation (Gulati & Yates 2011), including the level of decentralization in e-tools implementation (Kassen 2015).
The Russian case is very peculiar in this regard. Russia is a federal state with a high level of power
centralization (Busygina 2018). Thus, the development of regional e-participation in Russia is closely
connected to the federal initiatives. From 2012 onwards, the federal government has paid special
attention to the implementation of regional online services, from complaint mechanisms to
participatory budgeting. Yet the centralization of the policy has not led to the elimination of various
divides among the regions, and the latter still perform differently in e-participation development
(Kabanov & Sungurov 2016; Chugunov & Kabanov 2018). As a result, there are currently six major e-participation tools used in the Russian regions, namely: (1) initiative / participatory budgeting; (2) open budget; (3) e-petitions; (4) crowdsourcing; (5) complaint mechanisms; and (6) e-voting. But they can neither be found in all 85 regions, nor do they have the same quality of implementation and performance. It is therefore important to estimate these discrepancies in outcomes amidst a relatively centralized policy.
In brief, while there have been numerous attempts to evaluate e-participation, the development of a unified technique is far from complete. A new methodology should not only be based on a clear theoretical and methodological framework, but also tackle the diversity of e-participation tools, provide a clear assessment of the difference that e-participation makes to political and policy processes, and balance a general approach that allows cross-context comparisons with a more nuanced, context-related view.
3. Evaluation Methodology: Framework and Procedure
The methodology proposed here builds on the previous technique developed by Chugunov & Kabanov (2018). Though it allowed ranking the regions according to the level of e-participation institutionalization, its scope is quite limited, especially in terms of the tools analyzed. Regional e-participation in Russia is much more diverse, hence the new methodology encompasses the six types of e-participation: (1) initiative / participatory budgeting; (2) open budget; (3) e-petitions; (4) crowdsourcing; (5) complaint mechanisms; and (6) e-voting.
Our methodology stresses the importance of e-participation as a facilitator of the normal public policy process at its various stages (Coelho et al. 2017; Scherer & Wimmer 2011). In line with this process orientation, we take a broader vision of the political process, derived from the system approach in politics. As argued by Easton (1957), the crucial elements of the political system are: (1) the system itself (the "black box"); (2) the inputs (demands and support from citizens); (3) the outputs (decisions, policies); and (4) the feedback (the correspondence between inputs and outputs). However simplified this vision might be, we argue that it still portrays the basic elements of the political process accurately and, in fact, reflects the essentials of the e-participation workflow. Citizens formulate a demand (a complaint, a petition, etc.) and submit it to the "black box", where the system processes the request and provides an output, i.e. a certain policy or action. This stimulates the feedback and, if necessary, another input.
Each of these four concepts corresponds to a criterion. These criteria include: (1) openness for the
"black box", i.e. how open, transparent and comprehensive the available information about the e-
participation process is; (2) availability for the input, i.e. how e-participation allows the access for
various groups of citizens; (3) decision-making capacity for the output, i.e. the availability of the
information related to the outcomes of e-participation; (4) feedback quality, denoting the spectrum
of opportunities for the citizens to give feedback on e-participation results. These criteria are
universal for all six types of e-participation tools under analysis. Additionally, a fifth criterion -
Specific requirements - was developed to evaluate unique features of each type.
Every criterion was then decomposed into three indicators. The selection of these indicators was based both on previous studies (Chugunov & Kabanov 2018) and on a series of consultations with experts on e-government and e-participation. The methodology thus consists of 15 indicators in total (Table 1). Each indicator is scored from 0 to 2 points: "0" if the indicator is not present, "1" if it is partially present, and "2" if it is fully present. Hence, for each e-participation tool a region could get a maximum of 30 points (22 points for open budget portals).
The evaluation of the Russian e-participation portals was carried out in December 2019. We found and evaluated 205 Internet resources in total, each attributed to one of the six types of e-participation. Seven online resources were then excluded from the analysis, as they had not been updated for one year and were hence considered irrelevant.
Table 1: E-Participation Evaluation Framework. Source: Authors' Elaboration
"Black box" / Openness:
1.1. Topicality: Has the website been updated within the last month?
1.2. Information about Responsibility: Is there information about the goals, objectives and operators of the portal?
1.3. Comprehensiveness: Are there infographics / opportunities to get the basic information within 2 clicks?
Input / Availability:
2.1. Special conditions: Is there a version for people with disabilities?
2.2. Mobility: Is there a mobile version / app?
2.3. Alternative: Is there an offline alternative to e-participation?
Output / Decision-making capacity:
3.1. Legislation: Is there regional legislation regulating this portal?
3.2. Reports: Are there reports on activities available?
3.3. Routing: Are there markers on the portal allowing users to trace the stage at which an application is?
Feedback / Feedback quality:
4.1. Contact: Is there contact information?
4.2. Evaluation: Are citizens able to evaluate their satisfaction with the portal or leave feedback?
4.3. Loop: Can citizens re-apply if they disagree with a decision?
Special requirements (tool-specific):
5.1.-5.3. Narrow technological functionality questions pertinent to each type of e-participation portal (e.g. for e-complaints we assess the availability of a GIS to position complaints, a classifier of complaints and the "public control" option). The full list is in the online annex (https://clck.ru/MkdWF).
Notes: Indicators 2.3, 3.3, 4.2 and 4.3 are not applicable to the Open Budget portals, as they do not allow direct citizens' involvement.
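The scoring scheme above is simple enough to express in a few lines. The sketch below is our illustration, not the authors' software: indicator codes follow Table 1, and the function name is hypothetical. It computes a portal's total score together with the applicable maximum (30 points, or 22 for Open Budget portals, where four indicators are dropped).

```python
# Each of the 15 indicators is scored 0 (absent), 1 (partially present)
# or 2 (fully present). Four indicators do not apply to Open Budget
# portals, so their maximum is 22 instead of 30.

ALL_INDICATORS = [
    "1.1", "1.2", "1.3",   # Openness ("black box")
    "2.1", "2.2", "2.3",   # Availability (input)
    "3.1", "3.2", "3.3",   # Decision-making capacity (output)
    "4.1", "4.2", "4.3",   # Feedback quality
    "5.1", "5.2", "5.3",   # Specific requirements
]
NOT_FOR_OPEN_BUDGET = {"2.3", "3.3", "4.2", "4.3"}

def portal_score(scores: dict, open_budget: bool = False):
    """Return (score, maximum) for one portal.

    `scores` maps indicator codes to 0, 1 or 2; missing codes count as 0.
    """
    applicable = [i for i in ALL_INDICATORS
                  if not (open_budget and i in NOT_FOR_OPEN_BUDGET)]
    total = sum(scores.get(i, 0) for i in applicable)
    return total, 2 * len(applicable)

# A portal with every indicator fully present reaches the maximum:
full = {i: 2 for i in ALL_INDICATORS}
assert portal_score(full) == (30, 30)
assert portal_score(full, open_budget=True) == (22, 22)
```

A region's score for a given tool is then simply the score of its corresponding portal, which is how the per-tool averages in Table 2 can be aggregated.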
4. Preliminary Findings
All the results of the evaluation and visualizations are presented in the online annex to this paper
(https://clck.ru/MkdWF). The first general finding about e-participation in Russian regions
corresponds to previous studies (Chugunov & Kabanov 2018), stressing moderate and significantly
disproportionate regional development in terms of e-participation (Table 2): the general score for
the regions ranges from 0 to 83 points.
Table 2: E-Participation Evaluation in the Russian Regions Summary. Source: Authors' Calculation
E-Participation Tool Number of Regions The Average Score
Initiative / participatory budgeting 51 8 / 30
Open Budget 83 12 / 22
E-Petitions 12 12 / 30
Crowdsourcing 9 11 / 30
Complaint Mechanisms 30 14 / 30
E-Voting 13 12 / 30
The regions can be divided into three types: those with high (41 points and more), moderate (21-40) and low (20 points and fewer) levels of e-participation development. Currently, 17 regions can be considered highly developed in these terms. They have, on average, from 4 to 6 online resources, with the following features: (1) a separate open budget portal; (2) a developed initiative / participatory budgeting portal; (3) a cross-platform solution for e-complaints, e-voting and e-petitions; and (4) more rarely, a separate crowdsourcing portal. Yet the quantity of facilities does not necessarily correlate with higher scores, as these facilities may have performance problems. Among the moderately developed are 29 regions with approximately 3 online resources, or two highly developed resources (e.g. open and participatory budget, or open budget and e-complaints). In the less developed regions there is usually one resource (an open budget portal) or two poorly performing resources. Currently, this group contains 39 regions, which is nearly half of the country (46 per cent). The ranking also reflects macro-regional dynamics: while the North-Western regions are among the leaders, Southern and Caucasian regions are usually among the laggards.
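The three-level grouping can be sketched as a simple threshold rule. The band boundaries below are taken from the text; the function name and any example scores are our hypothetical illustration, not the authors' code.

```python
# Group a region by its total e-participation score, using the
# thresholds given above: 41+ high, 21-40 moderate, 20 and fewer low.

def development_level(total_score: int) -> str:
    """Map a region's total score to its development level."""
    if total_score >= 41:
        return "high"
    if total_score >= 21:
        return "moderate"
    return "low"

# The observed range of regional scores in the study was 0 to 83 points:
assert development_level(83) == "high"
assert development_level(40) == "moderate"
assert development_level(0) == "low"
```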
Figure 1: The Average Score Across E-Participation Tools and Components. Source: Authors' Calculation
The second finding relates to the availability of tools (Table 2). The most frequent online tool is the open budget, which can be found in 83 out of 85 regions. By contrast, crowdsourcing platforms are the rarest: only 9 regions had this facility in 2019. It therefore seems that the regional governments are still keen on developing information portals (like the open budget), prioritizing citizens' passive acquisition of information over their active engagement.
Thirdly, there is also an interesting dynamic across the stages of the process reflected by the four concepts (Fig. 1). While the "black box" (openness) and the output (decision-making capacity) perform well on average, the input (availability) and especially the feedback are visibly less developed: in many cases the features we relate to them were simply non-existent. Many regional governments seem to underestimate the problem of the digital divide, which hinders the availability of e-participation to all citizens, as well as the feedback quality that ensures the effectiveness of e-participation.
5. Discussion & Conclusion
In general, this pilot analysis has several methodological and practical outcomes. First, it has demonstrated the applicability of the framework to the Russian case and its usefulness for interregional comparisons. The preliminary results of the evaluation are in line with the previous findings (Chugunov & Kabanov 2018; Kabanov & Sungurov 2016) that e-participation in Russia is highly disproportionate across the regions. At the same time, we have explored such disproportions further, across the e-participation tools and stages, unveiling important contexts of e-participation diffusion and development. More effort should be put into estimating all the stages of the policy cycle (Coelho et al. 2017). Though the framework was designed to obtain a nuanced view of Russia, it is based on the widely accepted system approach and is universal enough: the results obtained can be further used both in large-N studies and for the in-depth analysis of cases. Secondly, the framework allows cross-platform evaluation, which is especially important when one needs to assess various types of online tools. Thirdly, the framework has shown its capacity to evaluate e-participation throughout the whole process, from citizens' demands to the government's response, which is one of the key values of the proposed methodology.
Of course, some methodological problems remain to be solved. First, the genuine effectiveness and impact of e-participation are yet to be explored: what we can evaluate are rather the online manifestations of the real process, and it is hard to ensure that e-participation does make a difference. Secondly, it is difficult to evaluate the level of citizens' engagement with the portals, as these data are usually unavailable. Thirdly, access to the online resources may be hindered by the requirement for users to register on them. Finally, the next step should be the estimation of the internal and external validity of the scores obtained. We plan to address these problems in forthcoming research. As for the practical results, the survey has allowed us to explore regional e-participation in Russia, both in terms of facility diversity and performance discrepancies. Despite the general federal trend towards digital government, we see a variety of outcomes, the reasons for which are yet to be found.
References
Berntzen, L., & Olsen, M. G. (2009). Benchmarking e-government-a comparative review of three international
benchmarking studies. In 2009 Third International Conference on Digital Society (pp. 77-82). New York:
IEEE. DOI:10.1109/ICDS.2009.55
Bohman, S. (2014). Information technology in eParticipation research: a word frequency analysis.
In Tambouris E., Macintosh A., Bannister F. (Eds.), Electronic Participation. ePart 2014. Lecture Notes in
Computer Science, vol 8654 (pp. 78-89). Berlin, Heidelberg: Springer. DOI: 10.1007/978-3-662-44914-1_7
Busygina, I. (2018). Russian Federalism. In Studin E. (Ed.), Russia (pp. 57-64). London: Palgrave Macmillan.
Chugunov, A. V., & Kabanov, Y. (2018). Evaluating e-participation institutional design. A pilot study of
regional platforms in Russia. In Edelmann N., Parycek P., Misuraca G., Panagiotopoulos P.,
Charalabidis Y., Virkar S. (Eds.) Electronic Participation. ePart 2018. Lecture Notes in Computer
Science, vol 11021. (pp. 13-25). Cham: Springer. DOI: 10.1007/978-3-319-98578-7_2
Coelho, T. R., Cunha, M. A., & Pozzebon, M. (2017). eParticipation and the policy cycle: Designing a research
agenda. In Proceedings of the 18th Annual International Conference on Digital Government
Research (pp. 368-376). DOI: 10.1145/3085228.3085277
Easton, D. (1957). An approach to the analysis of political systems. World politics, 9(3), 383-400.
Elsherif, M., & Azab, N. (2019). A Framework to Measure E-Participation Level of Government Social Media
Accounts. In Proceedings of the 9th International Conference on Information Systems and
Technologies (pp. 1-6). DOI: 10.1145/3361570.3361572
Fedotova, O., Teixeira, L., & Alvelos, H. (2012). E-participation in Portugal: evaluation of government
electronic platforms. Procedia Technology, 5, 152-161. DOI: 10.1016/j.protcy.2012.09.017
Feeney, M. K., & Brown, A. (2017). Are small cities online? Content, ranking, and variation of US municipal
websites. Government Information Quarterly, 34(1), 62-74. DOI: 10.1016/j.giq.2016.10.005
Garcia, A. C. B., Maciel, C., & Pinto, F. B. (2005). A quality inspection method to evaluate e-government sites. In Proceedings of the 4th International Conference on Electronic Government (pp. -209). Berlin, Heidelberg: Springer. DOI: 10.1007/11545156_19
Gulati, J., & Yates, D. J. (2011). Strategy, competition and investment: explaining the global divide in e-
government implementation with policy variables. Electronic Government, An International
Journal, 8(2/3), 124-143.
Kabanov, Y., & Sungurov, A. (2016). E-Government development factors: evidence from the Russian regions.
In Chugunov A., Bolgov R., Kabanov Y., Kampis G., Wimmer M. (Eds.), Digital Transformation and
Global Society, CCIS, vol. 674. (pp. 85-95). Cham: Springer. DOI: 10.1007/978-3-319-49700-6_10
Kassen, M. (2015). Understanding Systems of e-Government: e-Federalism and e-Centralism in the United
States and Kazakhstan. Rowman & Littlefield: Lexington Books, Lanham, MD.
Kubicek, H., & Aichholzer, G. (2016). Closing the evaluation gap in e-participation research and practice.
In Aichholzer G., Kubicek H., Torres L. (Eds.), Evaluating e-Participation. Public Administration and
Information Technology, vol 19 (pp. 11-45). Cham: Springer. DOI: 10.1007/978-3-319-25403-6_2
Macintosh, A., & Whyte, A. (2008). Towards an evaluation framework for eParticipation. Transforming Government: People, Process & Policy, 2(1), 16-30. DOI: 10.1108/17506160810862928
Panopoulou, E., Tambouris, E., & Tarabanis, K. (2008). A framework for evaluating web sites of public
authorities. In Aslib Proceedings. Emerald Group Publishing Limited. DOI: 10.1108/00012530810908229
Sá, F., Rocha, Á., Gonçalves, J., & Cota, M. P. (2017). Model for the quality of local government online
services. Telematics and Informatics, 34(5), 413-421. DOI: 10.1016/j.tele.2016.09.002
Scherer, S., & Wimmer, M. A. (2011). Reference framework for E-participation projects. In Tambouris E.,
Macintosh A., de Bruijn H. (eds) Electronic Participation. ePart 2011. Lecture Notes in Computer
Science, vol 6847 (pp. 145-156). Berlin, Heidelberg: Springer. DOI: 10.1007/978-3-642-23333-3_13
Sundberg, L. (2018). Shaping up e-Participation Evaluation: A Multi-criteria Analysis. In Edelmann N.,
Parycek P., Misuraca G., Panagiotopoulos P., Charalabidis Y., Virkar S. (Eds.), Electronic Participation.
ePart 2018. Lecture Notes in Computer Science, vol 11021 (pp. 3-12). Cham: Springer. DOI: 10.1007/978-
3-319-98578-7_1
Tambouris, E., Liotas, N., & Tarabanis, K. (2007). A framework for assessing eParticipation projects and
tools. In 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07) (pp. 90-90).
New York: IEEE. DOI: 10.1109/HICSS.2007.13
Vidiasova, L. (2016). The applicability of international techniques for E-participation assessment in the
Russian context. In International Conference on Digital Transformation and Global Society (pp. 145-
154). Cham: Springer.
Vidiasova, L., Tensina, I., & Bershadskaya, E. (2018). Social efficiency of E-participation portals in Russia:
assessment methodology. In Alexandrov D., Boukhanovsky A., Chugunov A., Kabanov Y., Koltsova O.
(eds) Digital Transformation and Global Society. DTGS 2018. Communications in Computer and
Information Science, vol 858 (pp. 51-62). Cham: Springer. DOI: 10.1007/978-3-030-02843-5_5
Xenakis, A., & Loukis, E. (2010). An investigation of the use of structured e-forum for enhancing e-
participation in parliaments. International Journal of Electronic Governance, 3(2), 134-147.
About the Authors
Andrei Chugunov
Andrei Chugunov is the director of the e-Government Center at ITMO University. He received his PhD in political science in 2000 and has published more than 100 papers on information society development, e-government and the implementation of e-participation technologies, as well as interdisciplinary research on digitalization.
Yury Kabanov
Yury Kabanov is a Senior Lecturer and a Research Fellow at the Center of Comparative Governance Studies of the Higher School of Economics (St. Petersburg). He is also a Researcher at the e-Government Center at ITMO University. He has more than 20 academic publications on e-government and e-participation research.
Georgy Panfilov
Georgy Panfilov is an MA student and engineer at the Institute of Design and Urban Studies at ITMO University, Russia, with a bachelor's degree in Political Science.