Which factors are significant for obtaining business intelligence success in the public sector?

Rikke Gaardboe
Aalborg University, Department of Communication and Psychology, Aalborg, Denmark
gaardboe@hum.aau.dk

Abstract. The objective of this paper is to present a brief introduction to the author's doctoral thesis. The thesis identifies the critical success factors for obtaining business intelligence success, measured as use, user satisfaction and individual impact, from an end user's perspective in the public sector. The author explores how task characteristics can be combined with system quality and information quality to obtain a fit between task and technology. The output is a model that depicts the relationship between task compatibility and the perceived individual impact of using business intelligence in the public sector. The PhD project bridges a gap in our understanding of which tasks and quality dimensions fit the use of business intelligence.

Keywords: business intelligence, public sector, contingency theory, task-technology fit

1 Introduction

In the 1980s, a paradigm shift took place in the governance of the public sector. There were budget deficits, and politicians were not willing to increase the tax burden. New Public Management was the answer to that challenge: the public sector adopted governance mechanisms from the private sector. Market mechanisms were favoured, accompanied by the divestment of public enterprises and low confidence in bureaucracy. The focus was placed on leadership rather than policy. Public-sector accounting policies were revised, with the focus shifting from fixed to variable costs, and outputs and results were highlighted rather than processes. The transformation of the public sector was driven by the desire for streamlining, supported by technological development [1].

The public sector performs many different types of tasks. In Denmark, the public sector is decentralised; therefore, decision making and the delivery of welfare services take place locally. A Danish municipality delivers health care, social services, employment stimulation, labour-market integration, administration and digitalisation, environmental management, HR and staff management, primary schooling and child care [2]. Indeed, Danish municipalities have more than 300 different IT systems to support task management [3].

One way to improve the decision-making and follow-up process is to implement business intelligence (BI) in the public sector. This type of IT system enables multi-dimensional analyses based on different data sources. The purpose is to provide valid information to decision makers. Because data is derived from multiple source systems, various aspects of the organisation's activities can be analysed [4]. When chief information officers (CIOs) are asked to prioritise technology investments, they rank BI first [5]. In a highly competitive world, the quality and accuracy of BI are important factors in the generation of profit or loss [6].

Several articles have emphasised the advantages of using BI. When decisions are based on business analytics, organisations can improve business processes and, thereby, their performance [7, 8]. The ultimate aim is to build shareholder value [9]. However, the success of BI varies across organisations. Obtaining BI success is a complex matter, and that complexity carries a cost [10].
The cost of BI technologies is high because implementation includes infrastructure, software, licenses, training and wages [11]. Furthermore, the literature indicates that a significant number of organisations fail to realise the expected benefits of BI [9, 12–16]. This PhD project aims to identify the critical factors for obtaining BI success, measured as use, user satisfaction and individual impact, from an end user's perspective.

The remainder of the paper is organised as follows: Section 2 outlines a literature review of the current state of research on critical success factors (CSFs) for BI. In Section 3, I present a preliminary research model based on that literature review. The method is described in Section 4, and Section 5 concludes the paper.

2 Literature review

We conducted a systematic literature review to reveal the state of the art in identifying the critical success factors (CSFs) for BI [17]. The review focuses on peer-reviewed papers from the period 2006–2015. We used Papaioannou et al.'s [18] search strategy, which includes databases, reference lists and citations in the search. The search query consisted of two parts: one covering synonyms of CSF and one covering BI. Papers were selected first by reading the abstracts, then by reading the full texts. Out of 336 papers and 1,184 references, 29 articles were deemed relevant. We used the IS success framework to identify the critical success factors and to analyse how researchers measure success in BI. The main findings that motivated our model were: (i) research on CSFs has paid little attention to task compatibility as an independent factor in BI success; (ii) although users often have access to both the source systems and BI, no research has investigated the characteristics of the tasks supported by BI; and (iii) the dominant framework for describing BI success is DeLone and McLean's IS success model [17].

3 Research model

The research model in Figure 1 integrates the IS success model [19] and the task-technology fit model [20]. The former addresses the factors behind IS success; the latter investigates the relation between tasks, technology and the perceived fit that determines whether users can utilise an information system to support their tasks.

Fig. 1. Task compatibility model: Technology and Task jointly determine Task compatibility, which influences User satisfaction and Use, which in turn affect Individual impact.

3.1 Technology

The technology construct consists of two variables: system quality and information quality. In this context, BI is viewed as a tool with which the end user carries out tasks. In a broader definition, technology refers to hardware, software, data and user support services [20]; here, we have limited the technology construct to system quality and information quality. I measure system quality primarily by the end users' perception of ease of operation [21] and usability [22]. Information quality is measured by the end users' perception of the information: its consistent representation, freedom from error and reputation [21].

3.2 Task

Tasks are broadly defined as the actions carried out by the end user in turning inputs into outputs [20]. The task characteristics we included were identified by Petter, DeLone and McLean [23] and comprise the following variables: task difficulty [24], task specificity [25], task interdependence [24] and task significance [24]. Since the focus of business intelligence is better decision making, under task significance we have also included the end user's perception of the importance of decision making.
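To make the paths hypothesised in Figure 1 explicit, they can be written as a set of structural equations. This is a minimal sketch in my own notation: the linear form, the interaction term and the coefficient symbols are illustrative assumptions, not taken from the thesis itself.

\begin{align}
\mathrm{TC} &= \gamma_1\,\mathrm{Task} + \gamma_2\,\mathrm{Tech} + \gamma_3\,(\mathrm{Task} \times \mathrm{Tech}) + \zeta_1 \\
\mathrm{US} &= \beta_1\,\mathrm{TC} + \zeta_2 \\
\mathrm{Use} &= \beta_2\,\mathrm{TC} + \zeta_3 \\
\mathrm{II} &= \beta_3\,\mathrm{US} + \beta_4\,\mathrm{Use} + \zeta_4
\end{align}

Here TC is task compatibility, Tech is the technology construct (system and information quality), US is user satisfaction, II is individual impact, and the ζ terms are disturbances. The interaction term reflects the contingency-theory idea that fit arises from the match between task and technology, as discussed in Section 3.3.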
3.3 Task compatibility

In information systems research, contingency theory is a widely used approach. In general, contingency theory focuses on the fit between information systems, tasks and performance [26]. If there is a correspondence between the functionality of the BI system and the characteristics of the tasks the users need to carry out, the information system has a positive impact on performance [20]. Task compatibility is determined by the interaction between task and technology. Different types of tasks require different technological support; if there is a gap between the task and the functionality of the information system, user satisfaction will be weakened [20].

3.4 User satisfaction

The relation between task compatibility and user satisfaction is supported by various studies [27, 28], and there is strong support for task compatibility being a determinant of user satisfaction [23]. Including the user satisfaction construct is especially relevant when the researcher measures a specific information system [19].

3.5 Use

In many organisations, implementing a BI system is an objective in itself. It is nevertheless important that users actually utilise the system, because use affects individual and/or organisational impact. Decades of research have suggested that certain characteristics of individuals influence the use of an IS [23].

3.6 Individual impact

The individual impact construct is the user's perceived impact of the IS. An IS is implemented to achieve various objectives for the organisation, many of which are unique to the individual using the system. Individual impact has been measured in a variety of ways, including improvements in productivity, quality of decision making and work practices.

4 Methodology

In this type of research, we faced the decision of whether to test the research model in a narrowly controlled domain and generalise to a more global setting, or vice versa. We decided to focus on the public sector at a more macro level and to include all the end users in an organisation, covering multiple tasks, types of end users and organisational settings.

4.1 Cases

I included three cases in the study. The common denominator of the three organisations is that they are part of the public sector in Denmark and use business intelligence to support decisions. In Denmark, there are three levels of governance: municipalities, regions and the state. The first case is a municipality with about 18,000 employees and a budget of 2.1 billion EUR. The second case is a region with approximately 25,000 employees and a budget of 3.5 billion EUR. The last case is an educational institution governed by the state, with 3,500 employees and a budget of 0.3 billion EUR. The three cases solve different public-sector tasks and use different business intelligence technologies.

4.2 Development of questionnaire

The basis for the questionnaire was the literature review briefly presented in Section 2. The foundation is DeLone and McLean's IS success model, together with the task characteristics identified in the literature review. First, I reviewed all the articles on BI success and the validated questions they used. Subsequently, I reviewed all the papers that Petter, DeLone and McLean [23] identified as measuring IS success. All the items were added to a database with themes, article information and questions; a sketch of how such a database might be structured follows below.
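A minimal sketch of an item database of this kind (the fields and example items below are hypothetical illustrations, not the actual instrument):

    # Hypothetical sketch of the question database described above: each row
    # links a candidate survey item to its theme (construct) and source article.
    import pandas as pd

    items = pd.DataFrame([
        {"theme": "system quality", "source": "Lewis (1995) [22]",
         "question": "The BI system is easy to operate."},
        {"theme": "information quality", "source": "Lee et al. (2002) [21]",
         "question": "The information in the BI system is free of error."},
        {"theme": "task significance", "source": "Morgeson & Humphrey (2006) [24]",
         "question": "The results of my work affect other people."},
    ])

    # Candidate items for one construct can then be filtered and compared.
    print(items[items["theme"] == "system quality"])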
Afterwards, I chose the questions best suited to measure the constructs in the proposed research model. A draft questionnaire was sent to the case organisations for comment. They all returned the draft with comments, which ensured that the questions could also be understood in their organisational context.

4.3 Data collection process

All respondents answered an online survey. This method was chosen because it is efficient and enabled us to send questionnaires to all end users of BI in the three organisations. To ensure as high a response rate as possible, I made several efforts. I tested three survey systems and chose the most user-friendly one; one of the criteria was that users should see only one question at a time and have to manoeuvre as little as possible on screen. I then set up the questionnaire in the online survey system and entered the questions. To ensure a high level of user friendliness, I applied Fogg's principles: time, psychological effort, brain cycles, social deviance and non-routine [29]. Afterwards, four end users of BI at different organisational levels tested the survey using a think-aloud test; a few test users are enough to find about 75% of all usability problems [30]. Based on the trial, I revised the online survey to make it more user-friendly. It was then piloted with 100 BI users in an organisation other than those participating in this survey. The purpose was to see how the online survey worked in practice for users, and based on the collected data I calculated, among other things, reliability and validity.

The organisations delivered the email addresses of the end users registered in the BI system. I sent an email with a presentation of the PhD project and a link to the study. After one week, respondents who had not yet participated received a reminder; another reminder was issued after a further week. The survey was then closed.

4.4 Further process

We tested the model with the structural equation modelling technique partial least squares (PLS). For theory testing, the appropriate statistical methodology would usually be covariance-based SEM (CB-SEM) [31]; however, with over 250 participants in the survey, there is only a small difference between the results of PLS and CB-SEM [32], so we used PLS. A PLS model comprises a structural (inner) model and a measurement (outer) model: the former captures the theoretical relationships between the latent variables, and the latter the relation between each latent variable and its indicators. PLS can therefore be used both for testing the hypothesised relationships and for verification of the theory.

Before testing the relationships in the PLS-SEM model, I will evaluate validity and reliability [33]. The outer loading of each variable and the average variance extracted (AVE) of each construct measure convergent validity. The recommended threshold value for outer loadings is 0.7 [34], and the AVE values should be above 0.5 for all variables, which shows that the variance captured by the construct is larger than the error variance [33]. Composite reliability and Cronbach's alpha will be calculated to measure internal consistency reliability; the recommended threshold value is above 0.7 [35]. To address discriminant validity, we will calculate the heterotrait-monotrait ratio (HTMT), which, according to Hair et al. [34], is a better measure than traditional criteria. A computational sketch of these measures is given at the end of this section.

Based on the quantitative data, I will conduct a focus group interview with representatives of the three public organisations. The purpose is to understand the relationships between the different constructs.
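To make the reliability and validity criteria above concrete, the following is a minimal computational sketch. It assumes standardised outer loadings and raw indicator data in numpy/pandas form; the function names and data layout are my own, and a PLS tool would report the same measures directly.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # Internal consistency of one construct's indicators (columns);
        # recommended threshold > 0.7 [35].
        k = items.shape[1]
        return (k / (k - 1)) * (1 - items.var(ddof=1).sum()
                                / items.sum(axis=1).var(ddof=1))

    def composite_reliability(loadings) -> float:
        # CR from standardised outer loadings; threshold > 0.7 [35].
        l = np.asarray(loadings)
        return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

    def ave(loadings) -> float:
        # Average variance extracted; threshold > 0.5 [33].
        l = np.asarray(loadings)
        return (l ** 2).mean()

    def htmt(data: pd.DataFrame, items_a, items_b) -> float:
        # Heterotrait-monotrait ratio for two constructs, each measured
        # by at least two indicator columns in `data` [34].
        corr = data.corr().abs()
        hetero = corr.loc[items_a, items_b].to_numpy().mean()
        def mono(items):
            c = corr.loc[items, items].to_numpy()
            return c[np.triu_indices_from(c, k=1)].mean()
        return hetero / np.sqrt(mono(items_a) * mono(items_b))

For example, outer loadings of 0.82, 0.76 and 0.71 give AVE ≈ 0.58 and CR ≈ 0.81, both above the recommended 0.5 and 0.7 thresholds.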
The PhD dissertation will be article based. It will consist of a linking text as well as three articles. The first article is the literature review; the second will be based on Figure 1 of this paper and quantitative data. The third article will be a mixed-methods article in which both qualitative and quantitative data are used.

5 Conclusion

The goals set for this thesis are already partially met. The research model has been developed, the questionnaire has been compiled, and data has been collected in the three public organisations. The ongoing work is to ensure data quality and estimate the model in Figure 1. In relation to data collection, the focus group interviews are still missing. The literature review has been published, while Articles 2 and 3 remain to be written. The linking text is being written continuously as the PhD project progresses.

References

1. Hood, C.: The “New Public Management” in the 1980s: Variations on a theme. Accounting, Organizations and Society. 20, 93–109 (1995).
2. Kommunernes Landsforening: Municipal Responsibilities, http://www.kl.dk/English/Municipal-Responsibilities/.
3. Kombit: Datalandskab i danske kommuner [The data landscape in Danish municipalities]. Kombit, Copenhagen (2017).
4. Olszak, C.M., Ziemba, E.: Business intelligence as a key to management of an enterprise. In: Proceedings of the Informing Science and IT Education Conference. pp. 855–863 (2003).
5. Gartner: Flipping to Digital Leadership: Insights from the 2015 Gartner CIO Agenda Report. Gartner, Stamford, US (2014).
6. Gonzales, R., Wareham, J., Serida, J.: Measuring the Impact of Data Warehouse and Business Intelligence on Enterprise Performance in Peru: A Developing Country. Journal of Global Information Technology Management. 18, 162–187 (2015).
7. Bronzo, M., de Resende, P.T.V., de Oliveira, M.P.V., McCormack, K.P., de Sousa, P.R., Ferreira, R.L.: Improving performance aligning business analytics with process orientation. International Journal of Information Management. 33, 300–307 (2013).
8. Popovič, A., Turk, T., Jaklič, J.: Conceptual model of business value of business intelligence systems. Management: Journal of Contemporary Management Issues. 15, 5–30 (2010).
9. Dawson, L., Van Belle, J.-P.: Critical success factors for business intelligence in the South African financial services sector. SA Journal of Information Management. 15 (2013).
10. Yeoh, W., Koronios, A.: Critical success factors for business intelligence systems. Journal of Computer Information Systems. 50, 23–32 (2010).
11. Watson, H.J., Haley, B.J.: Data warehousing: A framework and survey of current practices. Journal of Data Warehousing. 2, 10–17 (1997).
12. Chenoweth, T., Corral, K., Demirkan, H.: Seven key interventions for data warehouse success. Communications of the ACM. 49, 114–119 (2006).
13. Hawking, P., Sellitto, C.: Business Intelligence (BI) critical success factors. In: 21st Australasian Conference on Information Systems, Brisbane (2010).
14. Olbrich, S., Poeppelbuss, J., Niehaves, B.: BI Systems Managers' Perception of Critical Contextual Success Factors: A Delphi Study. In: ICIS 2011 Proceedings, Shanghai (2011).
15. Riabacke, A., Larsson, A., Danielson, M.: Business intelligence in relation to other information systems. Presented in December 2014.
16. Xu, H., Hwang, M.I.: A Survey of Data Warehousing Success Issues. Business Intelligence Journal. 10, 7–13 (2005).
17. Gaardboe, R., Svarre, T.: Critical Success Factors for Business Intelligence Success. In: Proceedings of the 25th European Conference on Information Systems (ECIS). Association for Information Systems (AIS) (2017).
18. Papaioannou, D., Sutton, A., Carroll, C., Booth, A., Wong, R.: Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Information & Libraries Journal. 27, 114–122 (2009).
19. DeLone, W.H., McLean, E.R.: Information Systems Success: The Quest for the Dependent Variable. Information Systems Research. 3, 60–95 (1992).
20. Goodhue, D.L., Thompson, R.L.: Task-Technology Fit and Individual Performance. MIS Quarterly. 19, 213–236 (1995).
21. Lee, Y.W., Strong, D.M., Kahn, B.K., Wang, R.Y.: AIMQ: a methodology for information quality assessment. Information & Management. 40, 133–146 (2002).
22. Lewis, J.R.: IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. International Journal of Human-Computer Interaction. 7, 57–78 (1995).
23. Petter, S., DeLone, W., McLean, E.R.: Information Systems Success: The Quest for the Independent Variables. Journal of Management Information Systems. 29, 7–62 (2013).
24. Morgeson, F.P., Humphrey, S.E.: The Work Design Questionnaire (WDQ): Developing and validating a comprehensive measure for assessing job design and the nature of work. Journal of Applied Psychology. 91, 1321–1339 (2006).
25. Daft, R.L., Macintosh, N.B.: A Tentative Exploration into the Amount and Equivocality of Information Processing in Organizational Work Units. Administrative Science Quarterly. 26, 207–224 (1981).
26. Weill, P., Olson, M.H.: An Assessment of the Contingency Theory of Management Information Systems. Journal of Management Information Systems. 6, 59–85 (1989).
27. Jarupathirun, S., Zahedi, F.M.: Dialectic decision support systems: System design and empirical evaluation. Decision Support Systems. 43, 1553–1570 (2007).
28. Jones, M.C., Beatty, R.C.: User Satisfaction with EDI: An Empirical Investigation. Information Resources Management Journal. 14, 17–26 (2001).
29. Fogg, B.J.: A Behavior Model for Persuasive Design. In: Proceedings of the 4th International Conference on Persuasive Technology. ACM, New York, NY (2009).
30. Nielsen, J.: Estimating the number of subjects needed for a thinking aloud test. International Journal of Human-Computer Studies. 41, 385–387 (1994).
31. Hair, J.F., Ringle, C.M., Sarstedt, M.: PLS-SEM: Indeed a Silver Bullet. The Journal of Marketing Theory and Practice. 19, 139–152 (2011).
32. Ringle, C.M., Sarstedt, M., Straub, D.: A critical look at the use of PLS-SEM in MIS Quarterly. MIS Quarterly. 36, iii–xiv (2012).
33. Fornell, C., Larcker, D.F.: Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research. 18, 382–388 (1981).
34. Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M.: A primer on partial least squares structural equation modeling (PLS-SEM). Sage, Los Angeles (2017).
35. Nunnally, J.C., Bernstein, I.H.: The assessment of reliability. Psychometric Theory. 3, 248–292 (1994).