Exploring AI-Enabled Use Cases for Societal Security and Safety

Hoang Long Nguyen, Minsung Hong, Rajendra Akerkar*
Big Data Research Group, Western Norway Research Institute
P.O. Box 163, NO-6851 Sogndal, Norway
{hln, msh, rak}@vestforsk.no

* Corresponding author.
AAAI Fall 2020 Symposium on AI for Social Good. Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

Artificial intelligence (AI) represents huge opportunities for us as individuals and for society at large, and global momentum around AI for social good is growing. AI opens a new perspective on maintaining public security and safety by providing investigative assistance with human-grade precision. Quantitative methods may not always evaluate AI techniques correctly, owing to the characteristics of the societal security domain, so we also need qualitative research methods in relevant use cases. This paper presents two AI-enabled use cases, on information validation and surveillance enhancement, supported by AI algorithms.

Introduction

Research in the field of societal security and safety concerns critical events that can threaten our life, health, and other fundamental values (Kang 2016). Even though the terms security and safety seem different, the management of both types of circumstance is based on the same concepts: i) discovering underlying events, ii) applying efficient procedures and plans to mitigate threats and to keep people and values safe from harm or injury, and iii) managing a crisis and recovering from it. This research topic poses challenges for both cross-sectoral and thematic researchers and practitioners (Olsen, Kruke, and Hovden 2007).

As digitalisation continues to expand into every area, the risks and threats facing society are evolving and becoming more complicated, on a large scale. The advantages and convenience of digital services are immediate and straightforward, which are vital aspects of the modern appetite for real-time processing.
Therefore, various spaces (e.g., email, SMS messaging, e-commerce, social networking services, and smart systems) can be targeted and intercepted by savvy hackers. These issues can significantly reduce our trust and increase insecurity in equal measure. It is precisely this urgency that calls for a practical approach.

Artificial Intelligence (AI) opens a new perspective on maintaining public security and safety by providing investigative assistance with human-grade precision (Cath et al. 2018). There is also a critical need for automated solutions. For these reasons, targeted applications of AI to the domain of security and safety have recently come into focus. This paper portrays AI-enabled use cases that can be seen as opportunities to build pragmatic tools and solutions for addressing some current pressing challenges.

We have introduced the background and our motivation in this section. The rest of the paper is structured as follows. The next section presents the necessary research methodologies. We then provide use cases on information validation and surveillance enhancement with the support of AI. Finally, we draw conclusions and state future directions in the last section.

Research Methodology

Pressing Issues and Challenges

Several issues that were previously not considered central have now become the main focus. Examples include a rising number of disinformation and insecurity incidents, which are simultaneously a premise for, and a threat to, societal security and safety.

Disinformation: Disinformation (i.e., false or misleading information) is not an emerging phenomenon as such; however, with the popularity of online platforms, it has become an increasingly sophisticated, deliberately circulated, and regularly utilised tool for achieving hostile aims and causing harm. The spread of disinformation poses an essential threat to societies and has adverse impacts on the quality of public life, stability, and societal security. For example, disinformation regarding COVID-19 has disseminated rapidly and widely across social networking services (Apuke and Omar 2020), endangering safety and impeding recovery. Further, we are stepping into an era of even more sophisticated fake news: not only text but also audio, photos, and video can be manipulated at will. From only 3.7 seconds of audio, an algorithm named Deep Voice uses snippets of a voice to mimic the original and create new speech, accents, and tones (Cole 2018). Augmented Reality (AR) and Virtual Reality (VR) will be the next-generation targets for disinformation, with greater complexity and more severe consequences. There is a considerable number of initiatives aimed at countering disinformation worldwide: according to the latest figures published by the Duke Reporters' Lab, 188 fact-checking projects are active in more than 50 countries, and popular platforms (e.g., Facebook, Twitter, and YouTube) are concentrating on tackling online disinformation and limiting its circulation. Nevertheless, we still need to deepen our understanding of the dangers of fake news and disinformation for well-informed and pragmatic societal security and safety planning. Several barriers still stand in the way of automated techniques for detecting and countering disinformation. The first significant shortcoming is the risk of over-blocking lawful and accurate content, i.e., the over-inclusiveness characteristic of AI. The technology is still under development, and AI models remain prone to false positives and false negatives, e.g., recognising content and bot accounts as fake when they are not. False positives can negatively impact freedom of expression and lead to censorship of legitimate and trustworthy content that is mistakenly machine-labelled as disinformation. Furthermore, AI systems have yet to master basic human concepts such as sarcasm and irony, and cannot address more nuanced forms of disinformation. Linguistic barriers and country-specific cultural and political environments add to this challenge. It is therefore necessary to research and develop advanced AI models that can identify fake news effectively and automatically.
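To make this over-blocking trade-off concrete, the toy sketch below shows how the operating threshold of a hypothetical disinformation classifier balances falsely flagged legitimate content against missed disinformation. The scores and labels are invented for illustration; this is not a deployed system.

```python
import numpy as np

# Hypothetical classifier scores for ten items (label 1 = disinformation).
scores = np.array([0.95, 0.90, 0.80, 0.72, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1,    1,    1,    0,    1,    0,    0,    1,    0,    0])

# Raising the threshold reduces over-blocking (false positives) at the
# cost of letting more disinformation through (false negatives).
for threshold in (0.35, 0.60, 0.85):
    flagged = scores >= threshold
    over_blocked = int((flagged & (labels == 0)).sum())  # legitimate content flagged
    missed = int((~flagged & (labels == 1)).sum())       # disinformation not caught
    print(f"threshold={threshold:.2f}  over-blocked={over_blocked}  missed={missed}")
```

No threshold removes both error types at once, which is precisely why purely automated moderation remains risky for freedom of expression.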
Insecurity: The problem of insecurity and the feeling of insecurity demand immediate action and efficient solutions. For this reason, large numbers of cameras can be seen everywhere (e.g., on the streets and in businesses) in large cities. Law enforcement can rely on this footage to investigate crimes after the fact, prosecuting the guilty and catching criminals. Although surveillance cameras are inexpensive, the workforce needed to monitor and analyse them is expensive; hence, videos from these cameras are usually consulted only after critical events are known to have taken place. We find it unrealistic and infeasible for human observers to monitor and examine all the video streams with high accuracy. By leveraging AI-powered surveillance technologies, we gain the capacity to search through more video more efficiently, to realise the full value of video surveillance, and to achieve the expected results automatically while requiring less human intervention in video investigation.

Societal AI Research Cycle

This section introduces the action research cycle proposed by McTaggart and Kemmis (1988). We follow the applied research method, meaning the application of AI techniques in practice to address risky situations in societal security and safety, conducted to solve real problems (i.e., use cases).

[Figure 1: Action research cycle (self-reflective spiral (McTaggart and Kemmis 1988)).]

According to Kemmis, McTaggart, and Nixon (2013), action research in reality is rarely as neat as this spiral of self-contained cycles of planning, acting and observing, and reflecting suggests. The process is likely to be more fluid, open, and responsive. In this regard, we repeat each cycle over a short period by following the agile methodology of computer engineering. The four steps of the action research cycle, depicted in Figure 1, are as follows.

• Plan: includes problem definition, situation analysis, team vision, and a strategic plan.
• Action: involves the implementation of the strategic plan.
• Observation: encompasses monitoring and evaluation.
• Reflection: considers the results of the evaluation.

Quantitative methods may not always evaluate AI techniques correctly, owing to the characteristics of the societal security and safety domain. Therefore, we also need qualitative research methods in relevant use cases. For example, data collection for the study is done by conducting interviews with organisation stakeholders, gathering opinions from industry experts, referring to existing literature, and using principal consultants as a secondary source of information on initiatives adopted in similar organisations elsewhere, so as to keep abreast of current trends. The study also analyses survey data available from stakeholders, relevant organisations, general observations, and end-users (including citizens), as well as an independent survey of IT professionals in the AI field. In the following section, two AI-enabled use cases for societal security and safety are described as preliminary studies based on a literature review.

AI-enabled Use Cases

Through use cases, we aim to investigate methodological, societal, and technological issues, which in turn contribute to benefiting from AI-based technologies, frameworks, and services.

Information Validation

Diverse thoughts and opinions (Long, Nghia, and Vuong 2014) are valued in modern society. This is often called "cognitive diversity"; it can counter group-think and enables better decision-making (Carey et al. 2016). Ironically, the cognitive diversity of a population is also being exploited in an entirely different way today. Instead of consolidating different perspectives and world-views into a superior consensus, new information technologies, such as online boutique news, social networks, and microblogs, take advantage of cognitive diversity by isolating subpopulations and catering to their idiosyncratic opinions. This often gives people the illusion that they are in the ideological majority (Cybenko and Cybenko 2018). As such, cognitive diversity can be regarded as the Petri dish in which "fake news" thrives (Carey et al. 2016). Consequently, it is typically challenging to judge and accept as truthful information that contradicts one's prior beliefs and world-views (Cybenko, Giani, and Thompson 2002).

Regarding AI's role in defeating cognitive safeguards, people now have a broader choice of information sources that they can self-select to align with whatever niche beliefs they may already have (Carey et al. 2016). This creates audiences with similar, idiosyncratic beliefs, and they can be identified and labelled using AI-based natural language and social network techniques (Hemavathi, Kavitha, and Ahmed 2017). After the audience has been identified, the content of the information can be adjusted to that audience. While human reporters and writers populate mainstream news and information sources, it is now possible to robotically generate news stories using AI-based software (WashPostPR 2016). Combining such technologies, we can imagine near-future AI-powered systems that write news articles with minimal or no human intervention (Cybenko and Cybenko 2018). Moreover, users self-select their sources, tend to see content consistent with their beliefs, and then gain trust (Nguyen et al. 2017) in those sources. Once such community sources have been identified, AI technologies can author professional-looking websites with minimal human effort, catering to ideological niches (Tselentis 2017). Techniques for classifying news as "real" vs "fake" (or rumours vs non-rumours) generally fall into two categories: one class of methods uses linguistic and semantic analysis of the written content to discriminate, while the other uses dissemination patterns and rates to classify different types of news. Some approaches use both (Subrahmanian et al. 2016; Kwon, Cha, and Jung 2017), as sketched below.
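As a minimal sketch of how the two feature families can be combined, the following fragment concatenates content features with a crude dissemination signature before classification. The toy stories, share counts, and standard scikit-learn components are our own illustrative assumptions, not the cited systems.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: story text plus shares during the
# story's first, second, and third hour of circulation.
texts = [
    "breaking: insider leaks shocking proof, share before it is removed",
    "council publishes meeting minutes on the official website",
    "the one hidden cure they do not want you to know about",
    "weather service issues routine gale warning for the coast",
]
shares_per_hour = np.array([
    [900.0, 400.0, 100.0],   # burst-then-decay spread, common for rumours
    [10.0, 12.0, 15.0],
    [1200.0, 500.0, 80.0],
    [20.0, 25.0, 22.0],
])
labels = [1, 0, 1, 0]  # 1 = rumour/fake, 0 = real

content_features = TfidfVectorizer().fit_transform(texts)    # linguistic/semantic
X = hstack([content_features, csr_matrix(shares_per_hour)])  # + dissemination
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # sanity check on the training examples
```

A real system would of course train on a large labelled corpus and use far richer propagation features (e.g., cascade depth and audience overlap), but the architecture stays the same: two feature families, one classifier.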
Because the scale and scope of fake news claims will probably make human-based assessments of the veracity of information unsustainable (Alvarez 2018), identifying wrong information such as "fake news" is a significant potential application of AI.

Surveillance Enhancement

Among the range of technologies available to security personnel, video surveillance is a common tool, made considerably more effective by good video analytics and now refined by artificial intelligence and machine learning. By making smart use of these technologies, public authorities can not only enhance protection but also optimise their resources by applying them to areas beyond public security. Intelligent video surveillance based on AI is beneficial for monitoring physical assets, large spaces, or significant events, for example, open-air concerts or film festivals. Since it is challenging to be in several places at once, we can rely on AI to detect violence or to analyse crowd behaviour and send alerts if something is behaving abnormally. Beginning with a targeted video, we can apply object detection and identification to discover and locate unusual objects. The recognition can be categorised at either the characteristic-based (Marcialis and Roli 2003) or the behaviour-based (Robertson, Reid, and Brady 2008) level. Furthermore, we can train AI models to recognise potentially dangerous objects such as sharp objects, glass items, and weapons.

At the characteristic-based level, the analysis can be conducted by leveraging face, head (Ishii et al. 2004), or body features. Given a single query video, or images extracted from it, AI allows searching for the occurrence of a specific person, which gives us an opportunity to trace and discover their suspicious behaviour. In addition, we can estimate the person's gender, age (Antipov et al. 2017), and emotion (Jain, Shamsolmoali, and Sehdev 2019). Beyond these applications, AI-enabled surveillance enhancement has other uses. By examining street footage, AI can identify vehicles by a set of attributes: for example, we can know exactly how many blue buses passed through a specified location in a particular period. This becomes especially helpful when we want to find a stolen vehicle and need a result promptly; a sketch of such attribute filtering follows.
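The OpenCV fragment below is a simplified sketch of the attribute-filtering step only: the vehicle bounding boxes are assumed to come from an off-the-shelf object detector, which is omitted, and the hue range and minimum fraction are illustrative assumptions rather than tuned values.

```python
import cv2
import numpy as np

def is_blue_vehicle(crop_bgr, min_fraction=0.3):
    """Decide whether a detector crop is predominantly blue."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV's 8-bit hue range is 0-179; roughly 100-130 covers typical blues.
    mask = cv2.inRange(hsv, (100, 80, 40), (130, 255, 255))
    return mask.mean() / 255.0 >= min_fraction  # fraction of blue pixels

# Synthetic stand-in for a detector crop: a mostly-blue patch (BGR order).
crop = np.zeros((64, 64, 3), dtype=np.uint8)
crop[:, :, 0] = 200  # strong blue channel
print(is_blue_vehicle(crop))  # True
```

Counting blue buses then reduces to running the detector per frame, filtering detections with a check like this, and de-duplicating across frames with a tracker.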
False alarms are the avowed enemy of efficient video monitoring. Working with a video surveillance system that often raises 'false positives' means being constantly deluged with needless alerts. This predictably clutters the view of security operators, making it difficult to monitor the area in question efficiently and wasting huge numbers of man-hours. AI technologies applied to security enable video surveillance systems to 'learn' what a possible danger may look like. This enhances accuracy, driving more precise detection and preventing the flagging of events related to natural conditions such as local wildlife.

At the behaviour-based level, we aim to analyse and detect abnormal human actions. The target is to anticipate whether a harmful event may occur because of unusual behaviour (Ko and Sim 2018), even a few minutes in advance; for example, detecting abnormal driving (Huang et al. 2019) can help prevent an accident. The selection of techniques is influenced by two types of scene density: un-crowded (i.e., a single person or a small number of people) and crowded. In un-crowded scenes, falling (for older adults), loitering (staying in a public location without apparent purpose for a long period), and violent actions (e.g., chasing and fighting) are useful to detect; a loitering check is sketched below. In crowded scenes, on the other hand, it is not easy to monitor and analyse the behaviour of each person separately. Possible approaches are crowd density estimation (i.e., assessing the crowd's status), crowd motion detection (i.e., identifying behaviour patterns in a group), and crowd tracking (i.e., deriving trajectories of the movements).
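Returning to the un-crowded case, the sketch below is an illustrative heuristic of our own (not the cited methods): given a tracked trajectory, it flags loitering when a person stays within a small radius of one spot for longer than a time threshold. The radius and duration are hypothetical parameters.

```python
import numpy as np

def is_loitering(track, radius=50.0, min_seconds=300.0):
    """track: (n, 3) array of (t_seconds, x, y) rows sorted by time,
    e.g. produced by a person tracker; units are seconds and pixels."""
    t, xy = track[:, 0], track[:, 1:]
    start = 0
    for i in range(len(track)):
        window = xy[start:i + 1]
        # Shrink the window from the left until every point lies within
        # `radius` of the window's centroid.
        while np.linalg.norm(window - window.mean(axis=0), axis=1).max() > radius:
            start += 1
            window = xy[start:i + 1]
        if t[i] - t[start] >= min_seconds:  # stayed put long enough
            return True
    return False

# A person drifting around one spot for ten minutes triggers the check.
rng = np.random.default_rng(0)
stay = np.column_stack([np.arange(0, 600, 2),
                        100 + rng.uniform(-5, 5, 300),
                        100 + rng.uniform(-5, 5, 300)])
print(is_loitering(stay))  # True
```

Falling and fighting need richer motion features, but they follow the same template: a tracker provides trajectories, and a per-track rule or classifier raises the alert.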
Cognitive matic tools, solutions, and service for addressing some cur- hacking: A battle for the mind. Computer 35(8): 50–56. rent issues. Hemavathi, D.; Kavitha, M.; and Ahmed, N. B. 2017. In- Besides, several challenges are needed to be taken into formation extraction from social media: Clustering and la- account. Human-level is the most important challenge in AI. belling microblogs. In Proceedings of the 2017 Interna- We can develop a model with 80-90% accuracy; nonethe- tional Conference on IoT and Application (ICIOT), Naga- less, humans can achieve even absolute precision in all afore- pattinam, India, 19-20 May 2017, 1–10. IEEE. mentioned use cases. Therefore, it is necessary to balance and keep humans on edge for AI systems and services. To Huang, W.; Liu, X.; Luo, M.; Zhang, P.; Wang, W.; and achieve positive impact, AI systems and solutions need to Wang, J. 2019. Video-based abnormal driving behavior de- adhere to ethical principles. To ensure that the impacts of AI tection via deep learning fusions. IEEE Access 7: 64571– systems remain positive and constructive, it is essential that 64582. we build in certain standards and safeguards. Data privacy Ishii, Y.; Hongo, H.; Yamamoto, K.; and Niwa, Y. 2004. is another critical challenge since AI-based algorithms learn Face and head detection for a real-time surveillance sys- from and make predictions based on data; many of them are tem. In Proceedings of the 17th International Conference personal and sensitive. This data can be in the target of bad on Pattern Recognition (ICPR), Cambridge, UK, 26-26 Au- purposes or of unlawful intents. Hence, we need to consider gust 2004, 298–301. IEEE. if or how to address the use of personal information in AI systems. We also need to seek appropriate methodologies to Jain, D. K.; Shamsolmoali, P.; and Sehdev, P. 2019. Ex- guarantee the protection of data while retaining the signifi- tended deep neural network for facial emotion recognition. cant and potential benefits of big data analytics. Pattern Recognition Letters 120: 69–74. Kang, H.-J. 2016. A Study on Analysis of Intelligent Video Acknowledgments Surveillance Systems for Societal Security. Journal of Dig- This work is supported by the INTPART BDEM project ital Contents Society 17(4): 273–278. (grant no. 261685/H30) funded by the Research Council of Kemmis, S.; McTaggart, R.; and Nixon, R. 2013. The ac- Norway (RCN) and the Norwegian Agency for International tion research planner: Doing critical participatory action Cooperation and Quality Enhancement in Higher Education research. Singapore: Springer Science & Business Media. (Diku). Ko, K.-E.; and Sim, K.-B. 2018. Deep convolutional frame- work for abnormal behavior detection in a smart surveillance References system. Engineering Applications of Artificial Intelligence Alvarez, E. 2018. Facebook’s approach to fighting fake news 67: 226–234. is half-hearted. https://www.engadget.com/2018/07/13/ facebook-fake-news-half-hearted, accessed on 17.09.2020. Kwon, S.; Cha, M.; and Jung, K. 2017. Rumor detection over varying time windows. PloS One 12(1): e0168344. Antipov, G.; Baccouche, M.; Berrani, S.-A.; and Dugelay, J.- L. 2017. Effective training of convolutional neural networks Long, N. H.; Nghia, P. H. T.; and Vuong, N. M. 2014. Opin- for face-based gender and age prediction. Pattern Recogni- ion spam recognition method for online reviews using onto- tion 72: 15–26. logical features. Tạp chí Khoa học (61): 44–59. Apuke, O. D.; and Omar, B. 2020. Fake news and COVID- Marcialis, G. 
Conclusion

As digitalisation continues to expand into the humanitarian domain, the risks and threats facing society are evolving and becoming more complicated, on a large scale. In this paper we have illustrated AI-enabled use cases, which can be considered opportunities to build pragmatic tools, solutions, and services for addressing some current issues.

Several challenges also need to be taken into account. Reaching human-level performance is the most important challenge for AI: we can develop a model with 80-90% accuracy, whereas humans can achieve almost absolute precision in all the aforementioned use cases. It is therefore necessary to strike a balance and keep humans in the loop of AI systems and services. To achieve positive impact, AI systems and solutions need to adhere to ethical principles; to ensure that the impacts of AI systems remain positive and constructive, it is essential that we build in certain standards and safeguards. Data privacy is another critical challenge, since AI-based algorithms learn from and make predictions based on data, much of which is personal and sensitive. Such data can become the target of bad or unlawful intents. Hence, we need to consider whether and how to address the use of personal information in AI systems. We also need to seek appropriate methodologies that guarantee the protection of data while retaining the significant potential benefits of big data analytics.

Acknowledgments

This work is supported by the INTPART BDEM project (grant no. 261685/H30) funded by the Research Council of Norway (RCN) and the Norwegian Agency for International Cooperation and Quality Enhancement in Higher Education (Diku).

References

Alvarez, E. 2018. Facebook's approach to fighting fake news is half-hearted. https://www.engadget.com/2018/07/13/facebook-fake-news-half-hearted, accessed on 17.09.2020.

Antipov, G.; Baccouche, M.; Berrani, S.-A.; and Dugelay, J.-L. 2017. Effective training of convolutional neural networks for face-based gender and age prediction. Pattern Recognition 72: 15-26.

Apuke, O. D.; and Omar, B. 2020. Fake news and COVID-19: modelling the predictors of fake news sharing among social media users. Telematics and Informatics 101475.

Carey, J. M.; Nyhan, B.; Valentino, B.; and Liu, M. 2016. An inflated view of the facts? How preferences and predispositions shape conspiracy beliefs about the Deflategate scandal. Research & Politics 3(3): 1-9.

Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; and Floridi, L. 2018. Artificial intelligence and the 'good society': the US, EU, and UK approach. Science and Engineering Ethics 24(2): 505-528.

Cole, S. 2018. Deep Voice Software Can Clone Anyone's Voice With Just 3.7 Seconds of Audio. https://www.vice.com/en_us/article/3k7mgn/baidu-deep-voice-software-can-clone-anyones-voice-with-just-37-seconds-of-audio, accessed on 17.09.2020.

Cybenko, A. K.; and Cybenko, G. 2018. AI and fake news. IEEE Intelligent Systems 33(5): 1-5.

Cybenko, G.; Giani, A.; and Thompson, P. 2002. Cognitive hacking: A battle for the mind. Computer 35(8): 50-56.

Hemavathi, D.; Kavitha, M.; and Ahmed, N. B. 2017. Information extraction from social media: Clustering and labelling microblogs. In Proceedings of the 2017 International Conference on IoT and Application (ICIOT), Nagapattinam, India, 19-20 May 2017, 1-10. IEEE.

Huang, W.; Liu, X.; Luo, M.; Zhang, P.; Wang, W.; and Wang, J. 2019. Video-based abnormal driving behavior detection via deep learning fusions. IEEE Access 7: 64571-64582.

Ishii, Y.; Hongo, H.; Yamamoto, K.; and Niwa, Y. 2004. Face and head detection for a real-time surveillance system. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, UK, 26-26 August 2004, 298-301. IEEE.

Jain, D. K.; Shamsolmoali, P.; and Sehdev, P. 2019. Extended deep neural network for facial emotion recognition. Pattern Recognition Letters 120: 69-74.

Kang, H.-J. 2016. A Study on Analysis of Intelligent Video Surveillance Systems for Societal Security. Journal of Digital Contents Society 17(4): 273-278.

Kemmis, S.; McTaggart, R.; and Nixon, R. 2013. The action research planner: Doing critical participatory action research. Singapore: Springer Science & Business Media.

Ko, K.-E.; and Sim, K.-B. 2018. Deep convolutional framework for abnormal behavior detection in a smart surveillance system. Engineering Applications of Artificial Intelligence 67: 226-234.

Kwon, S.; Cha, M.; and Jung, K. 2017. Rumor detection over varying time windows. PLoS One 12(1): e0168344.

Long, N. H.; Nghia, P. H. T.; and Vuong, N. M. 2014. Opinion spam recognition method for online reviews using ontological features. Tạp chí Khoa học (61): 44-59.

Marcialis, G. L.; and Roli, F. 2003. Fusion of face recognition algorithms for video-based surveillance systems. In Foresti, G. L.; Regazzoni, C. S.; and Varshney, P. K., eds., Multisensor surveillance systems: the fusion perspective, 235-249. Boston, MA, USA: Springer.

McTaggart, R.; and Kemmis, S. 1988. The action research planner. Melbourne, Victoria, Australia: Deakin University.

Nawaratne, R.; Alahakoon, D.; De Silva, D.; and Yu, X. 2019. Spatiotemporal anomaly detection using deep learning for real-time video surveillance. IEEE Transactions on Industrial Informatics 16(1): 393-402.

Nguyen, H. L.; Lee, O.-J.; Jung, J. E.; Park, J.; Um, T.-W.; and Lee, H.-W. 2017. Event-driven trust refreshment on ambient services. IEEE Access 5: 4664-4670.

Olsen, O. E.; Kruke, B. I.; and Hovden, J. 2007. Societal safety: Concept, borders and dilemmas. Journal of Contingencies and Crisis Management 15(2): 69-79.

Pennisi, A.; Bloisi, D. D.; and Iocchi, L. 2016. Online real-time crowd behavior detection in video sequences. Computer Vision and Image Understanding 144: 166-176.

Pérez-Hernández, F.; Tabik, S.; Lamas, A.; Olmos, R.; Fujita, H.; and Herrera, F. 2020. Object detection binary classifiers methodology based on deep learning to identify small objects handled similarly: Application in video surveillance. Knowledge-Based Systems 194: 105590.

Robertson, N.; Reid, I.; and Brady, M. 2008. Automatic human behaviour recognition and explanation for CCTV video surveillance. Security Journal 21(3): 173-188.

Subrahmanian, V.; Azaria, A.; Durst, S.; Kagan, V.; Galstyan, A.; Lerman, K.; Zhu, L.; Ferrara, E.; Flammini, A.; and Menczer, F. 2016. The DARPA Twitter bot challenge. Computer 49(6): 38-46.

Tselentis, J. 2017. When websites design themselves. https://www.wired.com/story/when-websites-design-themselves, accessed on 17.09.2020.

WashPostPR. 2016. The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage. https://www.washingtonpost.com/pr/wp/2016/08/05/the-washington-post-experiments-with-automated-storytelling-to-help-power-2016-rio-olympics-coverage, accessed on 17.09.2020.

Zhou, S.; Shen, W.; Zeng, D.; Fang, M.; Wei, Y.; and Zhang, Z. 2016. Spatial-temporal convolutional neural networks for anomaly detection and localization in crowded scenes. Signal Processing: Image Communication 47: 358-368.