Information-Processing Machines and the Access-Conscious Recognition of Common Ground Inconsistencies: A Proposal

Maria Di Maro1, Mohamed Diaoulé Diallo2, and Francesco Cutugno1

1 University of Naples ‘Federico II’ {maria.dimaro2,cutugno}@unina.it
2 University of Bielefeld mdiallo@techfak.uni-bielefeld.de

Abstract. In this paper, we propose a theoretical framework for the recognition of common ground inconsistencies explained through access-conscious-like reasoning. Firstly, we present the theoretical background underlying the concept of access consciousness in information-processing systems. Then, we propose an example of “conscious” information processing in machines, namely the adoption of clarification requests negotiating grounded knowledge in human-machine interaction.

Keywords: Access-Consciousness · Common Ground · Clarification Requests.

1 Introduction

This work aims to propose a theoretical-experimental reading key for information-processing machines that have access to their internal mental-like information states and are capable of expressing the presence of grounded-knowledge inconsistencies through graph representations. Consciousness has been defined differently according to the scholars and disciplines dealing with it. At its simplest, it can be defined as the state of awareness of an internal or external condition or experience. In the field of artificial intelligence, many debates have focused on the possibility for computational systems to show consciousness. To address this topic in a specific human-machine interaction application, we focus in this work not on the hard problem of consciousness, but on the weak one, represented by the concept of Access Consciousness (A-Consciousness). A-Consciousness is described as the conscious access to a mental state in order to reason about it for the rational control of action and speech [2].
In other words, it represents the availability or accessibility of the content of a mental state for verbal report. It can be distinguished from Phenomenal Consciousness (P-Consciousness) which, conversely, concerns the subject's perception of a conscious experience. Although Block's distinction is commonly accepted, it is important to mention that some other philosophers, such as Lycan [7], identified other possible fine-grained classifications of consciousness, such as the distinction between organism consciousness, control consciousness (similar to Block's A-Consciousness), consciousness of, state/event consciousness, reportability, introspective consciousness, subjective consciousness, and self-consciousness.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

The process involved in A-Consciousness takes place together with the information-processing one. According to Block, a perceptual state is access-conscious if its content is processed via that information-processing function, that is, if its content gets to the Executive System, whereby it can be used to control reasoning and behavior [2]. This Executive System is what can be modelled in order to make some processes available and, therefore, a source of reasoning. This information-processing centre is what, in our restricted case study, will be called the Conflict Search Graph, representing one of the possible modules of an Executive System.

In this paper, we account for an information-processing system which applies access-conscious-like processes, in that it always has access to, or awareness of, its internal informational states and, moreover, produces information rather than just transmitting it, as it is capable of reasoning and acting upon its interpretation.
In the next sections, we firstly describe what information-processors are, as they represent one way to explain access consciousness in both biological and virtual systems. Afterwards, we present a specific case study concerning the conscious processing of inconsistencies of grounded information in conversation, and how these can result in specific linguistic behaviours in human-human and, likewise, in human-machine interaction, such as the adoption of particular forms of polar questions.

2 Information-processing Systems

Any system capable of taking information in one form and processing it into another is referred to as an information-processor. Information can be defined according to the processes that may be involved in the use of information itself. In [13], some of these processes are listed, as follows:

– external or internal actions triggered by information,
– segmenting, clustering, and labelling components within a structure (i.e. parsing),
– trying to derive new information from old (e.g. what caused this? what else is there? what might happen next? can I benefit from this?),
– storing information for future use (and possibly modifying it later),
– considering and comparing alternative plans, descriptions or explanations,
– interpreting information as instructions and obeying them, e.g. carrying out a plan,
– observing the above processes and deriving new information thereby (self-monitoring, self-evaluation, meta-management),
– communicating information to others (or to oneself later),
– checking information for consistency.

Some of the aforementioned processes, namely storing information for future use (and possibly modifying it later), carrying out a plan, and checking information for consistency, represent crucial aspects of the specific example of information-processing systems we are interested in. Specifically, we call them User Managed Task Applications. These applications require the user to take a leading role and the machine a following role.
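In an application of this kind, three of the listed operations (storing information, modifying it later, and checking it for consistency) interact directly. The following is a toy sketch under our own assumptions, with no claim about any actual implementation:

```python
# Toy information-processor: stores incoming facts, checks new information
# for consistency against what is already stored, and allows later revision.
# Item names and return strings are invented for illustration.

store: dict[str, str] = {}

def process(item: str, value: str) -> str:
    """Store a piece of information, flagging inconsistencies with old facts."""
    old = store.get(item)
    if old is not None and old != value:
        # An inconsistency was detected: this is what could trigger a
        # corrective action such as a clarification request.
        return f"inconsistent: {item} was {old}, now {value}"
    store[item] = value  # storing information for future use
    return "stored"

def revise(item: str, value: str) -> None:
    """Modify previously stored information once the conflict is resolved."""
    store[item] = value
```

Calling `process("door", "open")` stores the fact; a later `process("door", "closed")` is flagged as inconsistent until `revise` resolves it.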
In such situations, the information given by the users is new to the receiving system. This means that such systems take information as input and store it for future use in a learning perspective, in order to carry out a plan in the future. Such information can also be modified later, for instance when inconsistencies between two pieces of information occur. In fact, specific corrective actions, such as clarification requests (see Section 2.1), can be triggered by inconsistencies in the common ground in order to overcome them, therefore producing new information as in a conscious-like process [8]. We define common ground, following [14], as the mutually recognised shared information in a situation. In [4], four different types of common ground are described: local, personal, communal, and specialized. Nevertheless, here we consider only the more general categories of personal and communal common ground [4]. Whereas Communal Common Ground (CCG) is defined as the rule-based shared knowledge between individuals belonging to the same community, Personal Common Ground (PCG) refers to the fact-based knowledge built between two interlocutors, depending on the structural rules of the CCG. Although a User Managed Task Application does not have knowledge of the desired final state and of the steps to reach it, the system does have a) general knowledge of which actions are possible or not (i.e. CCG), and b) the ability to store the given steps in the contextual knowledge (i.e. PCG), where both knowledge structures are modelled in a graph database. When inconsistencies arise because of unobserved pre-conditions and post-conditions in either Common Ground, adequate linguistic actions can be adopted to solve the problem. This will be better described in the next section.

2.1 Common Ground Inconsistencies

The term grounding refers to the acknowledgement of the level of understanding of the received information with respect to the complex system of shared or given knowledge [3].
This cognitive-pragmatic process takes place in interaction and makes use of different linguistic and non-linguistic strategies, such as backchannels and clarification requests. Clarification Requests (CRs) are a type of corrective feedback used when a problem occurs in the processing of the previous utterance [9]. In Table 1, a classification of CRs based on corpus analysis of German and Italian map-tasks is presented [5]. Among the classes, the Information Processing one is listed. This class refers to the situation where the information received is not satisfactory for its full understanding, or where the grounded information needs to be stabilised or checked via a confirmation or a control-targeted question. Concerning the PCG, the system might indeed need to complete some received information before storing it in the PCG, based on the rules of the CCG, by asking specific clarification questions, which we call here Missing Information CRs. Commercial and academic systems already address this need, for example through slot-filling strategies. On the other hand, when the information needs a double check before its storing in the PCG can actually occur (based on the rules of the CCG), or when the received information clashes with what is already stored in the PCG, the system might need to use a Common Ground CR.

Communication Level      | Problem Trigger
-------------------------|--------------------------------------------------
Contact                  | Lack of attention
Perception               | Acoustics
Lexical Understanding    | Meaning, Ambiguity
Reference Reconstruction | NP Reference, Deictic Reference, Action Reference
Syntactic Understanding  | Analytical Ambiguity, Attachment Ambiguity, Coordination Ambiguity, Elliptical Ambiguity
Logical Understanding    | Cause Effect
Information Processing   | Missing Information, Common Ground
Intention Recognition    | Intention Inference
Intention Evaluation     | Agreeing/Disagreeing, Interest, Incredulity

Table 1. Clarification Requests Classification
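The decision between the two CR types can be sketched as follows. The action name, slot names, and question templates below are invented for illustration and do not reflect the authors' actual system:

```python
# Hypothetical decision procedure: incoming information either lacks a
# required element (-> Missing Information CR), clashes with the Personal
# Common Ground (-> Common Ground CR), or is stored as grounded.

REQUIRED_SLOTS = {"place": ["object", "location"]}  # toy CCG rule

pcg: dict[str, str] = {}  # Personal Common Ground (fact-based, contextual)

def ground(action: str, slots: dict[str, str]) -> str:
    """Ground the user's contribution or return the appropriate CR."""
    # Slot-filling check: is any frame element required by the CCG missing?
    for slot in REQUIRED_SLOTS.get(action, []):
        if slot not in slots:
            return f"Missing Information CR: which {slot} do you mean?"
    # Consistency check: does the new information clash with the PCG?
    for slot, value in slots.items():
        if slot in pcg and pcg[slot] != value:
            return f"Common Ground CR: didn't you say {pcg[slot]}?"
    pcg.update(slots)  # no problem detected: store as grounded knowledge
    return "grounded"
```

For instance, `ground("place", {"object": "book"})` yields a Missing Information CR for the unfilled location, while repeating the action with a different location afterwards yields a Common Ground CR.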
The Common Ground CR class is of particular interest here, since it perfectly represents what an information-processing system consciously does when checking for consistency, as mentioned in the previous sections. In the next section, more details concerning the processes and the strategies to adopt will be provided.

Questioning Grounded Knowledge: A Conflict Search Graph

Studies have already shown the importance of exploiting graphs to represent the dialogue information state [15]. The Executive System processing the information in an access-conscious way is therefore here represented by what we call the Conflict Search Graph. Our proposal is to have a graph in which the domain information (i.e. part of the CCG) is stored, and whose conflict search module can be used to signal which input does not respect the rules of the CCG and cannot, therefore, become part of the PCG. The graph will be built in Neo4j [16] starting from actions in the form of semantic frames, taken from FrameNet [1]. Semantic frames are defined as conceptual structures evoked by action words in the mind of a speaker. Each frame can be linguistically expressed when the action words are syntactically combined with phrases bearing specific semantic and syntactic roles, i.e. the frame elements. In order to represent the rules which may link different actions in the graph in a possible instantiation of a course of actions in the dialogue, each frame element can be enriched with additional information, that is i) pre-conditions, such as the initial state of an item before the action takes place, and ii) post-conditions, such as the final state of an item after the action has occurred (see [11, 12] for more information about pre- and post-conditions in dialogue management for cognitive systems). When the same item is used within different frames with different semantic roles, if the states do not correspond to what is expected according to the rules of the CCG, actions cannot be performed.
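As a sketch of this mechanism, condition-enriched frame elements and the conflict search can be mimicked in memory (standing in for the actual Neo4j graph; the frame names, states, and schema below are our illustrative assumptions):

```python
# Hypothetical Conflict Search Graph fragment: each frame element carries a
# pre-condition (required state of its filler before the action) and a
# post-condition (resulting state after it). The conflict search compares the
# pre-condition of an instantiated frame with the item's current state, which
# is recorded from the post-conditions of earlier actions.

FRAMES = {  # toy fragment of the CCG, loosely FrameNet-style
    "Closure": {"Container": {"pre": "open", "post": "closed"}},
    "Opening": {"Container": {"pre": "closed", "post": "open"}},
}

state: dict[str, str] = {}  # current item states, built from post-conditions

def conflict_search(frame: str, fillers: dict[str, str]):
    """Return (item, expected, actual) for the first conflict, else None."""
    for element, item in fillers.items():
        cond = FRAMES[frame][element]
        actual = state.get(item)
        if actual is not None and actual != cond["pre"]:
            return (item, cond["pre"], actual)  # pre-condition violated
    # no conflict: accept the action and record its post-conditions
    for element, item in fillers.items():
        state[item] = FRAMES[frame][element]["post"]
    return None

def common_ground_cr(conflict) -> str:
    """Verbalise the conflict; high-negation polar questions convey the
    clash between positive bias and negative evidence (template invented)."""
    item, expected, actual = conflict
    return f"Isn't the {item} {actual}?"
```

Closing a box once succeeds and records its post-condition; trying to close it again violates the pre-condition (the box is no longer open), and the conflict can be verbalised as a Common Ground CR.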
In other words, for each argument of a predicate evoking a specific instantiated semantic frame (k), when the pre-conditions or post-conditions of the semantic role of that argument (cond_i) are compliant with its current state (state_i-k), no conflicts arise and the information can be accepted as part of the PCG. More formally, a conflict is derived as

∃k : cond_i ∧ state_i-k ⇒ ⊥    (1)

If a conflict occurs, it must be corrected, or the current action cannot be performed. The fact that pre- and post-conditions are explicitly reported in the graph is not only useful to find the conflict, but also to explain why an action is not possible. This explanation can indirectly be expressed by means of a Common Ground CR.

The conflict expressed by this inconsistency lies between a positive original bias of the system towards specific post-conditions of a previous action and pre-conditions of the current action, and the negative contextual evidence of the current input clashing with that presupposed knowledge. As shown in [5], Common Ground CRs are mostly uttered in the form of polar questions. Furthermore, [6] demonstrates that this specific type of conflict between positive bias and negative evidence mostly occurs in the form of high negation polar questions. For this reason, we expect the system to recognise the type of conflict and generate the appropriate question to solve it.

3 Conclusions

This work is intended to be a theoretical overview of information-processing machines and a proposal of how to manage the production of linguistic behaviours derived from recognised common ground inconsistencies. As future work, we plan to examine the capability of the system to recognise such informational conflicts within the graph representation, and how to express this linguistically.
In fact, principles like robustness will be investigated, in that such signals can be exploited to make the human interlocutor aware of the internal state of the system (observability) in order to recover from the conflict through dialogue (recoverability).

References

1. Baker, C. F., Fillmore, C. J., Lowe, J. B.: “The Berkeley FrameNet Project.” In: 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1 (1998).
2. Block, N. J.: “On a Confusion About the Function of Consciousness.” In: Behavioral and Brain Sciences 18: 227–247 (1995). Reprinted in Block et al. (1997).
3. Clark, H. H.: Using Language. Cambridge University Press (1996).
4. Clark, E. V.: “Common ground.” In: The Handbook of Language Emergence. Wiley, Chichester, UK (2015): 328–353.
5. Di Maro, M., Buschmeier, H., Kopp, S., Cutugno, F.: “Clarification Requests Negotiating Personal Common Ground.” In: Proceedings of XPRAG.it (2020), accepted.
6. Domaneschi, F., Romero, M., Braun, B.: “Bias in polar questions: Evidence from English and German production experiments.” In: Glossa: a Journal of General Linguistics 2.1 (2017).
7. Lycan, W. G.: Consciousness and Experience. MIT Press (1996).
8. Marchetti, G.: “Consciousness: a unique way of processing information.” In: Cognitive Processing 19.3 (2018): 435–464.
9. Purver, M. R. J.: The Theory and Use of Clarification Requests in Dialogue. Diss., University of London (2004).
10. Putnam, H.: “The nature of mental states.” In: Capitan and Merrill (eds.), Art, Mind and Religion. University of Pittsburgh Press (1967).
11. Romero, O. J., Zhao, R., Cassell, J.: “Cognitive-Inspired Conversational-Strategy Reasoner for Socially-Aware Agents.” In: IJCAI (2017).
12. Sklar, E. I., Azhar, M. Q.: “Argumentation-based dialogue games for shared control in human-robot systems.” In: Journal of Human-Robot Interaction 4.3 (2015): 120–148.
13. Sloman, A., Chrisley, R.: “Virtual machines and consciousness.” In: Journal of Consciousness Studies 10.4-5 (2003): 133–172.
14. Stalnaker, R.: “Common ground.” In: Linguistics and Philosophy 25.5/6 (2002): 701–721.
15. Stoyanchev, S., Johnston, M.: “Knowledge-Graph Driven Information State Approach to Dialog.” In: Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence (2018).
16. Webber, J.: “A programmatic introduction to Neo4j.” In: Proceedings of the 3rd Annual Conference on Systems, Programming, and Applications: Software for Humanity (2012).