<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Revisit to the Incorporation of Context-awareness in Affective Computing Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Aggeliki</forename><surname>Vlachostergiou</surname></persName>
							<email>aggelikivl@image.ntua.gr</email>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Image Video and Multimedia Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
								<address>
									<addrLine>Iroon Polytexneiou 9</addrLine>
									<postCode>15780</postCode>
									<settlement>Zografou</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stefanos</forename><surname>Kollias</surname></persName>
							<email>stefanos@cs.ntua.gr</email>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Image Video and Multimedia Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
								<address>
									<addrLine>Iroon Polytexneiou 9</addrLine>
									<postCode>15780</postCode>
									<settlement>Zografou</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Revisit to the Incorporation of Context-awareness in Affective Computing Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">123E603A6059E01C9D698E85B45FA053</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:18+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>HCI</term>
					<term>Affective Computing</term>
					<term>Context</term>
					<term>Context modeling</term>
					<term>Context-aware systems</term>
					<term>SEMAINE</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The research field of Human Computer Interaction (HCI) is moving towards more natural, sensitive and intelligent paradigms. In domains such as Affective Computing (AC) in particular, incorporating interaction context has been identified as one of the most important requirements towards this goal. Nevertheless, research on modeling and utilizing context in affect-aware systems is quite scarce. In this paper, we revisit the definition of context in AC systems and propose a context incorporation framework based on semantic concept detection, extraction, enrichment and representation in cognitive estimation, to further clarify and assist the interpretation of contextual effects in Affective Computing systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Affective Computing (AC) systems have been developed over the past decades as an effective means of recognizing, interpreting, processing, and simulating human affect. Context-aware Affective Computing systems <ref type="bibr" target="#b1">[2]</ref> subsequently emerged as a way of moving towards affect-aware intelligent Human Computer Interaction systems, acknowledging that contextual information cannot be discounted in the automatic analysis of human affective behavior. The fundamental assumption of context-aware Affective Computing systems is that context is capable of shaping how people interpret the rich and complex expressions of people and machines. The variation of these expressions in human behavior arises not only from a subject's internal psychological or cognitive state but also from other subjects or the environment. For instance, frowning may be an indicator of anger, or it may be due to concentration, depending on the contextual interactional setting. Context-based affect-aware analysis therefore needs to clarify the preliminary selection of contextual variables in order to further assist the interpretation of contextual effects in Affective Computing systems. How to incorporate context into Affective Computing systems thus remains an open research question in this domain; prior to that, however, the question of which variables should be considered as context is still unresolved.</p><p>Currently, several context-aware Affective Computing systems <ref type="bibr" target="#b4">[5]</ref> have been developed, but very little research has gone back to discuss the definition of context. Researchers simply blend location, the identities of people around the user, the environment, social interaction and other variables together and consider them all as context, which further creates confusion about which decisions should be made first when building context-aware automatic affect analysis systems. The definition and exploration of context are not only related to the selection of contextual variables in Affective Computing systems, but are also relevant to the interpretation of contextual effects based on the outcomes or findings of experiments. In recent years, the field has clearly focused on the development of effective context-aware Affective Computing systems while neglecting the identification of contexts and the interpretation of contextual effects.</p><p>This paper is organized as follows: Section 2 provides an overview of how the term context has been studied so far in various disciplines. Section 3 presents an overview of my research and the preliminary findings of my previous work. Section 4 discusses my plan for achieving my overall objective in the remainder of my doctoral work and, finally, Section 5 concludes by summarizing the impact and relevance of the proposed approaches to the field of context-aware Affective Computing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>Context has been studied in various disciplines, such as ubiquitous computing, contextual advertising, social signal processing, HCI, gaming and mental health, where differing definitions have resulted in different understandings of context across those areas. In context-aware Affective Computing systems, the earliest research papers <ref type="bibr" target="#b4">[5]</ref> date back almost ten years; however, the field has yet to agree on a definition of context. Several researchers simply blend verbal content (semantics), knowledge of the general interactional setting, discourse and social situations and other variables together and consider all of them as context, which further creates confusion about which contextual parameters are the most appropriate for context-aware Affective Computing systems (Figure <ref type="figure" target="#fig_0">1</ref>).</p><p>The most commonly used definition is the one given by Abowd et al. in 1999 <ref type="bibr" target="#b0">[1]</ref>: "context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves." This definition hardly limits the set of variables that can be taken into account, and it remains ambiguous, offering no clear guidelines for selecting appropriate variables in AC systems. Evidently, the definition and selection of context is a domain-specific problem; classifying contextual variables into categories is a typical approach, but it is still neither general enough nor flexible enough for interpreting contextual effects. The debate may finally be settled by the idea proposed by Duric et al. <ref type="bibr" target="#b2">[3]</ref>, known as the W5+ formalization, which incorporates context by answering the following important questions: Who you are with (e.g. dyadic/multiparty interactions), What is communicated (e.g. a (non-)linguistic message), How the information is communicated (the person's affective cues), Why, i.e. in which context the information is passed on, Where the user is, What his current task is, How he/she feels (has his mood been polarized, changing from negative to positive?) and which (re)action should be taken to satisfy the human's needs.</p><p>Unfortunately, efforts on human affective behavior understanding have so far usually been context independent <ref type="bibr" target="#b5">[6]</ref>. In light of these observations, understanding the natural progression of context-related questions when people interact in a social environment could provide new insights into the mechanisms of their interaction context and affectivity. The Who, What and Where context-related questions have mainly been answered either separately or in groups of two or three, using information extracted from multimodal input streams <ref type="bibr" target="#b5">[6]</ref>. Thus, to date, no general W5+ formalization exists, because current systems that answer most of the W questions are founded on different psychological theories of emotion. Recent research on progressing to the questions of "Why" and "How" has led to the emerging field of sentiment analysis, mining opinions and sentiments from natural language, which involves a deep understanding of the semantic rules proper to a language.</p></div>
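To make the W5+ formalization concrete, the answers to its questions can be collected into one record per interaction turn. The Python sketch below is purely illustrative: the class name, field names and example values are our own assumptions, not part of the W5+ proposal in [3].

```python
# Illustrative only: a plain-data representation of the W5+ answers for
# one interaction turn. Class and field names are our own assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class W5PlusContext:
    who: list                 # interaction partners (dyadic/multiparty)
    what: str                 # the (non-)linguistic message communicated
    how: dict                 # the person's affective cues
    why: str                  # the context in which information is passed on
    where: str                # the interactional setting / location
    current_task: Optional[str] = None
    mood_polarity: Optional[str] = None   # e.g. "negative-to-positive"

turn = W5PlusContext(
    who=["user", "operator"],
    what="I bought a lot of very nice Easter presents",
    how={"face": "smile"},
    why="small talk during a SAL session",
    where="teleprompter room",
)
print(turn.who, turn.current_task)
```

Such a record makes explicit which W questions a given system actually answers and which it leaves open.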
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Preliminary Work</head><p>The key objective of my PhD research is to computationally identify, automatically extract and incorporate contextual information into affect aware recognition frameworks, with the aim of identifying context-aware emotional-specific patterns.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Context identification Framework</head><p>To address the need to understand whether and how context is incorporated into the automatic analysis of human affective behavior, we propose a novel context-aware incorporation framework (Fig. <ref type="figure" target="#fig_1">2</ref>) which (i) includes the detection and extraction of semantic context concepts, (ii) enriches a number of Psychological Foundations with sentiment values and (iii) enhances emotional models with context information and context concept representation in appraisal estimation <ref type="bibr" target="#b10">[11]</ref>.</p><p>As a first step, our preliminary results focus on bridging the gap at the concept level by exploiting the semantic, cognitive and affective information associated with the verbal content (semantics), which for the needs of our research is the contextual interactional information between the user and the operator in the SEMAINE database <ref type="bibr" target="#b7">[8]</ref>, keeping the "Where" context-related question fixed. The context concept-based annotation method that we are examining allows the system to go beyond a mere syntactic analysis of the semantics associated with fixed window sizes <ref type="foot" target="#foot_0">1</ref>. In most traditional annotation methods, emotions and contextual information are not always inferred from appraisals, and thus contextual information about the causes of an emotion is not taken into account <ref type="bibr" target="#b5">[6]</ref>.</p></div>
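A minimal, hypothetical skeleton of the three framework stages (i)-(iii) might look as follows; every function body is a placeholder and the toy sentiment lexicon is invented for illustration:

```python
# Placeholder skeleton of the three framework stages: (i) concept
# detection/extraction, (ii) sentiment enrichment, (iii) appraisal
# estimation. Function bodies and the toy lexicon are invented.
def detect_concepts(transcript):
    # (i) stand-in for semantic context concept detection
    return [w for w in transcript.lower().split() if w.isalpha()]

def enrich_with_sentiment(concepts, lexicon):
    # (ii) attach a sentiment value (default 0.0) to every concept
    return {c: lexicon.get(c, 0.0) for c in concepts}

def appraisal_estimate(enriched):
    # (iii) toy appraisal: mean sentiment over the contextual concepts
    if not enriched:
        return 0.0
    return sum(enriched.values()) / len(enriched)

toy_lexicon = {"nice": 0.6, "presents": 0.3}
concepts = detect_concepts("I bought a lot of very nice Easter presents")
score = appraisal_estimate(enrich_with_sentiment(concepts, toy_lexicon))
print(round(score, 3))
```

The real framework replaces each placeholder with the components described in Section 3.2: the semantic parser for stage (i), SentiWordNet for stage (ii) and the appraisal models of [11] for stage (iii).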
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Approach</head><p>In this section, the preliminary results of our proposed semantic context concept extractor, described in detail in <ref type="bibr" target="#b10">[11]</ref>, and its application to indicative examples that validate the proposed approach are presented: A. Data Corpus: The model is confronted with the SEMAINE corpus <ref type="bibr" target="#b7">[8]</ref>. This audiovisual corpus comprises manually-transcribed sessions with natural emotional displays. The sessions are recorded from two individuals, an operator and a user, interacting through teleprompter screens in two different rooms. The emotions were elicited with the sensitive artificial listener (SAL) framework, in which the operator assumes four personalities aiming to elicit positive and negative emotional reactions from the user. The agent's utterances are constrained by a script; however, some deviations from the script occur in the database. B. Pre-Processing: The pre-processing submodule first interprets all the affective valence indicators usually contained in the verbal content of the transcriptions, such as special punctuation, fully upper-case words, exclamation words and negations. Handling negation is an important concern in such a scenario, as it can reverse the meaning of the examined sentence. Secondly, it converts the text to lower case and, after lemmatizing it, splits each sentence into single clauses according to grammatical conjunctions and punctuation.</p><p>These n-grams are not used blindly as fixed word patterns but are exploited as a reference for the module, in order to extract multiple-word concepts from information-rich sentences. Thus, differently from other shallow parsers, the module can recognize complex concepts even when irregular verbs are used or when they are interspersed with adjectives and adverbs; for example, the concept "buy easter present" in the sentence "I bought a lot of very nice Easter presents". C. Semantic context concept parser: The aim of the semantic parser is to break sentences into clauses and then deconstruct those clauses into concepts. This deconstruction uses lexicons based on sequences of lexemes that represent multiple-word concepts extracted from ConceptNet, WordNet <ref type="bibr" target="#b8">[9]</ref> and other linguistic resources.</p><p>Under this view, the Stanford Parser 2 has been used together with the Python NLTK 3 ; a general assumption during clause separation is that, if a piece of text contains a preposition or subordinating conjunction, the words preceding this function word are interpreted not as events but as objects. Secondly, the dependency structure elements of each sentence are processed by means of the Stanford Lemmatizer. Each potential noun chunk associated with an individual verb chunk is paired with the stemmed verb in order to detect multi-word expressions of the form "verb plus object". The POS-based bigram algorithm extracts concepts, but in order to capture event concepts, matches between the object concepts and the normalized verb chunks are searched for. It is important to build the dependency tree before lemmatization, as swapping the two steps results in several imprecisions caused by the lower grammatical accuracy of lemmatized sentences. Each verb and its associated noun phrase are considered in turn, and one or more concepts are extracted from these. D. Opinion and Sentiment Lexicon: Current approaches to concept-level sentiment analysis mainly leverage existing affective knowledge bases such as ANEW, WordNet-Affect and SentiWordNet <ref type="bibr" target="#b3">[4]</ref>. For the needs of our current work, we use SentiWordNet, a concept-level opinion lexicon that contains multi-word expressions labeled with their polarity scores. <ref type="foot" target="#foot_3">4</ref></p></div>
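The "verb plus object" pairing described above can be illustrated with a deliberately tiny, self-contained sketch; the hand-made lemma and POS tables below stand in for the Stanford tools and the WordNet/ConceptNet resources actually used, so this is a toy reconstruction of the idea, not our implementation:

```python
# Tiny self-contained sketch of "verb plus object" event-concept
# extraction: lemmatize, drop stopwords, and attach object nouns to the
# most recent verb chunk. The lemma/POS tables are hand-made stand-ins
# for the Stanford tools and WordNet/ConceptNet resources.
LEMMAS = {"bought": "buy", "presents": "present"}
POS = {"buy": "VERB", "easter": "NOUN", "present": "NOUN",
       "nice": "ADJ", "very": "ADV"}
STOP = {"i", "a", "of", "lot", "the"}

def extract_event_concepts(sentence):
    tokens = [LEMMAS.get(w, w) for w in sentence.lower().split()
              if w not in STOP]
    concepts, current = [], []
    for tok in tokens:
        tag = POS.get(tok, "OTHER")
        if tag == "VERB":
            current = [tok]           # start a new verb chunk
        elif tag == "NOUN" and current:
            current.append(tok)       # attach object nouns to the verb
        elif tag == "OTHER":
            if len(current) > 1:      # close the current verb-object chunk
                concepts.append(" ".join(current))
            current = []
        # adjectives/adverbs are skipped, so "very nice" does not
        # break the chunk "buy ... easter present"
    if len(current) > 1:
        concepts.append(" ".join(current))
    return concepts

print(extract_event_concepts("I bought a lot of very nice Easter presents"))
# prints ['buy easter present']
```

Because adjectives and adverbs are skipped rather than treated as chunk boundaries, the irregular verb and its object still yield the event concept "buy easter present", mirroring the behavior described in the text.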
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Future Research</head><p>My future work will focus on the integration of previous preliminary findings and insights on context affect aware methodologies w.r.t. the production, intepretation and analysis level respectively. To achieve this goal, we have to deal with the following specific challenges: 1) Extraction and Representation: How can we extract and computationally measure contextually rich features w.r.t. the verbal content? 2) Learning: What are the proper learning methods needed to build models? 3) Incorporation: At what time unit should we make an emotion inference (e.g. at the frame or utterance level) and how should we measure the performance of our system?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Extraction and Representation</head><p>In Affective Computing, state-of-the-art algorithms perform well on individual sentences, but their accuracy drops dramatically because they fail to consider the context and the syntactic structure of the verbal content (transcriptions) at the same time. Based on the assumption that the context around a sentence or pair of sentences also plays an important role in determining sentiment, I plan to employ in my experiments a conditional random field (CRF) model <ref type="bibr" target="#b6">[7]</ref> to capture syntactic, structural and contextual features of sentences. To this aim, I propose to utilize the SEMAINE and RECOLA datasets <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b9">10]</ref> for both human-agent and human-human social and situational interactions<ref type="foot" target="#foot_4">5</ref>, as they provide continuous-time dimensional labels (valence-activation dimensional space).</p></div>
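As a sketch of what contextual features for such a CRF could look like, the function below builds one feature dictionary per sentence and copies cues from the neighbouring sentences, in the shape sequence-labeling toolkits typically consume; all feature names are invented and no CRF library is invoked here:

```python
# Sketch of per-sentence feature dictionaries with contextual features
# drawn from neighbouring sentences. All feature names are invented.
NEGATIONS = ("not", "no", "never")

def has_negation(sentence):
    return any(w in NEGATIONS for w in sentence.lower().split())

def sentence_features(sentences, i):
    sent = sentences[i]
    feats = {
        "bias": 1.0,
        "n_tokens": len(sent.split()),
        "has_negation": has_negation(sent),
        "has_exclamation": sent.endswith("!"),
    }
    if i > 0:                         # context: previous sentence
        feats["prev:has_negation"] = has_negation(sentences[i - 1])
    else:
        feats["BOS"] = True           # beginning of the dialogue
    if len(sentences) > i + 1:        # context: next sentence
        feats["next:has_exclamation"] = sentences[i + 1].endswith("!")
    else:
        feats["EOS"] = True           # end of the dialogue
    return feats

dialogue = ["I love this!", "No, I do not agree.", "Fine."]
X = [sentence_features(dialogue, i) for i in range(len(dialogue))]
print(X[1]["has_negation"], X[1]["prev:has_negation"])
```

The `prev:`/`next:` features are exactly the kind of cross-sentence evidence a CRF can weight jointly with the per-sentence features, which is what motivates the choice of a sequence model over an independent classifier.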
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Learning</head><p>In the learning stage, I plan to use discriminative learning methods and learn patterns of contextual emotion-specific segments in a supervised manner. My hypothesis is that training contextually rich classification systems on segments lacking clarity may lead to lower analysis rates, whereas using only segments with high clarity will lead to improved performance as well as more efficient training due to the decrease in training data. I will first extract contextually rich audiovisual descriptors from the above-mentioned emotional corpora <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b9">10]</ref> and then utilize these "socially contextually rich" segments with consistent emotional cues, together with their emotional labels, to train a discriminative model. In the case of the RECOLA dataset, in which only the first 5 minutes have been annotated, I will use active learning to build classifiers with less training data while expanding the labeled data pool.</p></div>
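The active learning step can be illustrated with a toy uncertainty-sampling loop (our own sketch, not the planned implementation): at each round the segment whose decision score lies closest to the boundary is queried for a label and moved out of the unlabeled pool. Segment ids and scores are invented.

```python
# Toy pool-based active learning with uncertainty sampling: the segment
# whose decision score is closest to the boundary (0.0) is queried
# first. Segment ids and scores are invented for illustration.
def most_uncertain(scores):
    return min(scores, key=lambda seg: abs(scores[seg]))

pool_scores = {"seg_a": 0.9, "seg_b": -0.05, "seg_c": 0.4}
labeled, budget = [], 2
for _ in range(budget):
    pick = most_uncertain(pool_scores)
    labeled.append(pick)      # in practice: ask an annotator for its label
    del pool_scores[pick]     # remove the segment from the unlabeled pool
    # in practice: retrain the classifier and recompute pool_scores here
print(labeled)                # seg_b first (score closest to 0), then seg_c
```

This is exactly the regime the unannotated remainder of RECOLA calls for: labels are requested only where the current model is least certain, expanding the labeled pool with minimal annotation effort.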
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Incorporation</head><p>Finally, the goal at the interpretation stage is to show that incorporating context can both improve the system's performance and disambiguate multimodal behaviors. To this aim, I propose to first identify "socially contextually rich" segments of the test data and incorporate emotion at the segment level using the learned weights of the discriminative models. I plan to conduct both classification and regression tasks. I hypothesize that classifying the "socially contextually rich" segments can improve regression over all the data, since the captured, longer-range information may be more useful. I will predict continuous-valued labels in the dimensional space using regression models such as Support Vector Regression. I will also cluster and define new emotion classes from the dimensional labels and identify these classes using Support Vector Machines (SVM). The SVM outputs will be combined to infer dimensional-space outputs.</p></div>
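As a simple stand-in for the proposed clustering of dimensional labels, the helper below maps valence-activation values to the four quadrants of the dimensional space; the plan above uses data-driven clustering instead, and the class names here are illustrative only:

```python
# Simplest possible stand-in for deriving discrete emotion classes from
# continuous valence-activation labels: map each point to a quadrant of
# the dimensional space. The class names are illustrative only.
def quadrant_class(valence, activation):
    if valence >= 0 and activation >= 0:
        return "happy/excited"
    if valence >= 0:
        return "relaxed/content"      # positive valence, low activation
    if activation >= 0:
        return "angry/afraid"         # negative valence, high activation
    return "sad/bored"                # negative valence, low activation

print(quadrant_class(0.7, 0.5), quadrant_class(-0.6, -0.2))
# prints happy/excited sad/bored
```

In the planned pipeline, the SVM would be trained on such derived classes while the SVR predicts the underlying continuous values, and the two outputs would then be combined.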
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this paper, we point out the motivation for and importance of identifying and defining the incorporation of context in Affective Computing systems, especially when the disambiguation of a multimodal behavior depends on the contextual interactional setting. We then propose a context incorporation framework based on semantic concept detection, extraction, enrichment and representation in cognitive estimation. Finally, we provide relevant analysis and outline the work that remains to be done. In future work, we would like to explore the interpretation of contextual effects on affect production, interpretation and analysis respectively.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: Number of variables that have been considered so far as contextual parameters in the area of Affective Computing.</figDesc><graphic coords="3,169.35,115.84,276.67,160.47" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: System's Overview: (a) We discover semantic context concepts from verbal content (semantics) associated with SEMAINE dataset and (b) represent each one with multi-word expressions, enhanced with sentiment values (c) to enrich a number of Psychological Foundations (d).We finally show that this proposed approach could show a clear connection between semantics, cognitive and affective information prediction (e).</figDesc><graphic coords="5,177.99,115.84,259.38,151.35" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The window length corresponds to 16 conversational turns and is displayed on figures for future visualization purposes.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">http://nlp.stanford.edu:8080/parser/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://nltk.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">To avoid SentiWordNet's multiple interpretations, a combination of the following methods has been examined: a) POS tagging to reduce the number of candidate senses, b) cosine similarity between the sentence and the gloss of each sense of the word in WordNet and c) the "SenseRelate" method to measure the "WordNet similarity" between different senses of the target word and its surrounding words.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">In the SEMAINE corpus, the situation is determined by the user's response, while in the RECOLA corpus the situational context is defined by the roles during the collaborative, intensive and interactional task.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Towards a better understanding of context and context-awareness</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Abowd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Dey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Davies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Steggles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handheld and Ubiquitous Computing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="304" to="307" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Context-aware computing: The CyberDesk project</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Dey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI 1998 Spring Symposium on Intelligent Environments</title>
				<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Duric</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">D</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Heishman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rosenfeld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Schoelles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schunn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wechsler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE</title>
				<meeting>the IEEE</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">90</biblScope>
			<biblScope unit="page" from="1272" to="1289" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">SentiWordNet: A publicly available lexical resource for opinion mining</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th Conference on Language Resources and Evaluation (LREC06)</title>
				<meeting>the 5th Conference on Language Resources and Evaluation (LREC06)</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="417" to="422" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Bidirectional LSTM networks for improved phoneme classification and recognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Graves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications -Volume Part II</title>
				<meeting>the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications -Volume Part II</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="799" to="804" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Categorical and dimensional affect analysis in continuous input: Current trends and future directions</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schuller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Image and Vision Computing</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="120" to="136" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Conditional random fields: Probabilistic models for segmenting and labeling sequence data</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Lafferty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mccallum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">C N</forename><surname>Pereira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighteenth International Conference on Machine Learning</title>
				<meeting>the Eighteenth International Conference on Machine Learning</meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers Inc</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="282" to="289" />
		</imprint>
	</monogr>
	<note>ICML &apos;01</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent</title>
		<author>
			<persName><forename type="first">G</forename><surname>Mckeown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Valstar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cowie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pantic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schroder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Affective Computing</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="5" to="17" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">WordNet: a lexical database for English</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="39" to="41" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Ringeval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sonderegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lalanne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">10th International Conference on Automatic Face and Gesture Recognition (FG)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">HCI and natural progression of context-related questions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vlachostergiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Caridakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Raouzaiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kollias</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Human-Computer Interaction: Design and Evaluation</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="530" to="541" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
