<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The CL-Aff Happiness Shared Task: Results and Key Insights</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Kokil</forename><surname>Jaidka</surname></persName>
							<email>jaidka@ntu.edu.sg</email>
							<affiliation key="aff0">
								<orgName type="institution">Nanyang Technological University</orgName>
								<address>
									<country key="SG">Singapore</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">University of Pennsylvania</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Saran</forename><surname>Mumick</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Pennsylvania</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">Megagon Labs</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Niyati</forename><surname>Chhaya</surname></persName>
							<affiliation key="aff3">
								<orgName type="institution">Adobe Research</orgName>
								<address>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lyle</forename><surname>Ungar</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Pennsylvania</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The CL-Aff Happiness Shared Task: Results and Key Insights</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A56A56BAC1F4075D2715EE46F240B514</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T03:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This overview describes the official results of the CL-Aff Shared Task 2019: in Pursuit of Happiness. The Shared Task comprised a semi-supervised classification task and an open-ended knowledge modeling task over a dataset of more than 80,000 brief autobiographical accounts of happy moments, crowdsourced from Amazon Mechanical Turk. The Shared Task was organized as a part of the 2nd Workshop on Affective Content Analysis @ AAAI-19, held in Honolulu, USA on January 27, 2019. This paper compares the participating systems in terms of their accuracy and F-1 scores at predicting two facets of happiness. The complete annotated dataset is available on Harvard Dataverse at https://goo.gl/3rcZqf. The annotation instructions and the scripts used for evaluation are available in the GitHub repository at https://github.com/kj2013/claff-happydb.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The purpose of the CL-Aff Shared Task is to challenge the current understanding of emotion through a task that models the experiential, contextual and agentic attributes of happy moments. It has long been known that human affect is context-driven, and that labeled datasets should account for these factors in generating predictive models of affect. The Shared Task is organized in collaboration with researchers at Megagon Labs and builds upon the HappyDB dataset <ref type="bibr" target="#b0">[1]</ref>, comprising human accounts of 'happy moments'. The Shared Task comprised two sub-tasks for analyzing happiness and well-being in written language, on a corpus of over 80,000 descriptions of happy moments, as described here: Given: An account of a happy moment, marked with the individual's demographics, recollection time and relevant labels.</p><p>-Task 1: Semi-supervised classification task -Predict thematic labels (Agency/Sociality) on unseen data, based on a small labeled set and a large unlabeled training set. <ref type="foot" target="#foot_0">5</ref>-Task 2: Suggest interesting ways to automatically characterize the happy moments in terms of affect, emotion, participants and content.</p><p>The task, given its predictive and open-ended interpretive aspects, is relevant for the computational linguistics, natural language processing, artificial intelligence and psycholinguistics communities. The aim is to engage scholarly interest and crowdsource new ideas and linguistic approaches to define happiness. Details on the psycholinguistic underpinnings of the annotation task are provided in a different, forthcoming paper <ref type="bibr" target="#b4">[5]</ref>.</p><p>Evaluation: The performance of systems was compared based on their Accuracy and F-1 measure at predicting the Agency and Sociality labels on the unseen test dataset. 
This was done using an automatic evaluation script, available on GitHub<ref type="foot" target="#foot_1">6</ref>.</p></div>
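The two reported metrics are standard; the following is a minimal sketch of Accuracy and F-1 over binary yes/no labels, for illustration only (the evaluation script in the GitHub repository above is the authoritative implementation):

```python
from typing import List

def accuracy(gold: List[str], pred: List[str]) -> float:
    """Fraction of moments whose predicted label matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1(gold: List[str], pred: List[str], positive: str = "yes") -> float:
    """Harmonic mean of precision and recall for the positive ('yes') class."""
    tp = sum(g == p == positive for g, p in zip(gold, pred))
    pred_pos = sum(p == positive for p in pred)
    gold_pos = sum(g == positive for g in gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / pred_pos, tp / gold_pos
    return 2 * precision * recall / (precision + recall)

gold = ["yes", "yes", "no", "no", "yes"]
pred = ["yes", "no", "no", "yes", "yes"]
print(accuracy(gold, pred))         # 0.6
print(round(f1(gold, pred), 3))     # 0.667
```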
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Dataset description</head><p>The CL-Aff corpus comprises the following:</p><p>-Labeled training set (N = 10,560): Single-sentence happy moments from the available HappyDB corpus, annotated with the demographic labels of the author, with labels that identify the 'agency' of the author and the 'social' characteristic of the moment, and with concept labels describing its theme. Authors' demographic labels were available to the Shared Task participants, but not the 'agency' or 'social' characteristics.</p><p>The Agency and Sociality characteristics of each happy moment were decided by a simple majority agreement between three independent annotators using a binary (yes/no) coding.</p></div>
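The majority-agreement rule described above can be sketched as follows; this is an illustration of the rule as stated, not the organizers' actual adjudication code:

```python
def majority_label(votes):
    """Resolve independent binary annotations ('yes'/'no') by simple majority.

    With three annotators and a binary coding, a tie is impossible."""
    yes_votes = sum(v == "yes" for v in votes)
    return "yes" if yes_votes > len(votes) - yes_votes else "no"

print(majority_label(["yes", "no", "yes"]))  # yes
print(majority_label(["no", "no", "yes"]))   # no
```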
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Corpus development</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Collecting the happy moments</head><p>We followed the format of the original HappyDB AMT task <ref type="bibr" target="#b0">[1]</ref> to collect a second dataset of 20,000 happy moments, which was to be the unseen test data in the CL-Aff Shared Task. The following instructions were provided to the workers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Instructions</head><p>What made you happy? Reflect on the past &lt;duration&gt;, and recall three actual events that happened to you that made you happy. Describe your happy moments with a complete sentence. Write three such moments. You will also be asked to note for how long each event made you happy. This task also has post-task questions. Please be sure to answer the questions. Examples of happy moments we are NOT looking for (e.g., events in distant past, incomplete sentence): The day I married my spouse; My dog. &lt; Enter moment here &gt; For how long did that event make you happy? Select the answer that is most appropriate.</p><p>Each AMT worker was required to enter three happy moments experienced within a specific time period. Half of the questionnaires specified a time period of 24 hours, while the other half specified a time period of 3 months. The options provided for the follow-up question about the duration (i.e., the length) of happiness were 'All day, I'm still feeling it,' 'Half a day,' 'At least one hour,' 'A few minutes' or 'Not Applicable.' After the participant answered these questions, demographic information was collected about their country, age, gender ('Male', 'Female', 'Other', 'Not Applicable'), marital status ('single', 'married', 'divorced', 'separated', 'widowed' or 'Not Applicable'), and whether or not they have children ('yes', 'no').</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Annotation</head><p>Annotators were required to annotate each moment along two binary dimensions -Agency and Sociality. We draw from Paulhus's conceptualization of self-presentation according to the two factors of Agency and Communion <ref type="bibr" target="#b6">[7]</ref>. Previous work exploring the evidence of agency in writing has adapted it to mean the author's locus of control, or the degree to which an author is in control of their surroundings <ref type="bibr" target="#b8">[9]</ref>. Sociality conceptualizes interpersonal engagement, evinced in writing as the description of any activity performed with or in the company of others <ref type="bibr" target="#b5">[6]</ref>.</p><p>Instructions Read the following happy moment. Choose any of the following that applies:</p><p>Agency: Is the author in control? YES/NO Examples of sentences where the author is in control (Answer is YES):</p><p>-"I ran on the treadmill for 20 minutes straight when I could barely do 5 minutes 3 months ago." -"Going out to a special birthday lunch for my great-grandmother in law's birthday."</p><p>Examples of sentences where the author is not in control (Answer is NO):</p><p>-"My youngest daughter got accepted to many prestigious universities and accepted an offer to attend college in San Diego."</p><p>-"A small business deal change over for small profit."</p><p>Social: Does this moment involve other people other than the author? YES/NO Please note that objects (e.g., bus, work) should not be counted as social.</p><p>Examples of sentences which involve other people (Answer is YES):</p><p>-"Going out to a special birthday lunch for my great-grandmother in law's birthday." -"My youngest daughter got accepted to many prestigious universities and accepted an offer to attend college in San Diego."</p><p>Note that sometimes a person is implicitly involved although not explicitly mentioned. In this case, we still wish to label the happy moment as social. 
E.g., "I received compliments on my tattoo."</p><p>Examples of sentences which are not social (Answer is NO):</p><p>-"I ran on the treadmill for 20 minutes straight when I could barely do 5 minutes 3 months ago." -"The bus came on time, so I reached work early." &lt;Happy moment appears here&gt; Agency: Is the author in control? YES/NO Social: Does this moment involve other people other than the author? YES/NO</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Topic labeling</head><p>Annotators were presented with a happy moment and a set of four candidate topics that it was likely describing. Annotators were asked to mark all the tags that described what the moment was about. A tag was retained if at least two annotators agreed on it, so each moment could receive a maximum of four tags.</p><p>Instructions Read the following text. Select all categories that are relevant to the text from among those provided. If none of the categories is a great fit, select "none of the above" &lt;Topic 1&gt; &lt;Topic 2&gt; &lt;Topic 3&gt; &lt;Topic 4&gt;</p></div>
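The aggregation rule above (a tag is kept when at least two annotators select it, up to four tags per moment) can be sketched as follows; the function name and data layout are illustrative, not the organizers' code:

```python
from collections import Counter

def agreed_tags(annotations, min_votes=2, max_tags=4):
    """Keep each candidate topic tag selected by at least `min_votes` annotators,
    capped at `max_tags` tags per moment."""
    counts = Counter(tag for annotator in annotations for tag in annotator)
    kept = [tag for tag, c in counts.most_common() if c >= min_votes]
    return kept[:max_tags]

# Three annotators' selections for one happy moment:
selections = [["family", "party"], ["family"], ["party", "food"]]
print(agreed_tags(selections))  # ['family', 'party']
```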
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Overview of Approaches</head><p>Eleven teams participated in the Shared Task. The following paragraphs discuss the approaches followed by the participating systems, sorted in the order in which they signed up to participate in the task.</p><p>-Arizona State University (ASU) <ref type="bibr" target="#b9">[10]</ref>: The team from ASU proposed a Word Pair Convolutional Model (WoPCoM) to accomplish Task 1. The proposed model is motivated by the hypothesis that a small set of word-pair features is important for capturing the agency/social nature of happy moments. They trained a convolutional neural network (CNN) to predict on the unlabeled data. -University of California Santa Cruz (UCSC) <ref type="bibr" target="#b14">[15]</ref>: The UCSC team participated in both tasks. For Task 1, they explored the use of syntactic, emotional, and survey features with semi-supervised learning, specifically experimenting with XGBoosted Forest and CNN models. For Task 2, the team trained similar models to predict concepts, and based on the difficulty of doing so, hypothesized about the nature of the themes in the happy moments. -International Institute of Information Technology Hyderabad (IIIT-H) <ref type="bibr" target="#b11">[12]</ref>:</p><p>The IIIT-H team employed an inductive transfer learning technique (ITL). They pre-trained an AWD-LSTM neural net on the WikiText-103 corpus, and then introduced an extra step to adapt the model to happy moments. -Gyrfalcon <ref type="bibr" target="#b10">[11]</ref>: The team from Gyrfalcon Technology, California, proposed an algorithm to map English words into square glyph images. Then, they applied a 2D-CNN model over these images in order to capture the sentiment. -A*STAR <ref type="bibr" target="#b3">[4]</ref>: The IHPC-A*STAR team participated in both tasks. For Task 1, they used emotion intensity in happy moments to predict agency and sociality labels. 
They defined a set of five emotions (valence, joy, anger, fear, sadness) and used a previously developed tool, CrystalFeel, to label each moment with the corresponding five emotion intensities. Combining these features with additional word-embedding features, they trained a logistic regression model. For Task 2, the team explored how these different emotions are manifested across the different concept labels. -University of British Columbia (UBC) <ref type="bibr" target="#b7">[8]</ref>: The UBC team primarily experimented with different embedding methods, such as CoVe and ELMo, on deep neural networks. They modeled their neural networks as long short-term memory (LSTM) networks and BiLSTMs, with and without attention. -University of Ottawa (UOttawa) <ref type="bibr" target="#b15">[16]</ref>: The University of Ottawa team also proposed a deep learning CNN solution. They experimented using different kinds of word embeddings, and also experimented with training a multi-task classifier to see whether performance could be enhanced by shared knowledge between agency and sociality. -Escuela Superior Politecnica del Litoral (ESPOL) <ref type="bibr" target="#b13">[14]</ref>: The ESPOL team proposed a semi-supervised adaptation to traditional k-means clustering using neural networks. -Sungkyunkwan team (SKKU) <ref type="bibr" target="#b1">[2]</ref>: The SKKU team used a semi-supervised approach. They built four one-class autoencoder models, one each for social, non-social, agentic, and non-agentic moments. Each autoencoder model had a deep learning architecture consisting of two neural networks, one for encoding the input, and the other for reconstructing the compressed vector. 
-Jordan University of Science and Technology (JUST) <ref type="bibr" target="#b12">[13]</ref>: The JUST team used a recurrent convolutional neural network, and combined words with their context in order to get a more precise word embedding.</p><p>-Fraunhofer (FKIE) <ref type="bibr" target="#b2">[3]</ref>: The team from Fraunhofer FKIE trained a three-layer CNN. They experimented with using different embeddings including FastText and GloVe. Additionally, they experimented with splitting the dataset by the demographic location of the author, and showed that training separate classifiers on the splits enhanced performance.</p></div>
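Many of the systems above enrich a supervised classifier with the unlabeled moments. As a toy illustration of the general semi-supervised idea (self-training on confident pseudo-labels), the following sketch uses a deliberately simple keyword scorer as a stand-in for any real classifier; every name here is hypothetical and no team's actual pipeline is reproduced:

```python
from collections import Counter, defaultdict

def train(texts, labels):
    """Toy 'model': per-class keyword counts (stand-in for a real classifier)."""
    words = defaultdict(Counter)
    for text, label in zip(texts, labels):
        words[label].update(text.lower().split())
    return words

def predict(model, text):
    """Return the label whose keywords best match the text, with a crude confidence."""
    scores = {y: sum(c[w] for w in text.lower().split()) for y, c in model.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return best, scores[best] / total

def self_train(labeled, unlabeled, threshold=0.8, rounds=2):
    """Repeatedly add confidently pseudo-labeled moments to the training set."""
    texts, labels = list(labeled[0]), list(labeled[1])
    pool = list(unlabeled)
    for _ in range(rounds):
        model = train(texts, labels)
        remaining = []
        for text in pool:
            label, conf = predict(model, text)
            if conf >= threshold:
                texts.append(text)
                labels.append(label)
            else:
                remaining.append(text)
        pool = remaining
    return train(texts, labels)

# Tiny demo: two labeled moments ('yes'/'no' for Sociality) plus unlabeled ones.
model = self_train((["i went running alone", "we had dinner with friends"], ["no", "yes"]),
                   ["dinner with my friends", "running on the treadmill alone"])
print(predict(model, "friends at dinner")[0])  # yes
```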
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Task 1: Predicting Agency and Sociality</head><p>This section compares the participating systems in terms of their performance. Four of the eleven systems that did Task 1 also did the bonus Task 2. The results are provided in Table <ref type="table">1</ref>. The detailed implementations of the individual runs are described in the system papers included in this proceedings volume.</p><p>Fig. <ref type="figure">1</ref>: Accuracy scores for the best performing system runs on CL-Aff Task 1 for each of the participating teams</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Task 2: Happiness Insights</head><p>Some of the systems used their neural models of happiness for Task 1 to produce visual knowledge representations <ref type="bibr" target="#b10">[11]</ref>, and general insights about happiness <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b2">3]</ref>. Most notably, Gyrfalcon <ref type="bibr" target="#b10">[11]</ref> transformed textual moments into visualizations to explore whether they could encode more multi-dimensional information in this manner. UBC <ref type="bibr" target="#b7">[8]</ref> provided a visualization for "attention" in their bi-directional long short-term memory networks, which highlights the patterns the network considered important when predicting Agency and Sociality for an input sequence of words. ASU <ref type="bibr" target="#b9">[10]</ref> showed the codependence of the individual Agency and Sociality labels across the dataset through a t-SNE visualization. Fraunhofer <ref type="bibr" target="#b2">[3]</ref> and UCSC <ref type="bibr" target="#b14">[15]</ref> both attempted to capture the linguistic patterns in the construction of happiness and their potential cultural underpinnings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Error Analysis</head><p>In this section, we present a meta-analysis of system performances for Task 1 over all the (a) topics and (b) moments in the test set. Table <ref type="table">1</ref>: Systems' performance in Task 1, ordered by their accuracy on predicting Agency.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Agency Sociality System</head><p>Accuracy F1 Accuracy F1 UBC <ref type="bibr" target="#b7">[8]</ref> .85 .9</p><p>.92 .93 ASU <ref type="bibr" target="#b9">[10]</ref> .85 .89 .91 .92 IIIT-H <ref type="bibr" target="#b11">[12]</ref> .84 .89 .92 .93 JUST <ref type="bibr" target="#b12">[13]</ref> Run 1 Supervised CNN (2,3,4,5) on GloVe features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 2 Semi-Supervised CNN (2,3,4) on GloVe features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 3 Semi-Supervised CNN (2,3,4,5) on GloVe features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 4 Supervised CNN (2,3,4) on GloVe features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 5 Supervised CNN (2,3,4) on GloVe, syntactic and emotion features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 6 Semi-Supervised CNN (2,3,4,5) on GloVe features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 7 Supervised XGBoosted Forest on syntactic and emotion features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 8 Semi-Supervised XGBoosted Forest on syntactic and emotion features UCSC <ref type="bibr" target="#b14">[15]</ref> Run 9 Semi-Supervised CNN (2,3,4) on GloVe, syntactic and emotion features UOttawa <ref type="bibr" target="#b15">[16]</ref> CNN Multi-task learning on GloVe UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 1 50 dimensions, 10 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 2 100 dimensions, 10 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 3 25 dimensions, 10 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 4 200 dimensions, 10 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 5 25 dimensions, 50 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 6 50 dimensions, 50 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 7 
100 dimensions, 50 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 8 200 dimensions, 50 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 9 50 dimensions, 200 epochs UOttawa <ref type="bibr" target="#b15">[16]</ref> Run 10 200 dimensions, 100 epochs In their data pre-processing step, the team from Fraunhofer <ref type="bibr" target="#b2">[3]</ref> identified that, in the subset of happy moments contributed by authors from India alone, there were duplicate or near-duplicate happy moments in the data, which reduced the total number of training samples by 25%. We will include data cleaning as an extra preprocessing step in future data releases.</p><p>Topic-level analysis: We expect that happiness in different situations would be experienced and expressed differently. Table <ref type="table" target="#tab_1">3</ref> aggregates the failures produced by each of the approaches (out of the set of best approaches submitted by each of the teams).</p><p>Moment-level meta-analysis: We suspect that some of the errors in our data may occur due to mislabeling or the coding scheme not being applicable to the moment. In Table <ref type="table" target="#tab_2">4</ref> we provide the happy moments for which 100% of the best approaches submitted by each of the teams reported failure. We observe that in some of the cases (e.g., "Topanga running away to Cory"), the happy moment was actually mislabeled, and thus the systems actually made the correct prediction. Overall, many of the happy moments in this Table describe a single moment in the author's life, which seem ordinary when considered in the context of regular living. In some cases, the authors have attempted to explain why this moment was special to them (e.g., the second part of the moment "I finally got a hold of my auto mechanic, and that enabled me to schedule a time to bring in my car to get my custom exhaust installed" only serves to explain the significance of the moment to the author.) 
Agency (10% noise) I was given off phone work for the day. i was promoted in my job and i felt so appreciated and happy. Children of different races playing together at pool. I spoke to a friend on the phone that I hadn't heard from in two months. I realized earlier today that last month I made twice as much as my income for the same month the previous year. I won a new lawnmower, which I desperately needed, at a local event. My son singing songs with me. seeing a co worker I was able to talk to my boyfriend on the phone for an hour even though he is on vacation. i slept well last night -no nightmares I got a raise at work My father gifted me a car on my birthday that left me surprised and extremely happy. I won a lucky draw two weeks back to a five star resort for two days, which made me really exiting and joyfull.</p><p>Social (5% noise) I was applying for jobs for many months and finally got an interview and a offer later. I learned that we're moving into a bigger, better apartment for less money. I won the first prize in cricket match. I was happy when I was offered to work on a new television show Having my hair cut and it turning out just the way I wanted it to look. I had a job interview this morning and I think it went well. Topanga running away to Cory. Being brought McDonald's for lunch. I was gifted a very nice bottle of wine. I got an iphone 7 as gift I forgot my credit card and was given a free chicken biscuit and drink anyway. A package I ordered that I was anticipating was delivered to me. I received approval to take a month off of work to go on a backpacking trip. Got a small raise at work and even it doesn't amount to much per check, it's still something. When i return home my house was very clean and nice,and the yard was mowed.</p><p>Eleven teams participated in the inaugural CL-Aff Shared Task AAAI-19. We have published the complete dataset to Harvard Dataverse<ref type="foot" target="#foot_2">8</ref> . 
Furthermore, we expect to release other resources complementary to the challenges of modeling affect and emotion from language.</p><p>In summary, our meta-analysis of system performance identifies the following key takeaways and recommendations:</p><p>-Predictive modeling approaches are greatly improved when modeled as a semi-supervised task, enriched with unlabeled data or by knowledge or feature vectors trained from a different domain. This also highlights the generalizability of the Shared Task to other domains. -Syntactic knowledge is important for modeling Agency and Sociality (and hence, for modeling happiness). Participants incorporated the importance of the head noun and subject-verb-object word order in their language models either through interacting layers in convolutional neural networks, or by mining it using lexical pattern analysis methods. -The CL-Aff dataset offers replicability of more traditional emotion modeling approaches. It was feasible to apply the models developed on other annotated emotion datasets to improve the predictive modeling performance on the Shared Task <ref type="bibr" target="#b3">[4]</ref>. We anticipate that language models from the CL-Aff dataset will also generalize well to other problems and datasets for emotion and affect analysis. -In future work, scholars could consider training their classifiers based on domain-specific word embeddings derived from the Shared Task dataset itself. -Findings support the emerging notion about the English language as a contextualized emotional vector space, with the best performances reported by approaches that incorporated task-specific embeddings from other language models, such as ELMo and CoVe.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>-</head><label></label><figDesc>Unlabeled training set (N = 59,846): The remaining single-sentence HappyDB happy moments with only the demographic labels of the author. 
-Test set: (N = 17,215) Previously unreleased, single-sentence happy moments, freshly collected in the same manner as the original HappyDB data.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2 :</head><label>2</label><figDesc>Legend for Task 1 System Runs.</figDesc><table><row><cell>Run 10</cell><cell>.83</cell><cell>.89</cell><cell>.91</cell><cell>.92</cell></row><row><cell>UCSC [15] Run 1</cell><cell>.83</cell><cell>.88</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UCSC [15] Run 2</cell><cell>.83</cell><cell>.88</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UCSC [15] Run 3</cell><cell>.83</cell><cell>.88</cell><cell>.89</cell><cell>.9</cell></row><row><cell>JUST [13] Run 1</cell><cell>.83</cell><cell>.88</cell><cell>.91</cell><cell>.91</cell></row><row><cell>JUST [13] Run 4</cell><cell>.83</cell><cell>.88</cell><cell>.91</cell><cell>.92</cell></row><row><cell>JUST [13] Run 5</cell><cell>.83</cell><cell>.88</cell><cell>.9</cell><cell>.91</cell></row><row><cell>JUST [13] Run 7</cell><cell>.82</cell><cell>.88</cell><cell>.91</cell><cell>.92</cell></row><row><cell>JUST [13] Run 3</cell><cell>.82</cell><cell>.88</cell><cell>.9</cell><cell>.91</cell></row><row><cell>JUST [13] Run 2</cell><cell>.82</cell><cell>.88</cell><cell>.91</cell><cell>.92</cell></row><row><cell>ESPOL [14]</cell><cell>.82</cell><cell>.87</cell><cell>.9</cell><cell>.91</cell></row><row><cell>JUST [13] Run 6</cell><cell>.82</cell><cell>.87</cell><cell>.9</cell><cell>.91</cell></row><row><cell>UCSC [15] Run 4</cell><cell>.82</cell><cell>.87</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UOttawa [16] Run 1</cell><cell>.82</cell><cell>.88</cell><cell>.9</cell><cell>.91</cell></row><row><cell>A*STAR [4] Run 3</cell><cell>.81</cell><cell>.88</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UOttawa [16] Run 2</cell><cell>.81</cell><cell>.87</cell><cell>.89</cell><cell>.91</cell></row><row><cell>JUST [13] Run 8</cell><cell>.81</cell><cell>.87</cell><cell>.89</cell><cell>.91</cell></row><row><cell>UOttawa [16] Run 
3</cell><cell>.81</cell><cell>.86</cell><cell>.88</cell><cell>.9</cell></row><row><cell>UOttawa [16] Run 4</cell><cell>.8</cell><cell>.86</cell><cell>.88</cell><cell>.9</cell></row><row><cell>JUST [13] Run 9</cell><cell>.8</cell><cell>.85</cell><cell>.9</cell><cell>.91</cell></row><row><cell>UOttawa [16] Run 5</cell><cell>.8</cell><cell>.86</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UOttawa [16] Run 6</cell><cell>.8</cell><cell>.87</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UOttawa [16] Run 7</cell><cell>.8</cell><cell>.86</cell><cell>.88</cell><cell>.89</cell></row><row><cell>UOttawa [16] Run 8</cell><cell>.8</cell><cell>.86</cell><cell>.89</cell><cell>.9</cell></row><row><cell>GYRFALCON [11]</cell><cell>.8</cell><cell>.86</cell><cell>.82</cell><cell>.83</cell></row><row><cell>UOttawa [16] Run 9</cell><cell>.79</cell><cell>.85</cell><cell>.88</cell><cell>.9</cell></row><row><cell>UCSC [15] Run 5</cell><cell>.79</cell><cell>.86</cell><cell>.59</cell><cell>.62</cell></row><row><cell>UOttawa [16] Run 10</cell><cell>.79</cell><cell>.85</cell><cell>.88</cell><cell>.89</cell></row><row><cell>UCSC [15] Run 6</cell><cell>.79</cell><cell>.86</cell><cell>.7</cell><cell>.74</cell></row><row><cell>A*STAR [4] Run 2</cell><cell>.78</cell><cell>.83</cell><cell>.89</cell><cell>.9</cell></row><row><cell>A*STAR [4] Run 1</cell><cell>.78</cell><cell>.83</cell><cell>.89</cell><cell>.9</cell></row><row><cell>FRAUNHOFER [3] Run 4</cell><cell>.77</cell><cell>.84</cell><cell>.65</cell><cell>.73</cell></row><row><cell>FRAUNHOFER [3] Run 1</cell><cell>.76</cell><cell>.85</cell><cell>.59</cell><cell>.62</cell></row><row><cell>UCSC [15] Run 7</cell><cell>.76</cell><cell>.85</cell><cell>.89</cell><cell>.9</cell></row><row><cell>UCSC [15] Run 8</cell><cell>.76</cell><cell>.85</cell><cell>.89</cell><cell>.9</cell></row><row><cell>FRAUNHOFER [3] Run 3</cell><cell>.76</cell><cell>.84</cell><cell>.65</cell><cell>.74</cell></row><row><cell>FRAUNHOFER [3] Run 
5</cell><cell>.76</cell><cell>.82</cell><cell>.62</cell><cell>.68</cell></row><row><cell>SKKU [2] Run 1</cell><cell>.76</cell><cell>.84</cell><cell>.87</cell><cell>.88</cell></row><row><cell>SKKU [2] Run 2</cell><cell>.76</cell><cell>.84</cell><cell>.87</cell><cell>.88</cell></row><row><cell>FRAUNHOFER [3] Run 2</cell><cell>.75</cell><cell>.82</cell><cell>.59</cell><cell>.62</cell></row><row><cell>FRAUNHOFER [3] Run 6</cell><cell>.75</cell><cell>.84</cell><cell>.61</cell><cell>.65</cell></row><row><cell>UCSC [15] Run 9</cell><cell>.74</cell><cell>.83</cell><cell>.61</cell><cell>.66</cell></row><row><cell>SKKU [2] Run 4</cell><cell>.39</cell><cell>.49</cell><cell>.87</cell><cell>.88</cell></row><row><cell>SKKU [2] Run 3</cell><cell>.39</cell><cell>.49</cell><cell>.87</cell><cell>.88</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3 :</head><label>3</label><figDesc>Topic-level error Analysis: % of the approaches that failed on predicting Agency and Sociality in different topics.</figDesc><table><row><cell></cell><cell></cell><cell>% Failure</cell><cell>% Failure</cell></row><row><cell>Concept</cell><cell>N</cell><cell>(Agency)</cell><cell>(Sociality)</cell></row><row><cell>Career</cell><cell>2186</cell><cell>27.27</cell><cell>27.27</cell></row><row><cell>Party</cell><cell>839</cell><cell>27.27</cell><cell>36.36</cell></row><row><cell>Education</cell><cell>985</cell><cell>18.18</cell><cell>27.27</cell></row><row><cell>Family</cell><cell>5149</cell><cell>18.18</cell><cell>27.27</cell></row><row><cell>Animals</cell><cell>882</cell><cell>18.18</cell><cell>36.36</cell></row><row><cell>Religion</cell><cell>105</cell><cell>18.18</cell><cell>36.36</cell></row><row><cell cols="2">Conversation 1336</cell><cell>18.18</cell><cell>27.27</cell></row><row><cell>Romance</cell><cell>1325</cell><cell>18.18</cell><cell>27.27</cell></row><row><cell>Weather</cell><cell>251</cell><cell>18.18</cell><cell>18.18</cell></row><row><cell>Vacation</cell><cell>1061</cell><cell>9.09</cell><cell>27.27</cell></row><row><cell cols="2">Entertainment 2284</cell><cell>9.09</cell><cell>27.27</cell></row><row><cell>Food</cell><cell>2402</cell><cell>9.09</cell><cell>18.18</cell></row><row><cell>Shopping</cell><cell>1290</cell><cell>9.09</cell><cell>18.18</cell></row><row><cell>Technology</cell><cell>477</cell><cell>9.09</cell><cell>18.18</cell></row><row><cell>Exercise</cell><cell>842</cell><cell>9.09</cell><cell>18.18</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4:</head><label>4</label><figDesc>Moment-level error analysis: 100% of the best approaches submitted by each of the eleven teams failed at predicting Agency and Sociality in these happy moments.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_0">In the annotation task and the Shared Task, the label names we provided were 'Agency' and 'Social'. We have since renamed 'Social' to 'Sociality' so that both Agency and Sociality can be grammatically consistent.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_1">https://github.com/kj2013/claff-happydb/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_2">DOI:10.7910/DVN/JZAS66; https://goo.gl/3rcZqf</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgement. We thank Dr. Wang-Chiew Tan for her feedback and Megagon Labs for contributing funds towards the CL-Aff dataset.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">HappyDB: A corpus of 100,000 crowdsourced happy moments</title>
		<author>
			<persName><forename type="first">A</forename><surname>Asai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Evensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Golshan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Halevy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lopatenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Suhara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">C</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of LREC 2018. European Language Resources Association (ELRA)</title>
				<meeting>LREC 2018. European Language Resources Association (ELRA)<address><addrLine>Miyazaki, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-05">May 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">[CL-Aff Shared Task] Modeling happiness using one-class autoencoders</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">G</forename><surname>Cheong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Bae</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Affective content classification using convolutional neural networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Claeser</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">What constitutes happiness? Predicting and characterizing the ingredients of happiness using emotion intensity analysis</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhattacharya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Towards a typology of happiness: The CL-Aff annotated dataset of happy moments</title>
		<author>
			<persName><forename type="first">K</forename><surname>Jaidka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Chhaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mumick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Killingsworth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Halevy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ungar</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Measures of personality and social psychological attitudes</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Paulhus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Shaver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Wrightsman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Measures of social psychological attitudes series</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="17" to="59" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Self-presentation of personality</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Paulhus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">D</forename><surname>Trapnell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Handbook of personality psychology</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="492" to="517" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Happy together: Learning and understanding appraisal from natural language</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rajendran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdul-Mageed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Identifying locus of control in social media language</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rouhizadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Jaidka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Buffone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ungar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2018 Conference on Empirical Methods in Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Word pair convolutional model for happy moment classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Saxon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhandari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ruskin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Honda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">[CL-Aff Shared Task] Squared English word: A method of generating glyph to use super characters for sentiment analysis</title>
		<author>
			<persName><forename type="first">B</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Ingredients for happiness: Modeling constructs via semi-supervised content driven inductive transfer learning</title>
		<author>
			<persName><forename type="first">B</forename><surname>Syed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Indurthi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Varma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">IoH-RCNN: Pursuing the ingredients of happiness using recurrent convolutional neural networks</title>
		<author>
			<persName><forename type="first">B</forename><surname>Talafha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Al-Ayyoub</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Neural semi-supervised learning for short-texts</title>
		<author>
			<persName><forename type="first">J</forename><surname>Torres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Vaca</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">CruzAffect at AffCon 2019 shared task: A feature-rich approach to characterize happiness</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Compton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rakshit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Walker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Whittaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">[CL-Aff Shared Task] Happiness ingredients detection using multi-task deep learning</title>
		<author>
			<persName><forename type="first">W</forename><surname>Xin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Inkpen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)</title>
				<meeting>the 2nd Workshop on Affective Content Analysis @ AAAI (AffCon2019)<address><addrLine>Honolulu, Hawaii</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-01">January 2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
