<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">CERTH @ MediaEval 2014 Social Event Detection Task</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Marina</forename><surname>Riga</surname></persName>
							<email>mriga@iti.gr</email>
						</author>
						<author>
							<persName><forename type="first">Georgios</forename><surname>Petkos</surname></persName>
							<email>gpetkos@iti.gr</email>
						</author>
						<author>
							<persName><forename type="first">Symeon</forename><surname>Papadopoulos</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Emmanouil</forename><surname>Schinas</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Yiannis</forename><surname>Kompatsiaris</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Information Technologies Institute</orgName>
								<orgName type="institution" key="instit2">CERTH</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">Charilaou-Thermis</orgName>
								<address>
									<addrLine>th Km</addrLine>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">CERTH @ MediaEval 2014 Social Event Detection Task</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F6C735B8E189FA7EEC6763648CE16F19</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T16:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper describes the participation of CERTH in the Social Event Detection Task of MediaEval 2014. For Challenge 1, we use a "same event model" to construct a graph on which we perform community detection to obtain the final clustering. Importantly, we tune the model to have a higher true positive rate than true negative rate, leading to significantly improved performance. The F1 score and NMI for our best run are 0.9161 and 0.9818, respectively. For Challenge 2, we developed probabilistic language models to classify events according to the criteria of the different queries. Our best run on Challenge 2 achieved an average F-score of 0.4604.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>The paper presents the approaches developed by CERTH for the two Challenges of the MediaEval 2014 Social Event Detection (SED) task. Challenge 1 asks for a full clustering of a collection of Flickr images, so that each cluster corresponds to a social event. Challenge 2 examines a retrieval scenario in which, given a set of social events, the goal is to determine those events that match particular criteria. More details about the task can be found in <ref type="bibr" target="#b3">[3]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">PROPOSED APPROACH</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Overview of method in Challenge 1</head><p>Our approach for Challenge 1 utilizes what is termed the Same Event Model (SEM) <ref type="bibr" target="#b2">[2]</ref>. The SEM takes as input the set of per-modality similarities between two items and predicts how likely it is that the two items belong to the same event. Subsequently, a graph is constructed in which the nodes represent the images to be clustered and an edge between a pair of nodes denotes a positive prediction of the SEM for the two respective images. Finally, a community detection algorithm is applied to the graph to obtain a full clustering. Moreover, in order to limit the number of SEM evaluations and make the approach scalable, we deploy a candidate neighbour selection step: for each image we use appropriate indices to obtain the most similar images according to each modality and evaluate the SEM only for those. This technique is commonly referred to as blocking. The overall approach is similar to that of <ref type="bibr" target="#b5">[5]</ref> and to the one we deployed in last year's task <ref type="bibr" target="#b6">[6]</ref>. Importantly though, we introduce a tweak that improves performance significantly. The key idea is that false positive and false negative predictions of the SEM are not equally important. More specifically, the average size of an event in the training set is roughly 20 images. In practice though, the set of candidate neighbours needs to be considerably larger than this average; for instance, in our experiments we used at most 500 candidate neighbours.</p><p>The primary reasons for this are that a) the distribution of event sizes is much wider than the average suggests, and b) in large datasets one needs to consider a larger number of candidate neighbours in order to be confident that the actual neighbours of an image appear in its candidate set. Therefore, since the number of candidate neighbours will be much larger than the number of actual neighbours, and assuming that the classifier has been trained to achieve similar true positive and true negative rates, we can expect the SEM to produce significantly more false positive than false negative predictions. Too many false positive predictions are likely to result in many merged clusters, as they create incorrect edges in the graph. If, on the other hand, we opt for a higher true positive rate at the cost of a lower true negative rate (by increasing the classification threshold), we will have far fewer incorrectly merged clusters, but also some fragmented ones. The way to deal with this problem is to enlarge the set of candidate neighbours. In our experiments, we observed that when increasing the threshold so that the true positive rate is 0.9999, the true negative rate does not drop below 0.95, which in practice appears sufficient for our purpose.</p><note place="foot">Copyright is held by the author/owner(s). MediaEval 2014 Workshop, October 16-17, 2014, Barcelona, Spain.</note></div>
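The graph-based clustering step can be illustrated with a short Python sketch: pairs whose same-event probability exceeds a raised threshold are linked, and clusters are then read off the graph. All names here (`cluster_images`, `same_event_prob`) are hypothetical, and connected components stand in for the community detection algorithm used in the actual system.

```python
from itertools import combinations

def cluster_images(images, same_event_prob, threshold=0.995):
    """Link images whose same-event probability exceeds the threshold,
    then return the connected components of the resulting graph
    (a simplified stand-in for the community detection step)."""
    parent = {i: i for i in images}

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(images, 2):
        if same_event_prob(a, b) >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for i in images:
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy SEM: images "belong together" if they share a label prefix.
prob = lambda a, b: 1.0 if a[0] == b[0] else 0.0
print(cluster_images(["a1", "a2", "b1"], prob))  # → [['a1', 'a2'], ['b1']]
```

Raising `threshold` trades merged clusters for fragmented ones, which is why the candidate-neighbour set must grow in tandem, as discussed above.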
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Overview of method in Challenge 2</head><p>In Challenge 2, we utilize regularized unigram language models <ref type="bibr" target="#b1">[1]</ref> to classify clusters (or individual images in Run 5, as explained later) according to the given retrieval criteria (location, type of event, entities involved). To learn the language models for the event types and entities of interest, we collected sets of images from Flickr using the relevant keywords that appear in the queries. Moreover, we retrieved an additional random collection of images in order to learn a general language model that does not focus on any particular event type or entity, against which the type- or entity-specific language models are compared. For some cluster (or image) i, the comparison is performed by computing the ratio of the probability given by the specific language model, p_specific(i), over the probability given by the general language model, p_general(i); if the ratio is above some threshold θ, we label the event (or image) as matching the examined criterion. In a second variation, we utilize a language model that has been trained on both the type- and entity-specific datasets and the general dataset, and compute the ratio p_specific,general(i)/p_general(i). For inferring location we adopted the per grid-cell language model based approach of <ref type="bibr" target="#b4">[4]</ref>. It should be noted, though, that for clusters that contain geotagged images we do not use the language models, but rather use the explicit coordinates to estimate the location.</p></div>
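The ratio test can be sketched in Python with smoothed unigram models. Add-alpha smoothing is used here as one simple form of regularization, not necessarily the one used in the actual runs, and all names (`unigram_logprob`, `matches_criterion`) are illustrative; the threshold is applied in log space, where θ = 0 corresponds to a probability ratio of 1.

```python
import math
from collections import Counter

def unigram_logprob(tokens, counts, total, vocab_size, alpha=1.0):
    """Log-probability of the tokens under an add-alpha smoothed unigram model."""
    return sum(
        math.log((counts.get(t, 0) + alpha) / (total + alpha * vocab_size))
        for t in tokens
    )

def matches_criterion(tokens, specific, general, vocab_size, theta=0.0):
    """Accept if log p_specific(tokens) - log p_general(tokens) > theta."""
    ls = unigram_logprob(tokens, specific, sum(specific.values()), vocab_size)
    lg = unigram_logprob(tokens, general, sum(general.values()), vocab_size)
    return ls - lg > theta

# Toy training data: keyword-specific vs. general Flickr vocabulary.
specific = Counter("rock concert live stage music".split())
general = Counter("city photo day trip park music".split())
vocab = len(set(specific) | set(general))
print(matches_criterion(["concert", "stage"], specific, general, vocab))  # → True
```

The second variation described above would simply swap in a model trained on the union of the specific and general datasets as the numerator.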
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">EXPERIMENTS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Runs description in Challenge 1</head><p>In all runs of Challenge 1 we utilized an SVM classifier to learn the SEM. The following features were used to compute the input to the SEM for a pair of images: user (1 if both images have been uploaded by the same user, 0 otherwise), textual (title, tags and description; similarity computed using BM25 and cosine), taken and upload time, spatial information (if available) and visual information (SURF descriptors aggregated using a VLAD scheme <ref type="bibr" target="#b8">[8]</ref>, as well as features extracted using Overfeat <ref type="bibr" target="#b7">[7]</ref>, a popular convolutional network; similarity for both is computed using Euclidean distance). In Run 1 we apply our basic approach without any visual features, and we take the predictions of the SEM as they are, i.e. we do not change the classification threshold. In Run 2 we only add the visual features. In Run 3 we use the probabilities provided by the SVM classifier and set the threshold to 0.995, achieving the true positive and true negative rates mentioned earlier. In Run 4 we attempt to improve the results by expanding the set of candidate neighbours: after the graph has been constructed by predicting the SEM output for each image's candidate neighbours, we add to the candidate neighbours of each image the neighbours of its actual neighbours and predict the output of the SEM for them as well. In Run 5 we do not use blocking and compute the output of the SEM for all pairs of images.</p></div>
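The candidate-neighbour expansion of Run 4 can be sketched as follows. `expand_candidates` is a hypothetical helper that, given the accepted SEM links as an adjacency map, proposes each image's neighbours-of-neighbours as additional pairs to score with the SEM.

```python
def expand_candidates(neighbours):
    """Given accepted SEM links (image -> set of linked images), propose
    each image's neighbours-of-neighbours as extra candidates to score."""
    extra = {}
    for img, nbrs in neighbours.items():
        second_hop = set()
        for n in nbrs:
            second_hop |= neighbours.get(n, set())
        # Exclude already-linked images and the image itself.
        extra[img] = second_hop - nbrs - {img}
    return extra

graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(expand_candidates(graph))  # → {'a': {'c'}, 'b': set(), 'c': {'a'}}
```

Only the newly proposed pairs need a further SEM evaluation, so the expansion adds a bounded amount of extra work on top of the blocked first pass.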
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Runs description in Challenge 2</head><p>In Run 1 of Challenge 2 we perform the classification by computing the ratio p_specific(i)/p_general(i) and setting the threshold θ to 1. In Run 2 we perform the classification by computing the ratio p_specific,general(i)/p_general(i), again setting the threshold to 1. In Run 3 and Run 4 we use the models of Run 2 and Run 1 respectively, but with different threshold values per query; each threshold is selected according to the evaluation results of the methodology on the corresponding development queries. For queries Test-9 and Test-10, for which there are no analogous development queries, we used the maximum threshold from the other queries. In Runs 1 to 4 we perform classification per event, that is, we aggregate all images of an event and then perform the classification. In Run 5, on the other hand, we perform classification per item and then aggregate the decisions by majority vote; the same language models and threshold values as in Run 3 are used.</p></div>
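The per-item aggregation of Run 5 amounts to a majority vote over per-photo decisions, as in this sketch; `classify_event` and `classify_photo` are illustrative names, not the actual implementation.

```python
def classify_event(photos, classify_photo):
    """Run-5 style aggregation: classify each photo independently, then
    label the event by majority vote over the per-photo decisions."""
    votes = [classify_photo(p) for p in photos]
    return sum(votes) > len(votes) / 2

# Toy per-photo classifier: a photo matches if its text mentions "concert".
clf = lambda p: "concert" in p
print(classify_event(["concert hall", "concert crowd", "street"], clf))  # → True
```

Runs 1 to 4 instead pool all of an event's text before a single classification, which, as the results below show, turned out to be the stronger strategy.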
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">RESULTS AND DISCUSSION</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Challenge 1</head><p>Table <ref type="table" target="#tab_0">1</ref> shows the scores we achieved in Challenge 1. The main thing to note is that Runs 3, 4 and 5, which use the modified classification threshold, show a very clear improvement over Runs 1 and 2, which do not. Moreover, it appears that appropriately expanding the candidate neighbours (Run 4 over Run 3) also provides a significant improvement. Additionally, Run 5, which does not use blocking, improves further over Run 4, but the improvement is very small. All in all, strong blocking is useful for making the method more scalable, but can lead to somewhat decreased performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Challenge 2</head><p>Table <ref type="table" target="#tab_0">1</ref> shows the average scores that we achieved over all 10 queries of Challenge 2. We note that Run 3 and Run 4 give the best average scores, meaning that the selected threshold has a significant influence on the accuracy of the classification results: performance on the test queries improves when the threshold value is first calibrated on the development queries. Moreover, classifying an event by treating all photos of a cluster uniformly performs better than classifying each photo individually. It should also be mentioned that when considering only the queries that include location criteria, performance is significantly higher; in particular, for those queries we achieve an F-score of 0.6331 in Run 4.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1:</head><label>1</label><figDesc>Scores achieved in the two Challenges</figDesc><table><row><cell/><cell cols="3">Challenge 1</cell><cell cols="3">Challenge 2, average scores</cell><cell cols="10">Challenge 2, F1 per query</cell></row><row><cell>Run</cell><cell>F1</cell><cell>NMI</cell><cell>Div.</cell><cell>Recall</cell><cell>Precision</cell><cell>F1</cell><cell>1</cell><cell>2</cell><cell>3</cell><cell>4</cell><cell>5</cell><cell>6</cell><cell>7</cell><cell>8</cell><cell>9</cell><cell>10</cell></row><row><cell>1</cell><cell>0.4514</cell><cell>0.7594</cell><cell>0.4498</cell><cell>0.6101</cell><cell>0.3458</cell><cell>0.3431</cell><cell>0.6207</cell><cell>0.6588</cell><cell>0.2137</cell><cell>0.2694</cell><cell>0.8193</cell><cell>0.1524</cell><cell>0.4578</cell><cell>0.0868</cell><cell>0.1375</cell><cell>0.0145</cell></row><row><cell>2</cell><cell>0.4515</cell><cell>0.7592</cell><cell>0.4498</cell><cell>0.7505</cell><cell>0.2669</cell><cell>0.2723</cell><cell>0.6505</cell><cell>0.6744</cell><cell>0.0338</cell><cell>0.2671</cell><cell>0.5965</cell><cell>0.1214</cell><cell>0.2774</cell><cell>0.0141</cell><cell>0.0748</cell><cell>0.0126</cell></row><row><cell>3</cell><cell>0.8312</cell><cell>0.9627</cell><cell>0.8304</cell><cell>0.5556</cell><cell>0.4120</cell><cell>0.4043</cell><cell>0.6505</cell><cell>0.6744</cell><cell>0.0338</cell><cell>0.4568</cell><cell>0.9444</cell><cell>0.2143</cell><cell>0.4211</cell><cell>0.4902</cell><cell>0.1311</cell><cell>0.0266</cell></row><row><cell>4</cell><cell>0.9133</cell><cell>0.9808</cell><cell>0.9124</cell><cell>0.3915</cell><cell>0.7080</cell><cell>0.4604</cell><cell>0.6207</cell><cell>0.6588</cell><cell>0.4828</cell><cell>0.2500</cell><cell>0.8947</cell><cell>0.3529</cell><cell>0.6383</cell><cell>0.4324</cell><cell>0.2189</cell><cell>0.0543</cell></row><row><cell>5</cell><cell>0.9161</cell><cell>0.9818</cell><cell>0.9152</cell><cell>0.3798</cell><cell>0.3569</cell><cell>0.2806</cell><cell>0.5828</cell><cell>0.5195</cell><cell>0.0406</cell><cell>0.3136</cell><cell>0.9444</cell><cell>0.1405</cell><cell>0.1538</cell><cell>0.0000</cell><cell>0.0874</cell><cell>0.0229</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">ACKNOWLEDGMENTS</head><p>The work was supported by the European Commission under contract FP7-287975 SocialSensor.</p></div>
			</div>

			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Speech and Language Processing</title>
		<author>
			<persName><forename type="first">D</forename><surname>Jurafsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Martin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<publisher>Prentice Hall PTR</publisher>
			<pubPlace>Upper Saddle River, NJ, USA</pubPlace>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Social event detection using multimodal clustering and integrating supervisory signals</title>
		<author>
			<persName><forename type="first">G</forename><surname>Petkos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kompatsiaris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of ICMR</title>
				<meeting>of ICMR</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Social event detection at MediaEval 2014: Challenges, datasets, and evaluation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Petkos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Mezaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kompatsiaris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the MediaEval 2014 Multimedia Benchmark Workshop</title>
				<meeting>the MediaEval 2014 Multimedia Benchmark Workshop</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">CEA list&apos;s participation at MediaEval 2013 Placing Task</title>
		<author>
			<persName><forename type="first">A</forename><surname>Popescu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop</title>
				<meeting>the MediaEval 2013 Multimedia Benchmark Workshop</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Event-based classification of social media streams</title>
		<author>
			<persName><forename type="first">T</forename><surname>Reuter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cimiano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of ICMR</title>
				<meeting>ICMR</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">CERTH @ MediaEval 2013 Social Event Detection Task</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schinas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mantziou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Petkos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kompatsiaris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop</title>
				<meeting>the MediaEval 2013 Multimedia Benchmark Workshop</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Overfeat: Integrated recognition, localization and detection using convolutional networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Sermanet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Eigen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mathieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<idno>CoRR, abs/1312.6229</idno>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">An empirical study on the combination of SURF features with VLAD vectors for image search</title>
		<author>
			<persName><forename type="first">E</forename><surname>Spyromitros-Xioufis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kompatsiaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vlahavas</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>WIAMIS</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
