<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The CREENDER Tool for Creating Multimodal Datasets of Images and Comments</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessio</forename><surname>Palmero Aprosio</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Fondazione Bruno Kessler Trento</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stefano</forename><surname>Menini</surname></persName>
							<email>menini@fbk.eu</email>
							<affiliation key="aff0">
								<orgName type="institution">Fondazione Bruno Kessler Trento</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sara</forename><surname>Tonelli</surname></persName>
							<email>satonelli@fbk.eu</email>
							<affiliation key="aff0">
								<orgName type="institution">Fondazione Bruno Kessler Trento</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The CREENDER Tool for Creating Multimodal Datasets of Images and Comments</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9ABB2140001D057546837741FFB8E859</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T15:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>English. While text-only datasets are widely produced and used for research purposes, limitations set by image-based social media platforms like Instagram make it difficult for researchers to experiment with multimodal data. We therefore developed CREENDER, an annotation tool to create multimodal datasets with images associated with semantic tags and comments, which we make freely available under Apache 2.0 license. The software has been extensively tested with school classes, allowing us to improve the tool and add useful features not planned in the first development phase.</p><p>Italiano. Mentre i dataset testuali sono ampiamenti creati e usati per scopi di ricerca, le limitazioni imposte dai social media basati sulle immagini (come Instagram) rendono difficile per i ricercatori sperimentare con dati multimodali. Abbiamo quindi sviluppato CREENDER, un tool di annotazione per la creazione di dataset multimodali in cui immagini vengono associate a etichette semantiche e commenti, e che abbiamo reso disponibile gratuitamente con la licenza Apache 2.0. Il software è stato testato in un laboratorio con alcune classi scolastiche, permettendoci di ottimizzare alcune procedure e di aggiungere feature non previste nella prima release.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>In recent years, the NLP community has started to focus on the challenges of combining vision and language technologies, proposing approaches towards multimodal data processing <ref type="bibr" target="#b0">(Belz et al., 2016;</ref><ref type="bibr" target="#b1">Belz et al., 2017)</ref>. This has led to an increasing need for multimodal datasets with high-quality information to be used for training and evaluating the developed systems. While several datasets have been created by downloading, and often adding textual annotation to, real online data (see for example the Flickr dataset<ref type="foot" target="#foot_2">2</ref>), this poses privacy and copyright issues, since downloading and using pictures posted online without the author's consent is often forbidden by social network privacy policies. Instagram's terms of use, for example, explicitly forbid collecting information in an automated way without express permission from the platform.<ref type="foot" target="#foot_3">3</ref> In order to address this issue, we present CREENDER, a novel annotation tool to create multimodal datasets of images and comments. With this tool it is possible to simulate a scenario where different users access the platform and are shown different pictures, with the possibility to leave a comment and associate a semantic tag with each image. The same pictures can be shown to different users, allowing a comparison of their comments and online behaviour.</p><p>CREENDER can be used in contexts where simulated scenarios are the only solution to collect datasets of interest. One typical example, which we detail in Section 4, is the analysis of the online behaviour of teenagers and young adults, a task that poses relevant privacy issues since underage users are targeted. 
The possibility to comment on images in an Instagram-like setting without providing any personal information at registration is indeed of paramount importance, and can be easily achieved with the tool presented in this paper.</p><p>Given its flexibility, however, CREENDER can be used for any task where images need to be tagged and/or commented on, preferably with multiple annotations collected for the same image.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>Several tools have been developed to annotate images with different types of information. Most of them are designed to run only on a desktop computer and are meant for selecting parts of a picture to assign them a semantic tag or a description, so that the resulting corpora can be used to train or evaluate image recognition or captioning software. In this scenario, users often need to be trained to use the annotation tool, which requires time that is usually not available in specific settings like schools <ref type="bibr" target="#b11">(Russell et al., 2008)</ref>. Other tools for image annotation or captioning are web-based, like CREENDER, but the software is not available for download and must be used as a service. This paradigm can lead to privacy issues, as the data are not stored locally or on a server owned by the researchers <ref type="bibr" target="#b2">(Chapman et al., 2012)</ref>. This can be problematic when the pictures to be annotated are copyright-protected, or when the users involved in the data collection do not want to or cannot create an account with personal information. Finally, some software is not distributed as open source, and may suddenly become unavailable or unusable when no longer maintained <ref type="bibr" target="#b4">(Halaschek-Wiener et al., 2005;</ref><ref type="bibr" target="#b5">Hughes et al., 2018)</ref>.</p><p>Regarding the datasets, Mogadala et al. (<ref type="bibr" target="#b9">2019</ref>) focus on prominent tasks that integrate language and vision by discussing their problem formulations, methods, existing datasets, and evaluation measures, comparing the results obtained with different state-of-the-art methods. 
Ethical and legal issues in the use of pictures and texts taken from social networks are also relevant, as discussed in <ref type="bibr" target="#b6">(Lyons, 2020;</ref><ref type="bibr" target="#b10">Prabhu and Birhane, 2020;</ref><ref type="bibr" target="#b3">Fiesler and Proferes, 2018)</ref>. Our tool has been developed specifically to address these issues as well, preserving users' privacy and avoiding the collection of real data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Annotation Tool</head><p>The CREENDER tool can be accessed both from a desktop browser and from a mobile phone, so that users can work even when no computer connected to the Internet is available. The web interface is multilingual: English, French and Italian are already included, and other language files can be added as needed. The interface language can be assigned at user level, meaning that users on the same instance can see the interface in different languages.</p><p>Once the tool is installed on a server, a super user is created who, with the password chosen during installation, can access the administration interface where projects are managed (see Figure <ref type="figure">2</ref>).</p><p>For each project, on the configuration side, a set of photos (or a set of external links to images on the web) needs to be given to the tool. Then, one can set the number of users and the number of annotations that are required for each photo. Finally, the system assigns the photos to the users and creates the login information for them. Social login is also supported (only Google for now), so that there is no need to distribute usernames and passwords: the administrator chooses a five-digit code and gives it to every annotator, who can then log in using the code and his/her social account.</p><p>Given a picture, the system can be set to perform three actions in sequence or in isolation, as needed by the task: i) the picture can be skipped by the user, so that no annotation is stored and the next one is displayed; ii) the user can insert free text associated with the image, which can be used to write a caption, comment on the picture, list the contained objects, etc.; iii) one or more pre-defined categories can be assigned to the picture. Categories can range from specific ones related to the portrayed objects (e.g. male, female, animals, etc.) 
to more abstract ones, like for example the emotions provoked by looking at the picture.</p><p>In the configuration screen, the administrator can edit the prompted questions and the possible answers, so that the tool can be used for a variety of different tasks.</p><p>Using the administration web interface, it is also possible to monitor the task with information about the number of annotations that each user has performed. This makes it possible to check whether some users experience difficulties in the annotation, or whether some annotators are anomalously fast (for example by skipping too many images). Once the annotation session is closed, the administrator can download the resulting corpus containing the images and the associated information. The export is available in three formats: SQL database, CSV, and JSON.</p><p>The CREENDER tool was used to collect abusive comments associated with images, simulating an Instagram-like setting in which pictures and text together build an interaction that may become offensive. The data collection was carried out in several classes of Italian teenagers aged between 15 and 18, in the framework of a collaboration with schools aimed at increasing awareness of social media and cyberbullying phenomena <ref type="bibr" target="#b7">(Menini et al., 2019)</ref>. The data collection was embedded in a larger process that required two to three meetings with each class, one per week, each time involving two social scientists, two computational linguists and at least two teachers. During these meetings several activities were carried out with the students, including simulating a WhatsApp conversation around a given plot as described in <ref type="bibr" target="#b12">(Sprugnoli et al., 2018)</ref>, commenting on existing social media posts, and annotating images as described in this paper.</p><p>Overall, 95 students were involved in the annotation. 
The sessions were organised so that different school classes annotated the same set of images, in order to collect multiple annotations of the same pictures. The pictures were retrieved from online sources and then manually checked by the researchers involved in the study to remove pornographic content. In the preparatory phase, the filtered pictures were uploaded to the CREENDER image folder. Then, a login and password were created for each student involved in the data collection and printed on paper, so that credentials could be handed out before an annotation session without any possibility of associating login information with a student's identity. CREENDER was configured to first take a random picture from the image folder and display it to the user with a prompt asking "If you saw this picture on Instagram, would you make fun of the user who posted it?". If the user selects "No", the system picks another image at random and the same question is asked. If the user clicks on "Yes", a second screen opens where the user is asked to specify the reason why the image would trigger such a reaction by selecting one of the following categories: "Body", "Clothing", "Pose", "Facial expression", "Location", "Activity" and "Other". Two screenshots of the interface are displayed in Figure <ref type="figure" target="#fig_0">1</ref>. The user should also write the textual comment s/he would post below the picture. After that, the next picture is displayed, and so on. A screenshot of the tool configured for this specific task is displayed in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>At the end of the activities with the schools, all collected data were exported. The final corpus includes 17,912 images, 1,018 of which have at least one associated comment, as well as a trigger category (e.g. 
facial expression, pose) and the category of the subject/s (female, male, mixed or nobody). The number of annotations for each picture varies between 1 and 4. A more detailed description of the corpus is reported in <ref type="bibr" target="#b8">(Menini et al., 2021)</ref>.</p><p>The use of CREENDER allowed a seamless and very fast data collection, without the need to send images to each student, to exchange or merge files, or to install specific applications. On the other hand, the data collection with students, who used the online platform in class while researchers were physically present and could check the flow of the interaction, was useful to improve the tool. Some bug fixes and small improvements were indeed implemented after the first sessions. For example, a small delay (2 seconds) was added between the moment the image is displayed and the moment the Yes/No buttons appear, so that users are more likely to look at the picture before deciding whether to skip it.</p></div>
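The assignment step described in Section 3 (each photo distributed so that it receives the required number of annotations from distinct users) can be sketched as follows. This is an illustrative Python sketch under our own assumptions, not CREENDER's actual PHP implementation; the function name and signature are hypothetical.

```python
import itertools

def assign_images(images, users, annotations_per_image):
    """Hypothetical round-robin sketch: give each image to
    annotations_per_image distinct users, spreading the load
    evenly across users. Not CREENDER's actual code."""
    assignment = {user: [] for user in users}
    pool = itertools.cycle(users)          # endless rotation over users
    per_image = min(annotations_per_image, len(users))
    for image in images:
        chosen = set()
        # keep drawing from the rotation until enough distinct users
        while len(chosen) != per_image:
            chosen.add(next(pool))
        for user in chosen:
            assignment[user].append(image)
    return assignment
```

For instance, with 4 images, 3 users and 2 annotations per image, every image ends up in exactly two users' queues, and the total number of image-user pairs is 8.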
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Release</head><p>The software is distributed as an open source package<ref type="foot" target="#foot_4">4</ref> and is released under the Apache license (version 2.0). The API (backend) is written in PHP and relies on a MySQL database. The web interface (frontend) is developed in HTML/CSS/JavaScript using the Bootstrap and Vue.js frameworks.</p><p>The interface is responsive, so that it can be used from any device that can display web pages (desktop computers, smartphones, tablets).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusions</head><p>In this work we present a methodology and a tool, CREENDER, to create multimodal datasets. In this framework, participants in online annotation sessions can write comments on images, assign pre-defined categories to them, or simply skip an image. The tool is freely available with an interface in three languages, and makes it easy to set up annotation sessions with multiple users.</p><p>CREENDER has been extensively tested during activities with schools around the topic of cyberbullying, involving 95 Italian high-school students. The tool is particularly suitable for this kind of setting, where privacy issues are of paramount importance and the involvement of underage people requires that personal information is not shared.</p><p>In the future, we plan to continue the annotation of images related to cyberbullying, creating and comparing subsets of pictures related to different topics (e.g. religious symbols, political parties, football teams). From an implementation point of view, we will extend the analytics panel, adding for example scripts for computing inter-annotator agreement.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: CREENDER interface configured for the collection of potentially offensive comments</figDesc><graphic coords="3,142.87,233.54,311.80,169.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The administration interface to define the number of users and the images per user</figDesc><graphic coords="4,142.87,62.81,311.82,211.12" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">https://yahooresearch. tumblr.com/post/89783581601/ one-hundred-million-creative-commons-flickr-images</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">See, for example, https://help.instagram. com/581066165581870.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_4">https://github.com/dhfbk/creender</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Part of this work has been funded by the KID ACTIONS REC-AG project (n. 101005518) on "Kick-off preventIng and responDing to children and AdolesCenT cyberbullyIng through innovative mOnitoring and educatioNal technologieS". In addition, the authors want to thank all the students and teachers who participated in the experimentation.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Anya</forename><surname>Belz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Erkut</forename><surname>Erdem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krystian</forename><surname>Mikolajczyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Katerina</forename><surname>Pastra</surname></persName>
		</author>
		<title level="m">Proceedings of the 5th Workshop on Vision and Language</title>
				<meeting>the 5th Workshop on Vision and Language<address><addrLine>Berlin, Germany</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2016-08">August 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m">Proceedings of the Sixth Workshop on Vision and Language</title>
				<editor>
			<persName><forename type="first">Anya</forename><surname>Belz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Erkut</forename><surname>Erdem</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Katerina</forename><surname>Pastra</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Krystian</forename><surname>Mikolajczyk</surname></persName>
		</editor>
		<meeting>the Sixth Workshop on Vision and Language<address><addrLine>Valencia, Spain</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2017-04">April 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Annio: a web-based tool for annotating medical images with ontologies</title>
		<author>
			<persName><forename type="first">Brian</forename><forename type="middle">E</forename><surname>Chapman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mona</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Claudiu</forename><surname>Farcas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Reynolds</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="147" to="147" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">&quot;Participant&quot; perceptions of Twitter research ethics</title>
		<author>
			<persName><forename type="first">Casey</forename><surname>Fiesler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nicholas</forename><surname>Proferes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Social Media + Society</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">2056305118763366</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Photostuff-an image annotation tool for the semantic web</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Halaschek-Wiener</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jennifer</forename><surname>Golbeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Schain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Grove</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bijan</forename><surname>Parsia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jim</forename><surname>Hendler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th international semantic web conference</title>
				<meeting>the 4th international semantic web conference</meeting>
		<imprint>
			<publisher>Citeseer</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="6" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Quanti.us: a tool for rapid, flexible, crowd-based annotation of images</title>
		<author>
			<persName><forename type="first">Alex</forename><forename type="middle">J</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">D</forename><surname>Mornin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sujoy</forename><forename type="middle">K</forename><surname>Biswas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lauren</forename><forename type="middle">E</forename><surname>Beck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">P</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Arjun</forename><surname>Raj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simone</forename><surname>Bianco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zev</forename><forename type="middle">J</forename><surname>Gartner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature methods</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="587" to="590" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Excavating &quot;Excavating AI&quot;</title>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">J</forename><surname>Lyons</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2009.01215</idno>
	</analytic>
	<monogr>
		<title level="m">The elephant in the gallery</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A system to monitor cyberbullying based on message classification and social network analysis</title>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Menini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Moretti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michele</forename><surname>Corazza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elena</forename><surname>Cabrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sara</forename><surname>Tonelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Serena</forename><surname>Villata</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Abusive Language Online</title>
				<meeting>the Third Workshop on Abusive Language Online</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="105" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A multimodal dataset of images and text to study abusive language</title>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Menini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alessio</forename><surname>Palmero Aprosio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sara</forename><surname>Tonelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">7th Italian Conference on Computational Linguistics</title>
				<meeting><address><addrLine>CLiC-it</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">Aditya</forename><surname>Mogadala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marimuthu</forename><surname>Kalimuthu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dietrich</forename><surname>Klakow</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1907.09358</idno>
		<title level="m">Trends in integration of vision and language research: A survey of tasks, datasets, and methods</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Large image datasets: A pyrrhic win for computer vision?</title>
		<author>
			<persName><forename type="first">Vinay</forename><forename type="middle">Uday</forename><surname>Prabhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Abeba</forename><surname>Birhane</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">LabelMe: a database and web-based tool for image annotation</title>
		<author>
			<persName><forename type="first">Bryan</forename><forename type="middle">C</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Antonio</forename><surname>Torralba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kevin</forename><forename type="middle">P</forename><surname>Murphy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">William</forename><forename type="middle">T</forename><surname>Freeman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International journal of computer vision</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="issue">1-3</biblScope>
			<biblScope unit="page" from="157" to="173" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Creating a WhatsApp Dataset to Study Pre-teen Cyberbullying</title>
		<author>
			<persName><forename type="first">Rachele</forename><surname>Sprugnoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Menini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sara</forename><surname>Tonelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Filippo</forename><surname>Oncini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Enrico</forename><surname>Piras</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)</title>
				<meeting>the 2nd Workshop on Abusive Language Online (ALW2)</meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="51" to="59" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
