<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Interaction Design for the Exchange of Media Organized in Terms of Complex Events</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Anthony</forename><surname>Jameson</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">DFKI</orgName>
								<orgName type="institution">German Research Institute for Artificial Intelligence Saarbrücken</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sven</forename><surname>Buschbeck</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">DFKI</orgName>
								<orgName type="institution">German Research Institute for Artificial Intelligence Saarbrücken</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Interaction Design for the Exchange of Media Organized in Terms of Complex Events</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">0DE422A2175049331C258AD267DADCF0</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T03:35+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Even the most sophisticated automatic recognition of events must often be paired with an appropriate design of the users' interaction with those events. This paper presents three presumably typical use cases and associated interaction design proposals, which illustrate (a) how untrained users can benefit from the organization of media in terms of complex events; (b) how they can have their own media categorized in this way without having to invest much effort; and (c) how they can even create complex event instances with novel structures, without having to think explicitly about event structures.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>As will be shown by many of the papers that will be presented at the EVENTS 2010 workshop, the automatic identification and processing of events raises many technical challenges. But even before solutions to these problems have been found, we have to consider exactly how people might interact with systems that make use of representations of events. Having a clear idea of use cases, scenarios, and interaction designs can help us to see which technical problems are most important and what requirements need to be met.</p><p>This workshop paper considers how the recognition and representation of events can enhance interaction in a particular type of system: a media marketplace in which professional and amateur users contribute and exchange various types of media, most typically photos and videos (but also other types, such as audio files and text documents). One underlying idea is that it is often helpful for such media to be indexed and organized in terms of events that they depict or describe, in addition to more familiar indexing on the basis of time, location, tags, and named entities (such as people).</p><p>More specifically, we consider how interaction in such a marketplace can be enhanced if not only atomic events but also complex events are represented: Such an event may extend over a considerable period of time and consist of subevents, some of which in turn may be complex events. A simple example of a complex event is a soccer tournament, which comprises two or more rounds and a number of games, each of which can in turn be viewed as a complex event. We will present several scenarios and interaction designs that should help to stimulate thought on the following questions:</p><p>1. How could users benefit from the representation in the system of complex events, as opposed to having only simple events represented? 2. 
How can a user and a system collaborate to build up and maintain a representation of complex events, without any requirement for users to invest more than a minimal amount of effort?</p><p>This work is being done in the context of the integrating project GLOCAL.<ref type="foot" target="#foot_0">1</ref></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Why Do We Need Complex Events?</head><p>Suppose you are an (amateur or professional) photographer or journalist who wants to share, buy, or sell media about the first half of the final game of the 2008 European Cup soccer tournament. Media concerning this event can be found on a number of media exchange sites, including Flickr. 2 Citizenside.com 3 is an example of a site that specifically supports the selling of media by amateur photographers to professional organizations, such as news agencies. Although this site organizes and indexes media in quite sophisticated ways, you would run into difficulty if you wanted to think in terms of parts of particular tournaments: The site does not organize media in terms of complex events like tournaments.</p><p>In the Sport Photo Gallery site, 4 which is dedicated to sports photos (Figure <ref type="figure" target="#fig_0">1</ref>), you can find the "Event" Euro 2008, but the media about it are indexed only in terms of players and teams, not parts of the tournament.</p><p>It may help to look at this absence of complex events in terms of an analogy: The way in which photos and videos can be embedded in a Google Map (say, of Athens) shows that it is feasible and useful to organize media in terms of a large, coherent structure-in this case, the map of a city. But suppose that some of these media concern events at a conference-for example, a talk in a session of EVENTS 2010, which is in turn a subevent of SETN 2010. Google Maps can show the conference building, but it has no way of representing the additional dimension: the structure of the "conference event".</p></div>
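<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the notion of a complex event concrete, the nested structure discussed above (a tournament comprising rounds, games, and game halves, each with attached media) can be sketched as a simple tree. The following Python fragment is purely illustrative and is not part of any existing GLOCAL implementation; the class design and all media identifiers in it are hypothetical.</p><p>
```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A (possibly complex) event: subevents can nest to any depth."""
    name: str
    subevents: list = field(default_factory=list)
    media: list = field(default_factory=list)  # identifiers of attached photos/videos

    def find(self, name):
        """Depth-first search for an event or subevent by name."""
        if self.name == name:
            return self
        for sub in self.subevents:
            hit = sub.find(name)
            if hit is not None:
                return hit
        return None

# Hypothetical partial instantiation of the Euro 2008 example:
euro2008 = Event("Euro 2008", subevents=[
    Event("Group Stage"),
    Event("Knockout Stage", subevents=[
        Event("Final",
              subevents=[Event("First Half", media=["photo_017.jpg"]),
                         Event("Second Half")],
              media=["full_game.mp4"]),  # a video covering the whole game
    ]),
])
```
</p><p>Navigating down to the media of the "first half of the final game", as in the use cases below, then amounts to a traversal such as euro2008.find("First Half").</p></div>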
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Use Case A: Navigating Via Event Structures</head><p>Suppose now that we have a media marketplace that includes:</p><p>structures for complex events; media attached to particular events. (We will discuss in below how the structures and the media will get into the system.)</p><p>Then a user can:</p><p>-1. . . . find a complex event with some combination of keyword search, use of a map and a calendar, and/or providing an example medium about that event; Although finding an optimal interaction design for this sort of event search is an interesting challenge, it is not very difficult to find an acceptable solution, so we do not provide any concrete examples in this paper. -2. . . . navigate down the hierarchical structure of the complex event to find the part that they are interested in. One way of allowing this sort of navigation is to visualize the complex event as a tree structure in which each node represents an event or a subevent. In the hypothetical screen Figure <ref type="figure">2</ref>, the user is focusing on the node for the subevent "first half of the final game", and the media associated with that subevent are shown on the right-hand side of the screen. Nodes representing higher-level events can also have media associated with them, for example a video that covers the entire game. <ref type="foot" target="#foot_2">5</ref>4 Use Case B: Inserting New Media Into an Event Structure</p><p>Even if we grant that users could benefit from this type of organization, the question arises of how media are going to get organized in this way. Realistically speaking, we cannot expect most users to spend a lot of time carefully creating complex event structures and assigning media to particular parts of these structures. So on the one hand, we need system-side processing that can handle a lot of the work of creating and populating complex event structures. On the other hand, since we cannot assume that a Fig. 
<ref type="figure">2</ref>. Proposed visualization of the structure of a complex event in such a way that it can be used for browsing for media associated with subevents.</p><p>fully automatic solution will be satisfactory, we have to design the user interaction in such a way that users can help the system out without investing much effort.</p><p>In this use case, we consider how users might insert media into an existing complex event structure. (The problem of creating such a structure in the first place will be considered below.) Suppose, concretely, that a photographer has created photos and videos of the Euro 2008 final and would like to add them to the Glocal site (e.g., to sell them or to share them with friends).</p><p>In Figure <ref type="figure">3</ref>, she opens up a new node "New Media" under the "Final" event and uploads the media to the space on the right (which serves as a sort of inbox).</p><p>The user could in principle specify by hand whether each medium belongs to the first half, the second half, or the whole game (as with a video that includes highlights form both halves). But the system should be able to do this work largely automatically. Essentially, it can compare the space and time coordinates of the new media-and the low-level properties of their images-with those of the already categorized media.</p><p>In Figure <ref type="figure" target="#fig_1">4</ref>, the left-hand side of the screenshot shows the system's tentative sorting of the images. The small blue and white icons indicate the system's confidence level: the more blue, the higher the confidence.</p><p>The right-hand side of the screenshot shows why it can be important to leave the last word to the user: The user has now deleted two of the low-confidence images (which she now recognizes as being largely irrelevant) and accepted the system's classification of the other images. 
This example illustrates that, if the user can count on a reasonable amount of intelligence on the part of the system, she can save some of her own time, even if the system's performance is imperfect. With a bit of effort, the user could have recognized by herself that the photos of the team lining up before the game and of the young lady in the stands do not really belong in the same category as the other photos and videos. But if she knows that the system will make it easy for her to remove any superfluous photos, she doesn't have to be so selective when offering them in the first place.</p><p>Fig. <ref type="figure">3</ref>. Hypothetical screenshot of a situation in which a user is preparing to insert a number of new media into the structure for a complex event.</p></div>
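<div xmlns="http://www.tei-c.org/ns/1.0"><p>The tentative sorting described in this use case can be approximated by a very simple nearest-neighbor comparison of capture times. The following Python fragment is only a toy model of what the backend might do; the function name, the time representation (minutes from kickoff), and the confidence formula are all hypothetical assumptions, and a real system would additionally use GPS coordinates and low-level image features, as described above.</p><p>
```python
def classify_medium(new_time, categorized):
    """Assign a new medium to the subevent whose known media are closest
    in capture time, with a confidence that decays with temporal distance.
    `categorized` maps each subevent name to the capture times (in minutes
    from kickoff) of its already categorized media."""
    def distance(subevent):
        return min(abs(new_time - t) for t in categorized[subevent])
    best = min(categorized, key=distance)
    confidence = 1.0 / (1.0 + distance(best) / 15.0)  # hypothetical decay constant
    return best, confidence

# A photo taken 25 minutes after kickoff is tentatively placed in the
# first half; the confidence value would drive the blue/white icons.
best, confidence = classify_medium(25, {
    "First Half": [0, 20, 44],
    "Second Half": [46, 70, 90],
})
```
</p><p>Low-confidence assignments (mostly white icons) are exactly the ones that the design above invites the user to review, correct, or delete.</p></div>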
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Use Case C: Creating a New Complex Event</head><p>But what if the user's new media concern a complex event that is not already represented in the system-maybe because it is of only local interest? Specifically, assume that a mother has taken photos and videos of her 14-year-old daughter's local soccer tournament. The user will have to create a new complex event instance with an appropriate structure. So in principle, she needs either to find an existing event structure that she can instantiate or create a (partially) new structure that is suitable for describing her event.</p><p>The main challenge lies in the fact that most users won't be willing or able to reason in terms of event structures.</p><p>The approach that we propose is to support a "copy, paste, and modify" style of event creation. A familiar-sounding example of this general approach is an author who creates a properly formated submission to the SETN 2010 conference by taking a Word document with a submission to the SETN 2008 conference:</p><p>-If the structure of the author's new submission is exactly parallel to the structure of the old submission, all the author has to do is replace the original content with his own content. He may not have to think explicitly about the structure at all. -Even if the structure of the old document is not quite right, the author can adjust it in an ad hoc way in the new document, without having to think in general terms about document structures. For example, he might add an appendix using the same format as for one of the normal sections of the paper. 
-An intelligent system could support this type of activity by comparing the user's new document with other SETN 2008 (or similar) papers and perhaps suggesting improvements in the structure (e.g., a slightly different way of formatting a section that has the title "Appendix" and comes at the end of the paper).</p><p>In Figure <ref type="figure">5</ref>, we assume that the user who wants to add media of her daughter's soccer tournament has already seen the event structure for Euro 2008 and has therefore decided to copy it as a starting point for the new tournament. She has recognized the need to simplify the structure somewhat and has renamed a couple of the subevents. For example, the youth soccer tournament does not have a distinction between a "Group Stage" and a "Knockout Stage"; it begins directly with the quarterfinals.</p><p>Fig. <ref type="figure">5</ref>. Illustration of a situation in which a user has (a) created a new complex event instance, using an existing event instance as a starting point; and (b) inserted a small number of media into the new structure so as to enable the system to insert the other new media.</p><p>The figure shows the state of the system after the user has (as in the previous use case) uploaded her "new media", which concern various games in the tournament, and assigned one medium to each leaf node in the hierarchy. Note that it is necessary for the user to do this initial work of placing some media in the appropriate places, since in this situation the system initially does not know any details about the subevents represented by the nodes and can therefore not perform an initial tentative categorization of new media, as it did in the previous use case.</p><p>The system now has some information about the times and places of the games, about the colors of the teams' uniforms in each game, etc.
Given this information, the system can guess at the classification of the remaining media, as before (the confidence levels are not shown in the figure).</p><p>But it is unlikely that all of the media will fit naturally into the structure that the user has just created, given that this structure was assembled ad hoc on the basis of a structure for another complex event. We must assume that there may be media that call for some adaptation of the event structure.</p><p>In our example, as shown in Figure <ref type="figure">6</ref>, the system notes that the last two photos don't seem to fit into any subevent. The system might conceivably ask the user to extend the event structure to create a slot for them, but most users would find this operation difficult.</p><p>Fig. <ref type="figure">6</ref>. Illustration of how the system might suggest an improvement on the user's structuring of the complex event, making use of existing representations of similar events.</p><p>So instead, the system examines the structures of other complex events (in this case: soccer tournaments) that have been created and used in the past. It notices that some of these events have included a "Celebration" event right after the end of the final game.</p><p>It therefore tentatively introduces this event node, putting the questionable media under it and offering an explanation of why the new subevent seems reasonable.</p><p>If the user doesn't like the suggestion, she can ask the system to suggest other subevents in a similar way (or she can just delete the photos, if she can see that they are irrelevant).</p></div>
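<div xmlns="http://www.tei-c.org/ns/1.0"><p>The final step of this use case, in which the system proposes a "Celebration" subevent by consulting the structures of similar past events, can be sketched as a frequency count over subevent labels. Again, this Python fragment is a hypothetical illustration rather than GLOCAL code; a real system would match subevents by more than their labels, and the example structures below are invented.</p><p>
```python
from collections import Counter

def suggest_subevent(user_subevents, similar_events):
    """Suggest the subevent label that occurs most often in the structures
    of similar past events but is missing from the user's own structure.
    Returns the label and its prevalence among the similar events."""
    known = set(user_subevents)
    counts = Counter(label
                     for structure in similar_events
                     for label in structure
                     if label not in known)
    if not counts:
        return None
    label, freq = counts.most_common(1)[0]
    return label, freq / len(similar_events)

# Structures of other soccer tournaments stored in the system (hypothetical):
similar = [
    ["Group Stage", "Knockout Stage", "Celebration"],
    ["Quarterfinals", "Final", "Celebration"],
    ["Quarterfinals", "Final"],
]
suggestion = suggest_subevent(["Quarterfinals", "Semifinals", "Final"], similar)
```
</p><p>The prevalence value could serve as the basis for the explanation that the system offers the user, and for ranking alternative suggestions if the first one is rejected.</p></div>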
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Related Work</head><p>A great deal of research on support for photo annotation-mostly not involving indexing in terms of events-has yielded many ideas about effective combinations of backend processing and interaction design (see, e.g., <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b0">[1]</ref>, for individual contributions and <ref type="bibr" target="#b2">[3]</ref> for a brief synthetic overview). Some of the work in this area also refers to indexing in terms of events. Some research (e.g., that of <ref type="bibr" target="#b1">[2]</ref>) focuses on the technical aspects of event clustering. <ref type="bibr" target="#b3">[4]</ref> likewise explore event clustering somewhat similar to the type of clustering assumed in the scenarios in this paper, also providing evidence for the viability of the sort of collaboration between user and system that is proposed here.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Conclusions and Next Steps</head><p>These scenarios and hypothetical examples illustrate how it may be possible and natural for untrained users to (a) benefit from an organization of media in terms of complex event structures and even (b) to create new event structures themselves, as a natural by-product of organizing their own media.</p><p>We are currently working on variants of these scenarios, which will then be presented to typical potential users, whose responses will presumably suggest desirable changes. The subsequent step will be the implementation of mockups that allow the interaction design to be tested.</p><p>These scenarios do make some strong assumptions about the capabilities of GLO-CAL's backend processing, which is being developed in parallel in other parts of the GLOCAL project. Understanding of how the interaction can work helps to guide the development of the backend processing, and vice versa.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Partial screenshot from the photo exchange site Sport Photo Gallery (http://www.sportphotogallery.com/). Though the site offers many photos of the Euro 2008 tournament, it is not possible to navigate among them in terms of the structure of the tournament.</figDesc><graphic coords="2,134.77,115.84,345.84,108.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Illustration of how GLOCAL can propose a classification of a user's new media (left) and how the user can second-guess the system (right).</figDesc><graphic coords="6,134.77,115.83,345.82,274.86" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="4,134.77,115.84,345.83,201.31" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="5,134.77,115.84,345.82,251.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,134.77,115.84,345.83,226.28" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Since a special session of the EVENTS</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2010" xml:id="foot_1">workshop is being devoted to this project, we assume that the workshop proceedings will contain an introductory overview of the project; therefore, we do not include such an overview in this submission. If necessary, we can add such an overview in the final version of this paper. 2 http://www.flickr.com/ 3 http://www.citizenside.com/en/sell-share-photos-videos.html 4 http://www.sportphotogallery.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2">The visualizations this paper were created with the MindManager software; they therefore do not reflect the appearance of the interfaces that will ultimately appear in the GLOCAL system.</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The research described in this position paper is being conducted in the context of the 7th Framework EU Integrating Project GLOCAL: Event-based Retrieval of Networked Media (http://www.glocal-project.eu/) under grant agreement 248984.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Toward content-aware multimodal tagging of personal photo collections</title>
		<author>
			<persName><forename type="first">P</forename><surname>Barthelmess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mcgee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth International Conference on Multimodal Interfaces</title>
				<meeting>the Ninth International Conference on Multimodal Interfaces</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="122" to="125" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Temporal event clustering for digital photo collections</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Foote</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Girgensohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wilcox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Multimedia Computing, Communications and Applications</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="269" to="288" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Bridging the motivation gap for individual annotators: What can we learn from photo annotation systems?</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jameson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First Workshop on Incentives for the Semantic Web at the 2008 International Semantic Web Conference</title>
				<meeting>the First Workshop on Incentives for the Semantic Web at the 2008 International Semantic Web Conference<address><addrLine>Karlsruhe, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Semi-automatic photo annotation strategies using event based clustering and clothing based person recognition</title>
		<author>
			<persName><forename type="first">B</forename><surname>Suh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">B</forename><surname>Bederson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="524" to="544" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Image annotation with Photocopain</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Tuffield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dupplaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chakravarthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Brewster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Gibbins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>O'hara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sleeman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shadbolt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wilks</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First International Workshop on Semantic Web Annotations for Multimedia, held at the World Wide Web Conference</title>
				<meeting>the First International Workshop on Semantic Web Annotations for Multimedia, held at the World Wide Web Conference</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
