<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">GaChat: A chat system that displays online retrieval information in dialogue text</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Satoshi</forename><surname>Horiguchi</surname></persName>
							<email>horiguchi@mos.ics.keio.ac.jp</email>
							<affiliation key="aff0">
								<orgName type="department">Graduate School of Science and Technology</orgName>
								<orgName type="institution">Keio University</orgName>
								<address>
									<addrLine>3-14-1 Hiyoshi, Kohoku-ku</addrLine>
									<postCode>223-8522</postCode>
									<settlement>Yokohama</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Akifumi</forename><surname>Inoue</surname></persName>
							<email>akifumi@cs.teu.ac.jp</email>
							<affiliation key="aff1">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">Tokyo University of Technology</orgName>
								<address>
									<addrLine>1404-1 Katakura</addrLine>
									<postCode>192-0982</postCode>
									<settlement>Hachioji</settlement>
									<region>Tokyo</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tohru</forename><surname>Hoshi</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">Tokyo University of Technology</orgName>
								<address>
									<addrLine>1404-1 Katakura</addrLine>
									<postCode>192-0982</postCode>
									<settlement>Hachioji</settlement>
									<region>Tokyo</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kenichi</forename><surname>Okada</surname></persName>
							<email>okada@ics.keio.ac.jp</email>
							<affiliation key="aff2">
								<orgName type="department">Faculty of Science and Technology</orgName>
								<orgName type="institution">Keio University</orgName>
								<address>
									<addrLine>3-14-1 Hiyoshi, Kohoku-ku</addrLine>
									<postCode>223-8522</postCode>
									<settlement>Yokohama</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">GaChat: A chat system that displays online retrieval information in dialogue text</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F9F0BDC16453753108E96D370BE9CA28</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Text chat communication</term>
					<term>Instant messaging service</term>
					<term>Web based communication</term>
					<term>H.5.3 Group and Organization Interfaces: CSCW</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Text chat systems are popular and widely used by many users. However, redundant interactions sometimes occur between users because such systems provide little awareness. In this paper, we propose a text chat system called "GaChat", which automatically appends information related to the dialogue text exchanged between its users. First, proper nouns are extracted from the dialogue text by morphological analysis. Then, online images and articles related to those nouns are displayed alongside the dialogue text. Resolving ambiguity in this way helps users avoid redundant interactions such as searching for a phrase or asking for its details. This paper describes a prototype implementation and a first evaluation experiment.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>INTRODUCTION</head><p>Text chat systems are popular and widely used as one of the easiest communication tools: they let users exchange text messages smoothly and quickly. On the other hand, text chat provides less awareness than face-to-face or video-based communication. Under this low-awareness condition, it is difficult to infer a partner's vocabulary or subtle differences in nuance, and we often have to exchange many redundant messages to explain a trivial matter.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Workshop on Visual Interfaces to the Social and the Semantic Web (VISSW2009), IUI2009, Feb 8 2009, Sanibel Island, Florida, USA. Copyright is held by the author/owner(s).</p><p>One approach to covering the lack of awareness is to attach additional devices such as video cameras and thermometers to the system. However, a text chat system with a video camera is effectively a video chat system; it is no longer a text chat system, and video chat imposes a higher mental load than text chat. Such devices also compromise the simplicity of the text chat system.</p><p>We believe that a text chat system should be used for casual communication, which is easy to start, easy to keep, and easy to quit. We propose a text chat system called "GaChat", developed to avoid misunderstandings caused by low awareness without any additional devices: the system uses only a keyboard as its input device. Instead, the system explicitly displays images and articles related to the users' dialogue.</p><p>The remainder of this paper is organized as follows. In Section 2, we discuss related work on communication support with text chat. In Section 3, we describe our chat system design. In Section 4, we present our prototype system. In Section 5, we give an example of operating the prototype. In Section 6, we discuss the current limitations of our prototype system. We conclude in Section 7 by discussing the near-future work that we plan to explore.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>RELATED WORK</head><p>Our system displays complementary information simultaneously with the text messages. Several studies have taken similar approaches in various situations <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3]</ref>. Lieberman proposed a system that helps users take opportunities in their daily work through image retrieval and annotation <ref type="bibr" target="#b3">[4]</ref>.</p><p>These studies limit the use of chat to a particular purpose and improve communication in that limited situation. Lock-on-Chat <ref type="bibr" target="#b4">[5]</ref> is a chat system for communication at an academic conference. Users can share a snapshot of a slide among the participants and leave comments freely on a specific part of the snapshot. This system was actually deployed at a conference, and many participants discussed actively with it. However, our goal is not such a specific situation but everyday casual situations. Munemori et al. <ref type="bibr" target="#b5">[6]</ref> proposed an "emoticon" chat system in which a user can use emoticons only; no plain text messages can be exchanged. Although such a system can convey universal messages regardless of language, the content of the messages is limited.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>While communicating with text chat systems, we often find unknown or unclear words in the messages. Most of us turn to a web search engine to resolve the question. This search action is troublesome because it requires another web browser besides the chat system, and whether to perform it at all is left to the user's own choice.</p><p>Windows Live Messenger <ref type="bibr" target="#b6">[7]</ref>, one of the most popular chat systems, has already integrated a web search function. A user can search for keywords entered in the message area by pushing the "search" button instead of the "submit" button. However, the result is returned as a URL; a retrieval example is illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. To see what is behind the URL, the user has to launch the browser again, and has to chat and search at the same time. From a questionnaire given to our colleagues, we found that the "retrieval (search)" button was not used actively; its very existence was not clear to them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>METHOD</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Outline</head><p>The outline of the proposed method is shown in Figure <ref type="figure" target="#fig_2">2</ref>. A user inputs and sends a message in the same manner as in a normal text chat system. The message is sent to the GaChat server, which extracts the proper noun from the message and fetches the article about that noun from Wikipedia. These additional data are automatically displayed on both chat windows.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the proposed method, extra activities between the chat users, such as searching for a phrase or asking for its details, can be suppressed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>About online retrieval information</head><p>A typical kind of word or phrase that is understood differently by different people is the proper noun. A proper noun indicates a specific object such as the name of a person or a place, and one either knows the object or does not. If a user does not have enough knowledge about the proper nouns appearing in an exchanged message, the message might not be interpreted properly. We also have to pay attention to differences in nuance and connotation.</p><p>Proper nouns are also frequently used as search terms because they make a search more specific. If we use general nouns as search terms, the results may be enormous and ambiguous, and such information does not help mutual understanding in text chat communication. Therefore, our system uses only proper nouns to fetch the supplemental images and articles. If there are multiple proper nouns in a chat message, we choose the last one, because Japanese grammar (the authors' native language) tends to place more emphasis on the last part of a sentence.</p><p>As ways of presenting information about a proper noun, we consider image information and formal textual information. Supplementary information should be strongly related to the object the user intended to explain, so our system fetches this content from Wikipedia. For instance, when one tries to explain the school "Tokyo University of Technology" to a friend who does not know it, a symbolic picture and outline information about its history and faculties may be impressive and aid understanding (Figure <ref type="figure" target="#fig_4">3</ref>). Such well-edited content is currently collected on encyclopaedic sites.</p></div>
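The last-proper-noun heuristic described above can be sketched in Java, the paper's implementation language. This is a minimal sketch only: the token representation and the part-of-speech labels are our own stand-ins, since the actual API of the Sen analyzer used by GaChat is not reproduced here.

```java
import java.util.List;

// Sketch of the phrase-selection step, assuming the morphological
// analyzer has already tagged each token with a part-of-speech string.
// Token and the "noun-proper" label are illustrative stand-ins.
public class PhraseSelector {
    public record Token(String surface, String pos) {}

    // Return the LAST proper noun in the message (or null if none),
    // following the paper's heuristic that Japanese tends to place
    // emphasis on the final part of a sentence.
    public static String lastProperNoun(List<Token> tokens) {
        String last = null;
        for (Token t : tokens) {
            if (t.pos().startsWith("noun-proper")) {
                last = t.surface();
            }
        }
        return last;
    }
}
```

For the example conversation later in the paper, a message containing both "Tokyo Tower" and "Zojo-ji" would select "Zojo-ji" as the retrieval phrase.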
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Operation</head><p>The proposed system consists of a chat server and a chat client. The chat server (GaChat server) manages the conversation, analyzes the conversation text, performs the image search, and retrieves the article from the encyclopedia. The chat client (GaChat client) uses these additional data during the conversation. When a message is entered, the system extracts the proper noun and uses it as the retrieval term for both the image search and the encyclopedia. The URL of an image is acquired from the image search, and the title and body of the article are acquired from the encyclopedia retrieval. The client then displays the user name and message in the message area, the retrieved image in the image area, and the retrieval term together with the article title and body in the encyclopedia area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IMPLEMENTATION OF GACHAT</head><p>This chapter describes the chat system "GaChat", which automatically attaches retrieval information to conversation phrases based on the proposed method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Environment</head><p>GaChat is composed of a client and a server, both implemented in Java. Sen <ref type="bibr" target="#b7">[8]</ref> is used as the morphological analyzer to extract proper nouns from the conversation text on the server. The Yahoo! API <ref type="bibr" target="#b8">[9]</ref> is used to search images for the extracted phrase, and the 'SimpleAPI Wikipedia API <ref type="bibr" target="#b9">[10]</ref>' is used for article retrieval (Figure <ref type="figure">4</ref>).</p></div>
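The image search call for the extracted phrase can be sketched as follows. The endpoint host and the `appid` parameter name are assumptions modeled on the cited Yahoo! image search web service, not taken from the paper; the phrase is UTF-8 URL-encoded as the service requires.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of building the REST request URL for the image search.
// The endpoint and parameter names are assumptions modeled on the
// Yahoo! image search web service cited in the paper [9].
public class RequestUrlBuilder {
    static final String ENDPOINT =
        "http://api.search.yahoo.co.jp/ImageSearchService/V1/imageSearch";

    public static String imageSearchUrl(String phrase, String appId) {
        // The phrase must be URL-encoded in UTF-8 before being embedded.
        String encoded = URLEncoder.encode(phrase, StandardCharsets.UTF_8);
        return ENDPOINT + "?appid=" + appId
             + "&query=" + encoded
             + "&results=12";   // first-page size used by GaChat
    }
}
```

Encoding the phrase rather than concatenating it raw matters because extracted proper nouns are typically Japanese and would otherwise produce an invalid URL.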
<div xmlns="http://www.tei-c.org/ns/1.0"><head>GaChat server</head><p>Besides relaying the messages of a normal text chat, the GaChat server performs text analysis, image search, and encyclopedia retrieval. The text analysis extracts the proper noun from each remark, and this extracted phrase is used to retrieve the image and the encyclopedia article (Figure <ref type="figure" target="#fig_6">5</ref>).</p><p>We now discuss the technical details. A request URL is built in REST form and sent to the image search web service; the phrase must be URL-encoded in UTF-8. The image URL is taken from the response field "/ResultSet/Result/Url". We extract the URLs of retrieved images from the top 12 results, i.e. the number of results on the first page, because a research company in the United States reports that 62% of users look only at the first page of results <ref type="bibr" target="#b10">[11]</ref>.</p><p>The encyclopedia retrieval follows the specification of the Wikipedia API and generates a request URL with the phrase similarly encoded in UTF-8. XML is selected as the output format and then parsed: the title of the article is taken from the response field "/results/result/title", and the digest of the article from "/results/result/body".</p></div>
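The response handling above can be sketched as follows, using the "/ResultSet/Result/Url" path given in the paper. This is a sketch under simplifying assumptions: the sample document in the test is invented, response namespaces are ignored (the default DOM parser here is not namespace-aware, so the literal path works directly), and the live service is never called.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch of extracting image URLs from the image-search response using
// the "/ResultSet/Result/Url" field named in the paper. Only the top 12
// URLs (one result page) are kept.
public class ResponseParser {
    public static List<String> imageUrls(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("/ResultSet/Result/Url", doc, XPathConstants.NODESET);
            List<String> urls = new ArrayList<>();
            for (int i = 0; i < nodes.getLength() && i < 12; i++) {
                urls.add(nodes.item(i).getTextContent());
            }
            return urls;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The Wikipedia response would be handled the same way, with the paths "/results/result/title" and "/results/result/body" substituted for the image URL path.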
<div xmlns="http://www.tei-c.org/ns/1.0"><head>GaChat client</head><p>We explain each function of the GaChat client with reference to its GUI (Figure <ref type="figure" target="#fig_6">5</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>EXPERIMENT</head><p>We first verified the operation of GaChat. As a precondition, we confirmed that a proper noun is extracted from each remark, and that each mounted function works: the image search and the display of the retrieved image, and the retrieval and display of the Wikipedia article. Although the retrieval and display of the two functions take a small amount of time, this was not found to be an obstacle.</p><p>We then introduce the results of communication with GaChat. Participants were recruited from the laboratory to which the authors belong and communicated using GaChat so that we could look for characteristic behavior; the communication was text only. The participants usually used chat software and could touch-type well enough that entering messages with the keyboard did not obstruct the communication. One example of a characteristic conversation follows; it is shown in Figures <ref type="figure" target="#fig_8">6 and 7</ref>.</p><p>Horiguchi: I went to Tokyo Tower yesterday. Kodaira: Good. Have you been to Zojo-ji Temple near Tokyo Tower?</p><p>The phrase (proper noun) extracted from Horiguchi's remark is "Tokyo Tower", so a picture and a Wikipedia article related to Tokyo Tower are displayed. The phrase (proper noun) extracted from Kodaira's remark is "Zojo-ji Temple", so a picture and a Wikipedia article related to Zojo-ji Temple are displayed. Zojo-ji Temple is a Buddhist temple in the Shiba neighborhood of Minato-ku in Tokyo, Japan.</p><p>In this trial, when related information on the topic was presented synchronously during the text communication, we confirmed that the participants reacted especially to the images. However, even including this conversation example, we could not confirm at the present stage that the article information from the encyclopedia influenced the topic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>DISCUSSION</head><p>Because this system automatically extracts the phrase (proper noun) from the content of the conversation and displays the retrieval result, it requests no special work from the user. Each person might nevertheless be implicitly inspecting the article information in the encyclopedia to confirm things they do not understand; it will be necessary to capture and investigate this in the future.</p><p>In the current implementation of GaChat, the image information is taken from among the top 12 results, so the image information and the content of the Wikipedia article might not match. For example, for the retrieval term "Ozawa", an actress named Ozawa is displayed as the image information while the politician Ichiro Ozawa is displayed in the Wikipedia article (Figure <ref type="figure" target="#fig_9">8</ref>). Extracting a person's name only by the family name is a problem when two or more famous people share that family name. Although this needs to be handled as an error, such a phenomenon might also add "interest" to the communication.</p><p>Although matching the image information and the Wikipedia article might seem important, general information that everyone expects is not strictly necessary. Because the concept of GaChat is to fix the topic through the conversation text and to display related information synchronously, the accuracy of the retrieval result is not so important. More important is the stability of always displaying image information and a Wikipedia article for the conversation, and under the present situation there is no problem with this part.</p><p>We want to continue the trial evaluation and analyze further characteristics in the future. A quantitative evaluation is also necessary to judge the system in more detail and to analyze its state as communication; selecting and constructing the evaluation criteria is therefore important.</p><p>Moreover, when the Wikipedia article itself contains image information, a method of displaying it should be devised; it will be necessary to implement that function and compare it with the proposed technique. This time, we could not confirm cases where the Wikipedia article information was referred to frequently, and we intend to advance the examination and search for application examples in the future.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CONCLUSIONS</head><p>In this paper, we proposed GaChat. We outlined how our chat system copes with low awareness without any additional devices, aiming to avoid communication obstruction and to support consensus between users. In a text chat, the system synchronously presents information related to phrases in the conversation, thereby narrowing the possible meanings of a phrase and helping establish a shared understanding. Compared with approaches that transmit additional awareness information, we believe this keeps the lighthearted communication that is the advantage of text chat. The opinions received from the participants were broadly favorable.</p><p>In the evaluation using the prototype, we confirmed that attention gathered on the image information displayed synchronously with the conversation text and that the topic was thereby fixed. We also confirmed that attention was drawn to the presented image information even when the user had no knowledge of it. This suggests that the system reduces communication obstruction and helps users share knowledge. Moreover, information displayed synchronously with the messages may play a role in changing the topic.</p><p>It will be necessary to continue the trial evaluation and to perform a quantitative evaluation in the future. We want to improve the system based on the problems clarified by the evaluation experiment, and to investigate the role of synchronously displayed information in topic change when communication proceeds smoothly.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Retrieval function of the existing system</figDesc><graphic coords="2,73.62,54.78,199.62,203.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Overview of the system</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>The explanation about Tokyo University of Technology (Location: Hachioji, Tokyo, JP; Website: http://www.teu.ac.jp/; etc.)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Image information and formal textual information</figDesc><graphic coords="3,76.62,94.07,81.02,54.67" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. Operation of the system</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. Details of function</figDesc><graphic coords="3,344.73,54.80,191.48,127.67" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6. Conversation 1</figDesc><graphic coords="4,69.08,57.11,208.63,139.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 7 .</head><label>7</label><figDesc>Figure 7. Conversation 2</figDesc><graphic coords="4,332.10,57.11,208.63,139.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 8 .</head><label>8</label><figDesc>Figure 8. Example of disagreement between image information and encyclopedia information</figDesc><graphic coords="5,82.73,60.53,189.10,187.20" type="bitmap" /></figure>
		</body>
		<back>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>(1) Registers the chat user name and the chat room. (2) Displays the registered name and chat room. (3) Inputs and sends a message. (4) Displays the conversation content. (5) Displays the proper noun and the URL for that phrase. (6) Displays the image related to the phrase. (7) Displays the Wikipedia article related to the phrase.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Information access for context-aware appliances (poster session)</title>
		<author>
			<persName><forename type="first">Gareth</forename><forename type="middle">J F</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">J</forename><surname>Brown</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SIGIR &apos;00: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="382" to="384" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Information access in context</title>
		<author>
			<persName><forename type="first">L</forename><surname>Birnbaum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Budzik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Hammond</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="37" to="53" />
			<date type="published" when="2001-03">Mar. 2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Using physical context for just-in-time information retrieval</title>
		<author>
			<persName><forename type="first">B</forename><surname>Rhodes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Computers</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1011" to="1014" />
			<date type="published" when="2003-08">Aug. 2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Aria: An agent for annotating and retrieving images</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Lieberman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elizabeth</forename><surname>Rosenzweig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Push</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="57" to="62" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Lock-on-chat: Boosting anchored conversation and its operation at a technical conference</title>
		<author>
			<persName><forename type="first">Takeshi</forename><surname>Nishida</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">INTERACT 2005</title>
		<title level="s">Springer LNCS</title>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="970" to="973" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Development and application of pictograph chat communicator</title>
		<author>
			<persName><forename type="first">Jun</forename><surname>Munemori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shunsuke</forename><surname>Miyai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Junko</forename><surname>Ito</surname></persName>
		</author>
		<idno>GN-61</idno>
	</analytic>
	<monogr>
		<title level="j">IPSJ SIG Technical Reports</title>
		<imprint>
			<biblScope unit="page">2006</biblScope>
			<date type="published" when="2006-09">Sep 2006</date>
		</imprint>
	</monogr>
	<note>Japanese</note>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="http://messenger.live.jp/" />
		<title level="m">WindowsLiveMessenger</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><surname>Sen</surname></persName>
		</author>
		<ptr target="http://ultimania.org/sen/" />
		<title level="m">Japanese morphological analysis system</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="http://developer.yahoo.co.jp/search/image/V1/imageSearch.html" />
		<title level="m">Image search Web service</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
	<note>Yahoo! Developer Network</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title/>
		<ptr target="http://wikipedia.simpleapi.net/" />
	</analytic>
	<monogr>
		<title level="j">WikipediaAPI. SimpleAPI</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><surname>iProspect</surname></persName>
		</author>
		<ptr target="http://www.iprospect.com/index.htm" />
		<title level="m">iProspect Search Engine User Behavior Study</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
