<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Multi-modal relevance feedback for medical image retrieval</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dimitrios</forename><surname>Markonis</surname></persName>
							<email>dimitrios.markonis@hevs.ch</email>
							<affiliation key="aff0">
								<orgName type="institution">HES-SO TechnoPole</orgName>
								<address>
									<settlement>Sierre</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roger</forename><surname>Schaer</surname></persName>
							<email>roger.schaer@hevs.ch</email>
							<affiliation key="aff1">
								<orgName type="institution">HES-SO TechnoPole</orgName>
								<address>
									<settlement>Sierre</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Henning</forename><surname>Müller</surname></persName>
							<email>henning.mueller@hevs.ch</email>
							<affiliation key="aff2">
								<orgName type="institution">HES-SO TechnoPole</orgName>
								<address>
									<settlement>Sierre</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Multi-modal relevance feedback for medical image retrieval</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">EFF30EF90F765CBD56EDBA523CD16338</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>relevance feedback</term>
					<term>content-based image retrieval</term>
					<term>medical image retrieval</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Medical image retrieval can assist physicians in finding information supporting their diagnosis. Systems that allow searching for medical images need to provide tools for quick and easy navigation and query refinement as the time for information search is often short.</p><p>Relevance feedback is a powerful tool in information retrieval. This study evaluates relevance feedback techniques with regard to the content they use. A novel relevance feedback technique that uses both text and visual information of the results is proposed.</p><p>Results show the potential of relevance feedback techniques in medical image retrieval and the superiority of the proposed algorithm over commonly used approaches.</p><p>Future steps include integrating semantics into relevance feedback techniques to benefit of the structured knowledge of ontologies and experimenting on the fusion of text and visual information.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>Searching for images is a daily task for many medical professionals, especially in image-oriented fields such as radiology. However, the huge amount of visual data in hospitals and the medical literature is not always easily accessible, and physicians generally have little time for information search as they are charged with many tasks. Therefore, medical image retrieval systems need to return information adjusted to the knowledge level and expertise of the user in a quick and precise fashion. A well-known technique for improving search results through user interaction is relevance feedback <ref type="bibr" target="#b13">[13]</ref>. Relevance feedback allows the user to mark results returned in a previous search step as relevant or irrelevant in order to refine the initial query. The concept behind relevance feedback is that although users may have difficulties formulating a precise query for a specific task, they generally see quickly whether a returned result is relevant to their information need or not. This technique found use in image retrieval particularly with the emergence of content-based image retrieval (CBIR) systems <ref type="bibr" target="#b18">[18,</ref><ref type="bibr" target="#b19">19,</ref><ref type="bibr" target="#b20">20]</ref>. Following the CBIR paradigm, the visual content of the marked results is used to refine the initial image query. With the result images represented as a grid of thumbnails, relevance feedback can be applied quickly to speed up the search iterations and refine results. Recent user tests with radiologists on a medical image search system also showed that this method is intuitive and straightforward to learn <ref type="bibr">[7]</ref>.</p><p>Depending on whether the user manually provides the feedback to the system (e.g. 
by log analysis), relevance feedback can be categorized as explicit or implicit. Moreover, the information obtained by relevance feedback can be used to affect either the general behaviour of the system (long-term learning) or only the current search session (short-term learning). In <ref type="bibr" target="#b11">[11]</ref>, a market basket analysis algorithm is applied in image retrieval for long-term learning. A recent review of short-term and long-term learning relevance feedback techniques in CBIR can be found in <ref type="bibr" target="#b6">[6]</ref>. An extensive survey of relevance feedback in text-based retrieval systems is presented in <ref type="bibr" target="#b15">[15]</ref> and for CBIR in <ref type="bibr" target="#b14">[14]</ref>.</p><p>In the medical informatics field, <ref type="bibr" target="#b1">[1]</ref> applies CBIR with relevance feedback to mammography retrieval. In <ref type="bibr" target="#b12">[12]</ref>, an image retrieval framework using relevance feedback, which uses support vector machines to compute the refined queries, is evaluated on a dataset of 5000 medical images.</p><p>In this paper we evaluate different explicit, short-term relevance feedback techniques using visual content or text for medical image retrieval. We propose a technique that combines visual and text-based relevance feedback and show that it achieves performance competitive with state-of-the-art approaches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">METHODS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Rocchio algorithm</head><p>One of the most well-known relevance feedback techniques is Rocchio's algorithm <ref type="bibr" target="#b13">[13]</ref>. Its mathematical definition is given below:</p><formula xml:id="formula_0">q_m = \alpha q_o + \beta \frac{1}{|D_r|} \sum_{d_j \in D_r} d_j - \gamma \frac{1}{|D_{nr}|} \sum_{d_j \in D_{nr}} d_j<label>(1)</label></formula><p>where q_m is the modified query, q_o is the original query, D_r is the set of relevant images, D_nr is the set of non-relevant images, and α, β and γ are weights. Typical values for the weights are α = 1, β = 0.8 and γ = 0.2. Rocchio's algorithm is typically used in vector space models and also for CBIR. Intuitively, the original query vector is moved towards the relevant vectors and away from the irrelevant ones. By weighting the positive and negative parts separately, a known problem of CBIR can be avoided: when there is more negative than positive feedback, many relevant images can otherwise disappear from the result set.</p></div>
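As an illustrative sketch (not the paper's actual implementation), Equation 1 can be written in a few lines of NumPy; the function name, the dense-vector representation (one feature vector per row) and the default weights are assumptions for illustration.

```python
import numpy as np

def rocchio(q_orig, relevant, non_relevant, alpha=1.0, beta=0.8, gamma=0.2):
    """Rocchio query modification (Equation 1): move the query vector
    towards the centroid of the relevant vectors and away from the
    centroid of the non-relevant ones."""
    q_mod = alpha * q_orig
    if len(relevant) > 0:            # positive feedback term
        q_mod = q_mod + beta * relevant.mean(axis=0)
    if len(non_relevant) > 0:        # negative feedback term
        q_mod = q_mod - gamma * non_relevant.mean(axis=0)
    return q_mod
```

With the typical weights above, a query vector [1, 0] and a single relevant image [0, 1] yield the modified query [1.0, 0.8].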
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Late fusion</head><p>Another technique that has shown potential in image retrieval <ref type="bibr" target="#b5">[5]</ref> is late fusion. Late fusion <ref type="bibr" target="#b2">[2]</ref> is used in information retrieval to combine result lists. It can be applied for fusing multiple features, multiple queries, and multiple modalities. The concept behind this method is to merge the result lists into a single list while boosting common occurrences using a fusion rule.</p><p>For example, the fusion rule of the score-based late fusion method CombMNZ <ref type="bibr" target="#b17">[17]</ref> is defined as:</p><formula xml:id="formula_2">S_{CombMNZ}(i) = F(i) \cdot S_{CombSUM}(i)<label>(2)</label></formula><p>where F(i) is the number of times an image i is present in the retrieved lists with a non-zero score. CombSUM is given by</p><formula xml:id="formula_3">S_{CombSUM}(i) = \sum_{j=1}^{N_j} S_j(i)<label>(3)</label></formula><p>where S_j(i) is the score assigned to image i in retrieved list j and N_j is the number of retrieved lists.</p></div>
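A minimal sketch of the two fusion rules, assuming each result list is represented as a mapping from image identifier to score (this dictionary representation, like the function names, is an assumption for illustration):

```python
from collections import defaultdict

def comb_sum(result_lists):
    """CombSUM (Equation 3): sum the scores an image receives
    across all retrieved lists."""
    scores = defaultdict(float)
    for lst in result_lists:
        for image_id, score in lst.items():
            scores[image_id] += score
    return dict(scores)

def comb_mnz(result_lists):
    """CombMNZ (Equation 2): CombSUM boosted by F(i), the number of
    lists in which image i appears with a non-zero score."""
    summed = comb_sum(result_lists)
    freq = {i: sum(1 for lst in result_lists if lst.get(i, 0) != 0)
            for i in summed}
    return {i: freq[i] * s for i, s in summed.items()}
```

Images that occur in several lists are thus boosted over images that score well in only one list, which is the intended behaviour of the rule.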
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Multi-modal relevance feedback</head><p>Most relevance feedback techniques use vectors from either the text or the visual models. However, it has been shown that approaches that use both text and visual information can outperform single-modal ones in image retrieval. We propose the use of multi-modal information for relevance feedback to enhance the retrieval performance. This is, to the best of our knowledge, the first time that such a technique has been proposed in image retrieval. As late fusion is applied on result lists, it is straightforward to use for combining results from visual and text queries.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Experimental setup</head><p>The following experimental setup was used to evaluate the relevance feedback techniques: the n search iterations are initiated with a text query in iteration 0. The relevant results among the top k results of iteration i are used in the relevance feedback formula of iteration i + 1, for i = 0...n − 2.</p><p>The image dataset, topics and ground truth of the ImageCLEF 2012 medical image retrieval task <ref type="bibr" target="#b9">[9]</ref> were used in this evaluation. The dataset contains more than 300'000 images from the medical open access literature.</p><p>The image captions were accessed by the text-based runs and indexed with the Lucene (http://lucene.apache.org/) text search engine. The vector space model was used along with tokenization, stopword removal, stemming and term frequency-inverse document frequency (TF-IDF) weighting. The bag-of-visual-words model described in <ref type="bibr" target="#b3">[3]</ref> and the bag-of-colors model appearing in <ref type="bibr" target="#b4">[4]</ref> were used for the visual modelling of the images. In multi-modal runs, the fusion of the visual and text information is performed only on the top 1000 text results, as the ImageCLEF evaluation only takes the top 1000 documents into account in any case. Five techniques were evaluated in this study:</p><p>1. text: text-based RF using the vector space model. Word stemming, tokenization and stopword removal are performed in both text and multi-modal runs.</p><p>2. visual rocchio: visual RF using Rocchio to fuse the relevant image vectors and CombMNZ fusion to fuse the original query's results with the visual ones.</p><p>3. visual lf: visual RF using late fusion (with the CombMNZ fusion rule) to fuse the relevant image results and the original query results with the visual ones.</p><p>4. 
mixed rocchio: multi-modal RF using Rocchio to fuse the relevant image vectors and CombMNZ fusion to fuse the original query results with the relevant caption results and relevant visual results.</p><p>5. mixed lf: multi-modal RF using late fusion (with the CombMNZ fusion rule) to fuse the relevant image results and the original query results with the relevant caption results and relevant visual results.</p></div>
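The iteration protocol above can be sketched as follows; `search` and `refine` are hypothetical placeholders for the retrieval back-end and the relevance feedback formula (e.g. Rocchio or late fusion), not part of the paper's system:

```python
def simulate_feedback(search, refine, query, ground_truth, n=5, k=20):
    """Run the experimental protocol: iteration 0 issues the original
    query; iteration i+1 refines it with the relevant images found in
    the top k results of iteration i (i = 0 .. n-2)."""
    results_per_iteration = []
    for i in range(n):
        results = search(query)                       # ranked image ids
        results_per_iteration.append(results)
        # automated feedback: relevant items in the top k, per ground truth
        relevant = [r for r in results[:k] if r in ground_truth]
        query = refine(query, relevant)               # positive-only RF
    return results_per_iteration
```

Each iteration's result list can then be scored against the ground truth (e.g. with mean average precision) to produce the per-iteration curves reported in the results.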
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">RESULTS</head><p>The evaluation of the five techniques was performed for k = 5, 20, 50, 100 and n = 5. The mean average precision (mAP) of each technique per iteration is shown in Figures <ref type="figure" target="#fig_3">1, 2, 3, 4</ref>.</p><p>Table <ref type="table" target="#tab_0">1</ref> gives the best mAP score of each run. The numbers in parentheses indicate the iteration in which this score was achieved. For scores that were the same in multiple iterations of the same run, the earliest such iteration is reported.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">DISCUSSION</head><p>All of the evaluated techniques improve retrieval after the initial search iteration. This demonstrates the potential of relevance feedback for refining medical image search queries.</p><p>Relevance feedback using only visual appearance models, while improving retrieval performance after the first iteration, performed worse than the text-based runs in most cases. Visual features still suffer from the semantic gap between their expressiveness and human interpretation. Still, this shows their usefulness for image datasets where little or no text metadata is available. Moreover, when combined with the text information in the proposed method, they improve on the text-only baseline.</p><p>The proposed multi-modal runs provide the best results in all cases except for k = 5. Surprisingly, the visual runs perform slightly better than the text and multi-modal approaches in this case. However, assuming independent and normally distributed average precision values, the significance tests show that the difference is not statistically significant.</p><p>We consider k = 20 the most realistic scenario, since users rarely inspect more than two pages of results; for grid-like result views in particular, where each page can contain 20 to 50 results, k = 20 is more realistic than k = 5. In this case the proposed methods achieve the best performance, with mAP scores of 0.2606 and 0.2635, respectively. Again, the significance tests do not find any significant difference between the three best approaches. However, applying different fusion rules for combining visual and text information (such as linear weighting) could further improve the results of the mixed approaches.</p><p>It can be noted that as k increases, the performance improvement also increases, highlighting the added value of relevance feedback. 
Larger values of k were not explored, as these scenarios were judged unrealistic.</p><p>In the visual runs, using Rocchio to combine the visual queries performs worse than late fusion. This is in accordance with the findings in <ref type="bibr" target="#b3">[3]</ref>. The reason could be that the large visual diversity of relevant images in medicine, together with the curse of dimensionality, causes the modified vector to behave as an outlier in the high-dimensional visual feature space. In the mixed runs the difference between the two methods is not statistically significant, with Rocchio performing slightly better than late fusion.</p><p>Irrelevant results were ignored, as they often have little or no impact on retrieval performance <ref type="bibr" target="#b10">[10,</ref><ref type="bibr" target="#b16">16]</ref>. More importantly, the ground truth of the dataset used contains a much larger portion of annotated irrelevant results than relevant ones. Using them was considered likely to simulate an unrealistic scenario, as users do not usually mark many results as negative examples. Having too many negative examples could also cause the modified vector to behave as an outlier. Preliminary results confirmed this hypothesis: using negative results for relevance feedback can decrease performance after the first iteration.</p><p>It should be noted that this is an automated relevance feedback experiment with positive-only feedback, and that in selective relevance feedback situations retrieval performance is expected to be even better. A larger number of steps could be investigated, but this might be unrealistic given that physicians have little time and stop after a few minutes of search <ref type="bibr" target="#b8">[8]</ref>. Often users will only test a few steps of relevance feedback at the most.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSIONS</head><p>This paper proposes the use of multi-modal information when applying relevance feedback to medical image retrieval. An experiment was set up to simulate the relevance feedback of a user on a number of medicine-related topics from ImageCLEF 2012.</p><p>In general, all the techniques evaluated in this study improve the performance, which shows the added value of relevance feedback. Text-based relevance feedback showed consistently good results. Visual-based techniques showed competitive performance for small shortlist sizes, underperforming in the remaining cases. The proposed multi-modal approaches showed promising results, slightly outperforming the text-based one, but without statistical significance.</p><p>More fusion techniques will be evaluated in the future. Comparison to manual query refinement by users is also planned, to assess relevance feedback as a concept in medical image retrieval. The addition of semantic search is also of interest, to take advantage of the structured knowledge of medical ontologies such as RadLex (Radiology Lexicon) and MeSH (Medical Subject Headings).</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Mean average precision per search iteration for k = 5.</figDesc><graphic coords="2,314.12,87.87,220.56,132.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Mean average precision per search iteration for k = 20.</figDesc><graphic coords="3,58.41,263.66,220.75,132.91" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Mean average precision per search iteration for k = 50.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Mean average precision per search iteration for k = 100.</figDesc><graphic coords="3,314.12,87.87,220.75,132.91" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Best mAP scores</figDesc><table><row><cell>Run</cell><cell>k = 5</cell><cell>k = 20</cell><cell>k = 50</cell><cell>k = 100</cell></row><row><cell>text</cell><cell>0.197 (1)</cell><cell>0.2544 (4)</cell><cell>0.3107 (3)</cell><cell>0.3349 (4)</cell></row><row><cell>visual lf</cell><cell>0.2099 (2)</cell><cell>0.2243 (3)</cell><cell>0.2405 (4)</cell><cell>0.2553 (3)</cell></row><row><cell>visual roc</cell><cell>0.2096 (2)</cell><cell>0.2187 (2)</cell><cell>0.2249 (3)</cell><cell>0.2268 (2)</cell></row><row><cell>mixed lf</cell><cell>0.1971 (3)</cell><cell>0.2606 (4)</cell><cell>0.3079 (4)</cell><cell>0.3487 (3)</cell></row><row><cell>mixed roc</cell><cell>0.1947 (1)</cell><cell>0.2635 (4)</cell><cell>0.3207 (4)</cell><cell>0.3466 (4)</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">ACKNOWLEDGEMENTS</head><p>This work was supported by the EU 7th Framework Program in the context of the Khresmoi project (grant 257528).</p></div>
			</div>

			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Mammogram retrieval: Image selection strategy of relevance feedback for locating similar lesions</title>
		<author>
			<persName><forename type="first">C.-C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P.-J</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-Y</forename><surname>Gwo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-H</forename><surname>Wei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Digital Library Systems (IJDLS)</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="45" to="53" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Fusion techniques for combining textual and visual information retrieval</title>
		<author>
			<persName><forename type="first">A</forename><surname>Depeursinge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Springer International Series On Information Retrieval</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Clough</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Deselaers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Caputo</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="95" to="114" />
		</imprint>
	</monogr>
	<note>ImageCLEF</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The medGIFT group in ImageCLEFmed 2012</title>
		<author>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Markonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Eggel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2012</title>
				<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Bag of colors for biomedical document image classification</title>
		<author>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Markonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Medical Content-based Retrieval for Clinical Decision Support, MCBR-CDS 2012</title>
		<title level="s">Lecture Notes in Computer Sciences (LNCS</title>
		<editor>
			<persName><forename type="first">H</forename><surname>Greenspan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2013-10">Oct. 2013</date>
			<biblScope unit="page" from="110" to="121" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The medGIFT group in ImageCLEFmed 2013</title>
		<author>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Markonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Schaer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Eggel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2013 (Cross Language Evaluation Forum)</title>
				<imprint>
			<date type="published" when="2013-09">September 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Relevance feedback in content-based image retrieval: a survey</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M</forename><surname>Allinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook on Neural Information Processing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="433" to="469" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">User tests for assessing a medical image retrieval system: A pilot study</title>
		<author>
			<persName><forename type="first">D</forename><surname>Markonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Baroz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Ruiz De Castaneda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Boyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">MEDINFO</title>
		<imprint>
			<date type="published" when="2013">2013. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A survey on visual information search behavior and requirements of radiologists</title>
		<author>
			<persName><forename type="first">D</forename><surname>Markonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Holzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dungs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vargas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Langs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kriewel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Methods of Information in Medicine</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="539" to="548" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Overview of the ImageCLEF 2012 medical image retrieval and classification tasks</title>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kalpathy-Cramer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Demner Fushman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Antani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Eggel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2012 (Cross Language Evaluation Forum)</title>
				<imprint>
			<date type="published" when="2012-09">September 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Strategies for positive and negative relevance feedback in image retrieval</title>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Squire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marchand-Maillet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pun</surname></persName>
		</author>
		<idno>00.01</idno>
		<imprint>
			<date type="published" when="2000-01">Jan. 2000</date>
			<publisher>rue Général Dufour</publisher>
			<biblScope unit="volume">24</biblScope>
			<pubPlace>CH-1211 Genève, Switzerland</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Computer Vision Group, Computing Centre, University of Geneva</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learning from user behavior in image retrieval: Application of the market basket analysis</title>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Squire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="65" to="77" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note>Special Issue on Content-Based Image Retrieval</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Rahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhattacharya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Desai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Technology in Biomedicine</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="58" to="69" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Relevance feedback in information retrieval</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Rocchio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The SMART Retrieval System, Experiments in Automatic Document Processing</title>
				<meeting><address><addrLine>Englewood Cliffs, New Jersey, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Prentice Hall</publisher>
			<date type="published" when="1971">1971</date>
			<biblScope unit="page" from="313" to="323" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Relevance feedback techniques in interactive content-based image retrieval</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Rui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mehrotra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Storage and Retrieval for Image and Video Databases VI</title>
				<editor>
			<persName><forename type="first">I</forename><forename type="middle">K</forename><surname>Sethi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Jain</surname></persName>
		</editor>
		<imprint>
			<publisher>SPIE</publisher>
			<date type="published" when="1997-12">Dec. 1997</date>
			<biblScope unit="volume">3312</biblScope>
			<biblScope unit="page" from="25" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A survey on the use of relevance feedback for information access systems</title>
		<author>
			<persName><forename type="first">I</forename><surname>Ruthven</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lalmas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Knowledge Engineering Review</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">02</biblScope>
			<biblScope unit="page" from="95" to="145" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Improving retrieval performance by relevance feedback</title>
		<author>
			<persName><forename type="first">G</forename><surname>Salton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Buckley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Readings in information retrieval</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">5</biblScope>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Combination of multiple searches</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Shaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Fox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">TREC-2: The Second Text REtrieval Conference</title>
				<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="243" to="252" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Content-based query of image databases: inspirations from text retrieval</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Squire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Selected Papers from The 11th Scandinavian Conference on Image Analysis (SCIA &apos;99)</title>
				<editor>
			<persName><forename type="first">B</forename><forename type="middle">K</forename><surname>Ersboll</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Johansen</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="1193" to="1198" />
		</imprint>
	</monogr>
	<note>Pattern Recognition Letters</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Image digestion and relevance feedback in the ImageRover WWW search engine</title>
		<author>
			<persName><forename type="first">L</forename><surname>Taycher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>La Cascia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sclaroff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="85" to="94" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Iterative refinement by relevance feedback in content-based digital image retrieval</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Wood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">W</forename><surname>Campbell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">T</forename><surname>Thomas</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="13" to="20" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
