<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Appendix D Results of the Domain Specific Track</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giorgio Maria Di Nunzio</string-name>
          <email>dinunzio@dei.unipd.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicola Ferro</string-name>
          <email>ferro@dei.unipd.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Engineering, University of Padua, Italy</institution>
        </aff>
      </contrib-group>
      <fpage>335</fpage>
      <lpage>376</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This appendix reports the results of the CLEF 2008 Domain Specific Track.</p>
      <p>Individual Experiment Results and Graphs. This section provides the individual results for each official experiment. For each experiment, the following tables and graphs are shown:
- Overall statistics and information
- Interpolated recall vs precision averages plot
- Average precision statistics and box plot
- Average precision comparison to median plot
- Document cutoff levels vs precision at DCL plot
- R-Precision statistics and box plot
- R-Precision comparison to median plot</p>
      <p>Both topics and experiments are identified by DOIs. The prefix for the DOI of a topic is 10.2452. The following example shows how to build the DOI of a topic from its number: for topic 200-AH, the corresponding DOI is 10.2452/200-AH.</p>
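      <p>A minimal sketch of this DOI construction rule (the function name is illustrative; only the 10.2452 prefix and the topic-number format come from the text):</p>

```python
def topic_doi(topic_number: str, prefix: str = "10.2452") -> str:
    """Build the DOI of a topic from its number, e.g. 200-AH -> 10.2452/200-AH."""
    return f"{prefix}/{topic_number}"

print(topic_doi("200-AH"))  # 10.2452/200-AH
```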
    </sec>
    <sec id="sec-2">
      <title>United States 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-3">
      <title>CLEF2008.CHESHIRE.BRKMOENTD</title>
    </sec>
    <sec id="sec-4">
      <title>United States 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-5">
      <title>CLEF2008.CHESHIRE.BRKMOENTDN</title>
    </sec>
    <sec id="sec-6">
      <title>United States 10.2415/DS-MONO-DE</title>
    </sec>
    <sec id="sec-7">
      <title>CLEF2008.CHESHIRE.BRKMODETD</title>
    </sec>
    <sec id="sec-8">
      <title>United States 10.2415/DS-MONO-DE</title>
    </sec>
    <sec id="sec-9">
      <title>CLEF2008.CHESHIRE.BRKMODETDN 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-10">
      <title>CLEF2008.UNINE.UNINEDSEN1</title>
      <p>10.2415/DS-MONO-DE</p>
    </sec>
    <sec id="sec-11">
      <title>CLEF2008.CHEMNITZ.CUT_MERGED 10.2415/DS-MONO-DE</title>
    </sec>
    <sec id="sec-12">
      <title>CLEF2008.CHEMNITZ.CUT_MERGED_THES 10.2415/DS-MONO-DE</title>
    </sec>
    <sec id="sec-13">
      <title>CLEF2008.UNINE.UNINEDSDE1</title>
      <p>10.2415/DS-MONO-DE</p>
    </sec>
    <sec id="sec-14">
      <title>CLEF2008.UNINE.UNINEDSDE2</title>
      <p>10.2415/DS-MONO-DE</p>
    </sec>
    <sec id="sec-15">
      <title>CLEF2008.UNINE.UNINEDSDE3</title>
      <p>10.2415/DS-MONO-DE</p>
    </sec>
    <sec id="sec-16">
      <title>CLEF2008.UNINE.UNINEDSDE4</title>
      <p>10.2415/DS-MONO-RU</p>
    </sec>
    <sec id="sec-17">
      <title>CLEF2008.CHEMNITZ.CUT_MERGED</title>
      <p>10.2415/DS-MONO-EN</p>
    </sec>
    <sec id="sec-18">
      <title>CLEF2008.AMSTERDAM.UAMSBASELINE 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-19">
      <title>CLEF2008.AMSTERDAM.UAMSCONCEPTMODELS 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-20">
      <title>CLEF2008.AMSTERDAM.UAMSPARSRELMODELS 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-21">
      <title>CLEF2008.AMSTERDAM.UAMSRELMODELS 10.2415/DS-MONO-EN</title>
    </sec>
    <sec id="sec-22">
      <title>CLEF2008.CHEMNITZ.CUT_MERGED 10.2415/DS-MONO-EN; CLEF2008.CHEMNITZ.CUT_MERGED_THES (chemnitz)</title>
      <p>[Table of submitted experiments (continued): topic language column (en, de, ru) and topic fields column (T, TD, TDN) for each run; the per-row alignment was lost in extraction.]</p>
      <sec id="sec-22-1">
        <title>Track Overview Results and Graphs</title>
        <p>Domain-Specific Monolingual English Task</p>
        <sec id="sec-22-1-2">
          <title>Top 5 Participants − Standard</title>
        </sec>
        <sec id="sec-22-1-3">
          <title>Recall Levels vs Mean Interpolated Precision 100%</title>
          <p>[Figure: Domain-Specific Monolingual English Task, Top 5 Participants, Standard Recall Levels vs Mean Interpolated Precision. Legend: chemnitz [Experiment CUT_MERGED; Pooled], amsterdam [Experiment UAMSCONCEPTMODELS; MAP 29.22%; Pooled], amsterdam [Experiment UAMSRELMODELS; MAP 23.96%; Pooled], amsterdam [Experiment UAMSPARSRELMODELS; MAP 23.96%; Pooled], amsterdam [Experiment UAMSBASELINE; MAP 20.77%; Pooled], hug [Experiment HUGMONO; MAP 17.14%; Not Pooled].]</p>
          <p>[Figure: Domain-Specific Monolingual English Task, Tukey T test with "top group" highlighted (10.2455/TUKEY_T_TEST.960B6B5E536CA28011AB5EDCFC0FC38A).]</p>
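          <p>The interpolated precision used in these recall-precision plots follows the standard rule: precision at recall level r is the highest precision observed at any recall greater than or equal to r. A minimal sketch on toy data (not the official evaluation code):</p>

```python
def interpolated_precision(points, recall_levels):
    """points: (recall, precision) pairs measured down one ranked list.
    Interpolated precision at level r = max precision at any recall at or above r."""
    out = []
    for r in recall_levels:
        candidates = [p for rec, p in points if rec >= r]
        out.append(max(candidates) if candidates else 0.0)
    return out

pts = [(0.2, 1.0), (0.4, 0.67), (0.6, 0.5), (0.8, 0.44), (1.0, 0.5)]
print(interpolated_precision(pts, [0.0, 0.5, 1.0]))  # [1.0, 0.5, 0.5]
```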
        </sec>
        <sec id="sec-22-1-4">
          <title>CUT_MERGED</title>
        </sec>
        <sec id="sec-22-1-5">
          <title>UNINEDSEN1</title>
        </sec>
        <sec id="sec-22-1-6">
          <title>CUT_MERGED_THES</title>
        </sec>
        <sec id="sec-22-1-7">
          <title>BRKMOENTD</title>
        </sec>
        <sec id="sec-22-1-8">
          <title>BRKMOENTDN 976 969</title>
        </sec>
        <sec id="sec-22-1-9">
          <title>UAMSCONCEPTMODELS</title>
        </sec>
        <sec id="sec-22-1-10">
          <title>UAMSPARSRELMODELS</title>
        </sec>
        <sec id="sec-22-1-11">
          <title>UAMSRELMODELS</title>
        </sec>
        <sec id="sec-22-1-12">
          <title>UAMSBASELINE</title>
          <p>HUGMONO</p>
        </sec>
      </sec>
      <sec id="sec-22-2">
        <title>Track Overview Results and Graphs</title>
        <p>Domain-Specific Monolingual English Task</p>
        <sec id="sec-22-2-2">
          <title>Top 5 Participants − Retrieved documents vs Mean Precision</title>
        </sec>
        <sec id="sec-22-2-3">
          <title>Retrieved 100</title>
        </sec>
        <sec id="sec-22-2-4">
          <title>Documents (logarithmic</title>
          <p>[Figure: Domain-Specific Monolingual English Task, Top 5 Participants, Comparison to Median R-Precision by Topic (Topics 201-DS to 225-DS). Legend: chemnitz [Experiment CUT_MERGED; R-Prec 41.07%; Pooled], unine [Experiment UNINEDSEN1; R-Prec 39.87%; Pooled], darmstadt [Experiment 976; R-Prec 37.76%; Pooled], cheshire [Experiment BRKMOENTD; R-Prec 35.31%; Pooled], amsterdam [Experiment UAMSCONCEPTMODELS; R-Prec 33.45%; Pooled].]</p>
          <p>[Figure: Domain-Specific Monolingual English Task, Box Plot of the Topics. Legend: chemnitz [Experiment CUT_MERGED; R-Prec 41.07%; Pooled], chemnitz [Experiment CUT_MERGED_THES; R-Prec 40.89%; Pooled], unine [Experiment UNINEDSEN1; R-Prec 39.87%; Pooled], darmstadt [Experiment 976; R-Prec 37.76%; Pooled], darmstadt [Experiment 969; R-Prec 35.35%; Pooled].]</p>
          <p>[Figure: Domain-Specific Monolingual English Task, Tukey T test with "top group" highlighted.]</p>
        </sec>
        <sec id="sec-22-2-5">
          <title>CUT_MERGED</title>
        </sec>
        <sec id="sec-22-2-6">
          <title>CUT_MERGED_THES</title>
        </sec>
        <sec id="sec-22-2-7">
          <title>UNINEDSEN1 976 969</title>
        </sec>
        <sec id="sec-22-2-8">
          <title>BRKMOENTD</title>
        </sec>
        <sec id="sec-22-2-9">
          <title>BRKMOENTDN</title>
        </sec>
        <sec id="sec-22-2-10">
          <title>UAMSCONCEPTMODELS</title>
        </sec>
        <sec id="sec-22-2-11">
          <title>UAMSPARSRELMODELS</title>
        </sec>
        <sec id="sec-22-2-12">
          <title>UAMSRELMODELS</title>
        </sec>
        <sec id="sec-22-2-13">
          <title>UAMSBASELINE</title>
          <p>[Figure: Domain-Specific Bilingual Russian Task, Distribution of the Topics of the Experiment (Topic Identifier 201-DS to 225-DS).]</p>
          <p>[Figures for experiment BRKBIENRUTD: Domain-Specific Bilingual Russian Task, Retrieved documents vs Mean Precision; Precision averages (%) for individual queries.]</p>
          <p>[Figures: Domain-Specific Bilingual Russian Task, Box plot and Distribution of the Topics of the Experiment.]</p>
          <p>[Figures for experiment BRKBIENRUTDN: Domain-Specific Bilingual Russian Task, Retrieved documents vs Mean Precision; Box plot of the Topics of the Experiment; Precision averages (%) for individual queries.]</p>
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment (Topics 201-DS to 225-DS).]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 65.60
10 docs 64.40
15 docs 59.20
20 docs 56.60
30 docs 52.93
100 docs 41.08
200 docs 30.50
500 docs 18.83
1000 docs 11.83
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 34.34</p>
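          <p>The two measures reported in these tables can be sketched as follows (toy data and illustrative function names, not the official trec_eval implementation): precision at a document cutoff level is the fraction of the top-k retrieved documents that are relevant, and R-Precision is that same fraction computed at cutoff R, the number of relevant documents for the topic.</p>

```python
def precision_at(ranking, relevant, k):
    """Precision at document cutoff level k: share of the top-k documents that are relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def r_precision(ranking, relevant):
    """Precision after R documents retrieved, where R is the number of relevant documents."""
    return precision_at(ranking, relevant, len(relevant))

ranking = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d1", "d3", "d4"}
print(precision_at(ranking, relevant, 5))  # 0.6
print(r_precision(ranking, relevant))      # 2 of the top 3 are relevant
```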
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; R-Precision by topic (201-DS to 225-DS).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 2,977
Priority: 2
Query Construction: AUTOMATIC
Source Language: English
Topic Fields: title, description
Pooled: false</p>
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision and R-Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 55.20
10 docs 57.60
15 docs 54.93
20 docs 52.60
30 docs 50.67
100 docs 36.16
200 docs 28.36
500 docs 18.39
1000 docs 11.91
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 33.83</p>
          <p>[Figure: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs R-Precision).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 2,721</p>
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs Average Precision).]</p>
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 51.20
10 docs 48.00
15 docs 46.93
20 docs 45.60
30 docs 42.53
100 docs 33.40
200 docs 24.92
500 docs 16.33
1000 docs 10.88
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 29.78</p>
          <p>[Figures: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; R-Precision by topic; Recall Levels vs Mean Interpolated Precision.]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,617
Priority: 1
Query Construction: AUTOMATIC
Source Language: German
Topic Fields: title, description
Pooled: false</p>
          <p>[Figures for experiment BRKMUDETD: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision and R-Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 22.40
10 docs 16.00
15 docs 17.87
20 docs 17.80
30 docs 18.13
100 docs 15.12
200 docs 12.84
500 docs 9.48
1000 docs 6.47
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 16.03</p>
          <p>[Figure: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment.]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,490
Priority: 2
Query Construction: AUTOMATIC
Source Language: German
Topic Fields: title, description, narrative
Pooled: false
Multilingual from German using TREC 2 and Blind Feedback on title and description and narrative (0.0231; 0.1673)</p>
          <p>[Figures for experiment BRKMUDETDN: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision and R-Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 12.80
10 docs 14.00
15 docs 16.80
20 docs 16.20
30 docs 15.33
100 docs 14.48
200 docs 11.44
500 docs 8.57
1000 docs 5.96
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 14.04</p>
          <p>[Figure for experiment BRKMUDETDN: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs R-Precision).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,834</p>
          <p>[Figure for experiment BRKMUENTD: Domain-Specific Multilingual Task, Standard Recall Levels vs Mean Interpolated Precision.]</p>
          <p>[Figures for experiment BRKMUENTD: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 16.80
10 docs 19.60
15 docs 19.73
20 docs 20.60
30 docs 22.67
100 docs 19.44
200 docs 16.86
500 docs 11.65
1000 docs 7.34
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 19.54</p>
          <p>[Figures for experiment BRKMUENTD: Domain-Specific Multilingual Task, Retrieved documents vs Mean Precision; Box plot of the Topics of the Experiment; R-Precision averages (%) for individual queries.]</p>
          <p>[Figure for experiment BRKMUENTD: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs R-Precision).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,774
Priority: 4
Query Construction: AUTOMATIC
Source Language: English
Topic Fields: title, description, narrative
Pooled: false</p>
          <p>[Figure for experiment BRKMUENTDN: Domain-Specific Multilingual Task, Standard Recall Levels vs Mean Interpolated Precision.]</p>
          <p>[Figures for experiment BRKMUENTDN: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 19.20
10 docs 20.40
15 docs 21.33
20 docs 22.80
30 docs 23.20
100 docs 20.56
200 docs 16.56
500 docs 11.30
1000 docs 7.10
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 18.70</p>
          <p>[Figures for experiment BRKMUENTDN: Domain-Specific Multilingual Task, Retrieved documents vs Mean Precision; Box plot of the Topics of the Experiment; R-Precision averages (%) for individual queries (per-topic alignment lost in extraction).]</p>
          <p>[Figure for experiment BRKMUENTDN: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs R-Precision).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,385
Priority: 5
Query Construction: AUTOMATIC
Source Language: Russian
Topic Fields: title, description
Pooled: false
Multilingual from Russian using TREC 2 and Blind Feedback on title and description (0.0209; 0.1509)</p>
          <p>[Figures for experiment BRKMURUTD: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 15.20
10 docs 14.00
15 docs 12.27
20 docs 12.60
30 docs 12.80
100 docs 12.28
200 docs 10.52
500 docs 7.90
1000 docs 5.54
R-Precision (precision after R documents retrieved, where R is the number of relevant documents): 12.19</p>
          <p>[Figures for experiment BRKMURUTD: Domain-Specific Multilingual Task, Retrieved documents vs Mean Precision; Box plot of the Topics of the Experiment.]</p>
          <p>R-Precision averages (%) for individual queries:
201-DS: 7.69; 202-DS: 19.57; 203-DS: 5.53; 204-DS: 10.87; 205-DS: 1.92; 206-DS: 13.39; 207-DS: 14.01; 208-DS: 0.81; 209-DS: 54.90; 210-DS: 5.75; 211-DS: 28.76; 212-DS: 11.91; 213-DS: 9.95; 214-DS: 3.64; 215-DS: 19.23; 216-DS: 3.88; 217-DS: 8.88; 218-DS: 0.46; 219-DS: 9.73; 220-DS: 11.36; 221-DS: 28.27; 222-DS: 11.27; 223-DS: 3.57; 224-DS: 0.98; 225-DS: 18.30.</p>
          <p>[Figure for experiment BRKMURUTD: Domain-Specific Multilingual Task, Distribution of the Topics of the Experiment (Number of Topics of the Experiment vs R-Precision).]</p>
          <p>Overall statistics for 25 queries (total number of documents over all queries):
Retrieved: 25,000
Relevant: 4,715
Relevant retrieved: 1,459
Priority: 6
Query Construction: AUTOMATIC
Source Language: Russian
Topic Fields: title, description, narrative
Pooled: false
Multilingual from Russian using TREC 2 and Blind Feedback on title and description and narrative (0.0226; 0.1506)</p>
          <p>[Figures for experiment BRKMURUTDN: Domain-Specific Multilingual Task, Box plot and Distribution of the Topics of the Experiment; Average Precision averages (%) for individual queries.]</p>
          <p>Docs Cutoff Levels vs Precision at DCL (%)
5 docs 10.40
10 docs 10.00
15 docs 12.53
20 docs 13.80
30 docs 14.80
100 docs 13.96
200 docs 11.88
500 docs 8.47
1000 docs 5.84
R-Precision (precision after R documents retrieved, where R is the number of relevant documents)</p>
          <p>[Figure for experiment BRKMURUTDN: Domain-Specific Multilingual Task, Box plot of the Topics of the Experiment (interquartile range, mean, mean with no outliers, std with no outliers).]</p>
          <p>Precision averages (%) for individual queries (values for topics 214-DS to 225-DS lost in extraction):
201-DS: 16.72; 202-DS: 18.48; 203-DS: 7.54; 204-DS: 2.17; 205-DS: 1.92; 206-DS: 5.51; 207-DS: 10.14; 208-DS: 8.06; 209-DS: 54.90; 210-DS: 4.42; 211-DS: 24.03; 212-DS: 19.57; 213-DS: 7.33.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          0.0064 Bilingual English to
          <article-title>Russian using TREC2 with Blind 0.0775 Feedback on title and description 0.0047 Bilingual English to Russian using TREC2 with Blind 0.0526 Feedback on title and description and narrative 0.0142 Multilingual from German using TREC 2 and Blind 0.1978 Feedback on title and description 0.0526 Multilingual from English using TREC 2 and Blind 0.2062 Feedback on title and description 0.0501 Multilingual from English using TREC 2 and Blind 0.2023 Feedback on title and description</article-title>
          and narrative
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>