=Paper=
{{Paper
|id=Vol-1176/CLEF2010wn-MLQA10-Voorhees2010
|storemode=property
|title=Reflections on TREC QA
|pdfUrl=https://ceur-ws.org/Vol-1176/CLEF2010wn-MLQA10-Voorhees2010.pdf
|volume=Vol-1176
|dblpUrl=https://dblp.org/rec/conf/clef/Voorhees10
}}
==Reflections on TREC QA==
Ellen Voorhees, NIST, USA

The TREC (later TAC) Question Answering track reinvigorated the question answering research community, fostering extensive research on different question types and on finding answers in different kinds of corpora. Parallel evaluations extended the research further to include a variety of languages and media types. In recent years, the TAC QA track evolved into the Knowledge Base Population (KBP) track, where the task is (essentially) factoid question answering combined with entity resolution: answer strings must be resolved into the appropriate nodes of a pre-existing ontology. KBP is itself viewed as an initial test of capability for Machine Reading systems. In this talk I will reflect on what the lessons learned from the QA track suggest for designing a Machine Reading evaluation, with the goal of fostering a vibrant Machine Reading research community.