<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>SUBJECTIVE MODEL ANSWER GENERATION TOOL FOR DIGITAL EVALUATION SYSTEMS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shubham</string-name>
          <email>Shubhamlive1010@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dr. Arpana Rawal</string-name>
          <email>arpana.rawal@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dr. Ani Thomas</string-name>
          <email>ani.thomas@bitdurg.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Professor, Bhilai Institute of Technology</institution>
          ,
          <addr-line>Durg, +91-9893165872</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Professor, Bhilai Institute of Technology</institution>
          ,
          <addr-line>Durg, +91-9907180993</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Research Scholar, Bhilai Institute of Technology</institution>
          ,
          <addr-line>Durg, +91-9644026902</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Automated subjective answer assessment in modern digital evaluation environments promises structural consistency, but it distorts the very nature of expressing the complex and context-rich information put up for evaluation. In modern teaching-learning environments, with the wide variety of bias observed while fabricating human-scripted Memoranda-Of-Instructions, it becomes difficult to evaluate subjective answers with appropriate justification. Answer evaluation systems have seen extensive research by academicians over the past few decades. On the other hand, research on subjective model answer generation is still in its infancy. Lately, an algorithm for subjective model answer generation has become necessary for developing a generic framework covering all types of subjective questions. In this paper, we describe one such algorithm for generating model answers for all types of descriptive (subjective) questions from a given text corpus. CCS Concepts: • Information systems → Information retrieval → Retrieval tasks and goals → Question answering.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>As the modern education system is augmented with digital
environments around the globe, evaluation systems are also
being progressively digitized.</p>
      <p>
        With the advent of automated subjective answer evaluation
tools such as Electronic Essay Rater (E-rater) by Burstein, Kukich,
Wolff, Chi and Chodorow (1998), Conceptual Rater (C-rater)
(Valenti et al., 2003), Intelligent Essay Assessor (IEA) (Valenti et
al., 2003), Educational Testing Service (ETS-I) by Whittington
and Hunt (1999), BETSY (Valenti et al., 2003), and Schema Extract
Analyze and Report (SEAR) (Christie, 1999), a drastic drift is
seen in the preparation of question paper manuscripts, from MCQ
questionnaires to a blend of both objective and subjective questions
[
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]. Academicians are observed to spend more of their time
setting question papers and evaluating answers than
analyzing the scores and counselling the students. According to
recent statistics, it takes one month on average to evaluate 700
candidate answer scripts for six subjects in total, and thus
almost two to three months to declare results for the same lot of
students. Apart from this, even with expert evaluators it is not
possible for anyone to justify which answer is better and why.
Envisioning such a series of hurdles, an attempt is being made to
ease the task of manual answer generation by obtaining machine-generated
answers to some question categories.
      </p>
      <p>Copyright © 2017 for the individual papers by the papers’ authors.
Copying permitted for private and academic purposes. This volume is
published and copyrighted by its editors.</p>
      <p>The rest of this paper is organized as follows: Section 2 discusses
the preprocessing issues that must be settled before building a system
for model answer generation. Section 3 outlines the subjective
answer generation algorithm. Section 4 suggests further
applications and developments possible in the near future.</p>
      <p>2. PRE-PROCESSING ISSUES</p>
      <p>The considerable issues that need to be investigated in
depth before building a prototype tool for generating model
answers are enumerated below:</p>
      <p>
        Language Support: One of the design issues in the algorithm
demands concurrent modification of passive data objects in an
already existing dictionary while checking for terms in that
domain-specific vocabulary, in an attempt to expand the
vocabulary at runtime. Not all languages support the
above-mentioned feature. Hence, there arises a need to choose an
appropriate language for tool development. This issue can also be
resolved by using the method described by D. Clarke et al. in
their article [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Supporting Domain-specific Vocabularies: Using ‘WordNet’
as the source of open-domain vocabulary may seem optimal at
first, but it usually hinders the generation of the most accurate
answers when information retrieval is demanded for a narrowly
specified subject domain. The information retrieval model built for
Question-Answering (QA) systems by IBM’s statistical system
faces its greatest hindrance in the last step, trimming the set
of optimal sentences from the ranked set of passages obtained in
the previous step. The best alternative for reducing such system
errors is to use restricted domains as background knowledge
rather than open domains [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In another exhaustive survey put up
by L. Hirschman and R. Gaizauskas, the authors emphasized the
crucial role of passages in the extraction of answers to subjective
questions through IR techniques [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>3. SUBJECTIVE ANSWER GENERATION ALGORITHM</p>
      <p>The syntax used in the pseudocode borrows some elements
from Java syntax. All input and output objects are specified in
bold. Language constructs such as conditionals and loops use
italics. Square brackets denote the index for storing and
accessing array elements. Assignment is denoted by the
symbolic notation ←. The various variables and
procedures used in this pseudocode are described as follows:
      </p>
      <p>Q is the raw question string to be used for finding
answers.
K is the list of keyword strings present in question Q,
which can be generated using open-source NLP tools.
C is the list of corpus sections (as strings), which can be
sections or chapters defined in a standard textbook.
V is the source vocabulary, which can be generated for
all the keywords of corpus C using either WordNet for
an enhanced domain vocabulary or manual human intervention
for a restricted domain.</p>
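      <p>For concreteness, the four inputs described above can be held in plain data structures. The following Python sketch uses invented sample values (the question, sections, and synonym lists are illustrative assumptions, not taken from the paper):</p>

```python
# Hypothetical sample inputs mirroring the descriptions of Q, K, C and V.
Q = "What is information retrieval?"          # raw question string
K = ["information", "retrieval"]              # keywords from Q (hand-picked here)
C = [                                         # corpus sections, e.g. textbook chapters
    "Information retrieval finds relevant documents. It ranks them by score.",
    "Databases store structured records. Queries use SQL.",
]
V = {                                         # restricted-domain vocabulary: term -> synonyms
    "information": ["data", "knowledge"],
    "retrieval": ["search", "lookup"],
}
```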
      <p>Get-Entry-Points: Function for initial filtering, based
on the count of question keywords K found in different
sections of the text corpus.</p>
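      <p>A minimal sketch of such an entry-point filter, assuming it returns the indices of matching sections ranked by keyword count (the paper does not fix the return type):</p>

```python
def get_entry_points(keywords, corpus_sections):
    """Rank corpus sections by question-keyword frequency; drop non-matches."""
    scored = []
    for idx, section in enumerate(corpus_sections):
        text = section.lower()
        count = sum(text.count(kw.lower()) for kw in keywords)
        if count:
            scored.append((count, idx))
    scored.sort(reverse=True)           # highest keyword count first
    return [idx for _, idx in scored]

sections = [
    "Information retrieval finds relevant documents.",
    "Databases store structured records.",
]
print(get_entry_points(["information", "retrieval"], sections))  # → [0]
```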
      <p>Get-Seed-Sentences: Function for getting the seed
sentences present in a particular section, based on the
keywords K and the keyword vocabulary for the given question Q.</p>
      <p>Get-Section-Sentences: Function returning a list of all
sentences present in the text paragraphs of a section.</p>
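      <p>The two helpers can be sketched as follows; the naive sentence splitter and substring matching are simplifying assumptions standing in for proper NLP tooling:</p>

```python
import re

def get_section_sentences(section_text):
    """Split a section into sentences (naive split on ., ?, !)."""
    return [s.strip() for s in re.split(r"[.?!]", section_text) if s.strip()]

def get_seed_sentences(section_text, keywords, vocab):
    """Keep sentences containing a question keyword or one of its synonyms."""
    terms = {k.lower() for k in keywords}
    for k in keywords:
        terms.update(w.lower() for w in vocab.get(k, []))
    return [s for s in get_section_sentences(section_text)
            if any(t in s.lower() for t in terms)]

section = "IR systems rank documents. Search engines are IR systems. SQL is different."
print(get_seed_sentences(section, ["retrieval"], {"retrieval": ["search"]}))
# → ['Search engines are IR systems']
```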
      <p>Get-Vocab: This procedure returns a list of strings for
each term supplied. Each list in the returned composite
list contains the synonyms of the respective word in the
input list.</p>
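      <p>Assuming the vocabulary source behaves like a mapping from terms to synonym lists (a plain dict stands in here for WordNet or a hand-built domain vocabulary), Get-Vocab reduces to a per-term lookup:</p>

```python
def get_vocab(terms, vocab_source):
    """Return one synonym list per input term, in the same order."""
    return [vocab_source.get(t, []) for t in terms]

V = {"retrieval": ["search", "lookup"], "ranking": ["ordering"]}
print(get_vocab(["retrieval", "ranking", "index"], V))
# → [['search', 'lookup'], ['ordering'], []]
```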
      <p>Get-Keywords: This procedure returns a list of related
keywords based on the NLP dependencies provided by NLP
parsers.</p>
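      <p>The paper delegates keyword extraction to NLP dependency parsers; as a hedged stand-in, a simple stopword filter illustrates the expected interface:</p>

```python
# Small illustrative stopword list; a dependency parser would be used in practice.
STOPWORDS = {"what", "is", "the", "a", "an", "of", "in", "are", "how"}

def get_keywords(sentence):
    """Return the content words of a sentence (a rough approximation)."""
    words = [w.strip(".,?!").lower() for w in sentence.split()]
    return [w for w in words if w and w not in STOPWORDS]

print(get_keywords("What is the architecture of an IR system?"))
# → ['architecture', 'ir', 'system']
```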
      <p>Get-Co-Occurring-NP: This procedure returns the
co-occurring NPs after performing anaphora resolution on the
supplied text.</p>
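      <p>Real anaphora resolution requires a coreference pipeline; the sketch below only collects runs of capitalized words as candidate noun phrases, purely to illustrate the expected input and output shape:</p>

```python
import re

def get_co_occurring_np(text):
    """Collect capitalized multi-word runs as crude candidate noun phrases."""
    return [m.strip() for m in re.findall(r"(?:[A-Z][a-z]+\s?)+", text)]

print(get_co_occurring_np("Information Retrieval ranks documents; Boolean Models are simpler."))
# → ['Information Retrieval', 'Boolean Models']
```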
      <p>The algorithm for generating answers is as follows:</p>
      <p>ALGORITHM Generate-Answer is
INPUT: Question Q with keywords K,
       Text Corpus C as a list of section fragments,
       Vocabulary source V
OUTPUT: Answer A comprising concatenated fragments</p>
    </sec>
    <sec id="sec-2">
      <p>E ← Get-Entry-Points(Q, C)
CREATE an empty list Answer_Fragments of type String
FOR i = 0 to E.size do
    Cur_Segment ← E[i]
    Seed_Sentences ← Get-Seed-Sentences(E[i], K, C)
    Section_Sentences ← Get-Section-Sentences(E[i], C)
    CREATE an empty list SectionWise_Fragments of type String
    FOR j = 0 to Seed_Sentences.size do</p>
    </sec>
    <sec id="sec-3">
      <p>Seed_Index ← Get-Seed-Index(Section_Sentences, Seed_Sentences[j])
CREATE List Seed_Vocab of type String
CREATE List Co_Occurring_NP of type String
Seed_Vocab ← Get-Vocab(Get-Keywords(Section_Sentences[Seed_Index]), V)
Left_Marker ← Seed_Index
Right_Marker ← Seed_Index
WHILE there exists a String from Seed_Vocab or Co_Occurring_NP
      in Section_Sentences[Left_Marker]
    Cur_Co_Occurring_NP ← Get-Co-Occurring-NP(Section_Sentences[Left_Marker])
    add Cur_Co_Occurring_NP to Co_Occurring_NP
    Left_Marker ← Left_Marker - 1
    IF Left_Marker = 0
        break from while loop
    END IF
END WHILE
WHILE there exists a String from Seed_Vocab or Co_Occurring_NP
      in Section_Sentences[Right_Marker]
    Cur_Co_Occurring_NP ← Get-Co-Occurring-NP(Section_Sentences[Right_Marker])
    add Cur_Co_Occurring_NP to Co_Occurring_NP
    Right_Marker ← Right_Marker + 1
    IF Right_Marker = Section_Sentences.size
        break from while loop
    END IF
END WHILE
INITIALIZE Cur_Frag to empty String
FOR k = Left_Marker to Right_Marker
    Concatenate Section_Sentences[k] to Cur_Frag
END FOR
Add Cur_Frag to SectionWise_Fragments
END FOR
Remove duplicate sentences from SectionWise_Fragments
INITIALIZE Cur_Section_Answer to empty String
FOR j = 0 to SectionWise_Fragments.size do
    Concatenate SectionWise_Fragments[j] to Cur_Section_Answer
END FOR
Add Cur_Section_Answer to Answer_Fragments
END FOR
INITIALIZE A to empty String
FOR i = 0 to Answer_Fragments.size
    Concatenate Answer_Fragments[i] to A
END FOR</p>
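      <p>The core of the loop above is the window expansion around a seed sentence: Left_Marker and Right_Marker grow outward while vocabulary terms keep appearing in neighbouring sentences. A simplified Python sketch follows (the running co-occurring-NP set is folded into a fixed term set, which is an assumption, not the authors' implementation):</p>

```python
def contains_term(sentence, terms):
    """True if any vocabulary term appears in the sentence."""
    s = sentence.lower()
    return any(t.lower() in s for t in terms)

def expand_window(sentences, seed_index, seed_vocab):
    """Grow a fragment left and right of the seed sentence while terms match."""
    terms = set(seed_vocab)
    left = right = seed_index
    while left != 0 and contains_term(sentences[left - 1], terms):
        left -= 1
    while right != len(sentences) - 1 and contains_term(sentences[right + 1], terms):
        right += 1
    return " ".join(sentences[left:right + 1])   # concatenated answer fragment

sents = [
    "Databases differ from IR.",
    "IR ranks documents by relevance.",
    "Relevance is estimated with a scoring model.",
    "Cooking is unrelated.",
]
frag = expand_window(sents, 1, ["relevance", "ranks"])
print(frag)
```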
      <p>RETURN A</p>
      <p>4. FURTHER APPLICATIONS AND DEVELOPMENT</p>
      <p>
This tool is observed to provide answer fragments that compare
fairly well with model answers fabricated by human assessors. The
algorithm presented here is capable of generating answers with
high precision, depending on the vocabulary source, but other
parameters such as context continuity and context span must be
included in order to limit the locality of context and obtain more
accurate results with high recall. Software testing of the tool
provides promising results in performing fair and unbiased
evaluation of students’ answer scripts. Combining this algorithm
with a good answer evaluation approach can provide a robust answer
evaluation feature for automating digital evaluation systems.</p>
      <p>Another field of application for this tool is the evaluation of online
assignments at the institute level for analyzing students’ appraisals
on a continuous scale. The scope for upgrading such a tool
includes real-time answer generation for different types of
subjective questions presented in a wide variety of grammatical
styles and for versatile subject domains.</p>
      <p>This work was supported by the Research and Development
Laboratory, Department of Computer Science and Engineering at
Bhilai Institute of Technology, Durg, Chhattisgarh, India,
awaiting sponsorship from suitable funding agencies.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Valenti</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cucchiarelli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>An Overview of Current Research on Automated Essay Grading</article-title>
          .
          <source>Journal of Information Technology Education (JITE)</source>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>330</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Christie</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>Assessment of Essay Marking - focus on Style and Content</article-title>
          .
          <source>In 3rd International Computer Assisted Assessment Conference (CAA)</source>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Diekema</surname>
          </string-name>
          , Ozgur Yilmazel, and
          <string-name>
            <given-names>E.D.</given-names>
            <surname>Liddy</surname>
          </string-name>
          ,
          <year>2004</year>
          .
          <article-title>Minimal Ownership of Active Objects</article-title>
          .
          <source>In Proceedings of the 6th Asian Symposium on Programming Languages and Systems, APLAS</source>
          <year>2008</year>
          , Bangalore, pp.
          <fpage>139</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Parag</surname>
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Guruji</surname>
          </string-name>
          ,
          <string-name>
            <surname>Mrunal M. Pagnis</surname>
          </string-name>
          ,
          <string-name>
            <surname>Sayali</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Pawar</surname>
          </string-name>
          and
          <string-name>
            <surname>Prakash J. Kulkarni</surname>
          </string-name>
          , '
          <article-title>Evaluation of Subjective Answers Using GLSA Enhanced with Contextual Synonymy</article-title>
          ',
          <source>International Journal on Natural Language Computing (IJNLC)</source>
          Vol.
          <volume>4</volume>
          , No.1,
          <year>February 2015</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Jorg</given-names>
            <surname>Tiedemann</surname>
          </string-name>
          , “
          <article-title>Integrating linguistic knowledge in passage retrieval for question answering</article-title>
          ,”
          <source>Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing</source>
          , Vancouver, British Columbia, Canada, pp.
          <fpage>939</fpage>
          -
          <lpage>946</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>