<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reasoning With Streamed Information from Unreliable Sources</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ladislav Beránek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Radim Remeš</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Applied Mathematics and Informatics, Faculty of Economics, University of South Bohemia, CZECH REPUBLIC</institution>
          ,
          <addr-line>Ceske</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>1</fpage>
      <lpage>3</lpage>
      <abstract>
        <p>Increasingly, we must make decisions based on information arriving in data streams from different, often unreliable sources. When deciding, we need to process the observed information and estimate its reliability. In this paper, we propose a framework that derives information from unreliable sources and estimates its trustworthiness. The framework operates fully on data streams, with the aim of deriving new facts from incoming information. This information arrives as unstructured messages transmitted from heterogeneous and potentially untrustworthy sources and is processed using natural language processing and belief function theory. The trustworthiness of the processed information is estimated from its internal conflict. We evaluate the proposed framework in an experiment that quantifies its efficiency with respect to accuracy and overhead.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Users share and process all sorts of data within various applications built on the Internet infrastructure. They evaluate products and express their opinions on different events. The TripAdvisor.com server is an example: users can rate a certain hotel based on their satisfaction with its services. Wikipedia provides a feedback tool that engages readers in reviewing article quality against four criteria, i.e., "trustworthy", "objective", "complete", and "well written". Such activity is also referred to as crowdsourcing: many users are recruited to evaluate or classify a certain product or service. Crowdsourcing is sometimes used in science as well; an example is the Galaxy Zoo website, where users classify astronomical images.</p>
      <p>Useful as this method is, organizers usually have little control over the quality of users' activity. Reactions of individual users may vary substantially and may even contradict one another. The question is then how to integrate feedback from multiple users into an objective opinion. Commonly used heuristics such as "majority voting" and "take the average" ignore individual user expertise and can fail, for example in an environment containing users with malicious intent. The aim of this paper is to propose and test a method for determining the ground truth without knowing the previous experience of users. For this purpose, we use an approach based on the Dempster-Shafer theory.</p>
      <p>Within this theory, the discounting operation is defined. Under this operation, the value of a belief function is weakened, either in response to certain additional information or when the pieces of information to be integrated are contradictory. When deciding to apply discounting, the following questions must be answered: Which sources should be discounted? To what extent should they be discounted? The model used in this paper introduces an iterative method that automatically determines the discount rate from the reliability of the sources. The advantage of this approach is that it requires no additional meta-information about the reliability of the sources. The method assumes only that the more a specific source of information conflicts with the majority opinion, the more strongly that source must be discounted.</p>
      <p>The rest of this paper is organized as follows. Section 2 provides an overview of related work. Section 3 formulates the problem and introduces the belief function framework with the proposed model. Section 4 presents experimental results on synthetic data. Conclusions are presented in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORK</title>
      <p>
        Several current studies deal with settings involving multiple labelers. For example, works such as [
        <xref ref-type="bibr" rid="ref17 ref18 ref20 ref4 ref7">4, 7, 9, 19, 20, 22</xref>
        ] focus on estimating the error rates of observers. The authors of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] deal with selecting the best subset of all information available from users for model training. These works focus on learning classifiers directly from user data instead of estimating the ground truth. The works [
        <xref ref-type="bibr" rid="ref13">14, 15</xref>
        ] use a probabilistic framework for solving classification, regression, and ordinal regression problems with multiple annotators. This framework assumes that the expertise of each annotator does not depend on the data. The works [
        <xref ref-type="bibr" rid="ref21 ref23 ref24">23, 25, 26</xref>
        ] develop this approach further but do not rely fully on this premise. Some other related works focus on a different setting [
        <xref ref-type="bibr" rid="ref22 ref3">3, 24</xref>
        ]. Recent work [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] addresses the regression problem under multiple observers, using less parametric methods for modeling the observers' regression functions.
      </p>
    </sec>
    <sec id="sec-3">
      <title>III. METHODOLOGY</title>
      <sec id="sec-3-1">
        <title>Belief function theory framework</title>
        <p>
          Our model is an application of the Dempster-Shafer theory. The Dempster-Shafer theory [
          <xref ref-type="bibr" rid="ref14">16</xref>
          ] is designed to deal with the uncertainty and incompleteness of available information. It is a powerful tool for combining evidence and revising prior knowledge in the presence of new evidence. The Dempster-Shafer theory can be considered a generalization of the Bayesian theory of subjective probability.
        </p>
        <p>In the following paragraphs, we give a brief introduction to
the basic notions of the Dempster-Shafer theory (frequently
called theory of belief functions or theory of evidence).</p>
      </sec>
      <sec id="sec-3-2">
        <title>Basic Notions</title>
        <p>
          Considering a finite set Ω referred to as the frame of discernment, a basic belief assignment (BBA) is a function m: 2<sup>Ω</sup> → [0, 1] such that
        </p>
        <p>∑<sub>A⊆Ω</sub> m(A) = 1, (1)</p>
        <p>
          where m(∅) = 0, see [
          <xref ref-type="bibr" rid="ref14">16</xref>
          ]. The subsets of 2<sup>Ω</sup> associated with non-zero values of m are known as focal elements, and the union of the focal elements is called the core. The value m(A) expresses the proportion of all relevant and available evidence supporting the claim that a particular element of Ω belongs to the set A but to no particular subset of A. This value pertains only to the set A and makes no additional claims about any subsets of A. We also denote this value as a degree of belief (or basic belief mass, BBM).
        </p>
        <p>
          Shafer further defined the concepts of belief and
plausibility [
          <xref ref-type="bibr" rid="ref14">16</xref>
          ] as two measures over the subsets of Ω as
follows:
        </p>
        <p>Bel(A) = ∑<sub>B⊆A</sub> m(B), (2)</p>
        <p>Pl(A) = ∑<sub>B∩A≠∅</sub> m(B). (3)</p>
        <p>A BBA can also be viewed as determining a set of probability distributions P over Ω such that Bel(A) ≤ P(A) ≤ Pl(A). It can easily be seen that these two measures are related by Pl(A) = 1 − Bel(¬A). Moreover, both are equivalent to m: one needs to know only one of the three functions m, Bel, or Pl to derive the other two. Hence, we can speak of a belief function through its corresponding BBA.</p>
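        <p>As a minimal illustrative sketch (not part of the paper's own implementation; the frame, masses, and function names are our own choices), a BBA with its Bel and Pl measures can be written directly over subsets of a small frame:</p>

```python
# A small frame of discernment and a BBA over it; BBAs are dicts mapping
# frozensets (focal elements) to masses summing to 1, with m(empty) = 0.
OMEGA = frozenset({"a", "b", "c"})

m = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    OMEGA: 0.2,
}

def bel(m, A):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"a"})
print(bel(m, A))  # 0.5
print(pl(m, A))   # 1.0 (every focal element here intersects {a})
# The duality Pl(A) = 1 - Bel(not A) holds:
assert abs(pl(m, A) - (1 - bel(m, OMEGA - A))) < 1e-9
```

The dictionary representation keeps only focal elements, so Bel and Pl are simple filtered sums.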
      </sec>
      <sec id="sec-3-3">
        <title>Dempster’s Rule of Combination</title>
        <p>Dempster’s rule of combination can be used for pooling evidence represented by two belief functions Bel1 and Bel2 over the same frame of discernment coming from independent sources of information. The rule for combining two belief functions Bel1 and Bel2 defined by (equivalent to) BBAs m1 and m2 is defined as follows (the symbol ⊕ denotes this operation):</p>
        <p>(m1 ⊕ m2)(A) = 1/(1 − k) ∑<sub>B∩C=A</sub> m1(B) ⋅ m2(C), (4)</p>
        <p>where</p>
        <p>k = ∑<sub>B∩C=∅</sub> m1(B) ⋅ m2(C). (5)</p>
        <p>
          Here k is frequently considered a measure of conflict between the two belief functions m1 and m2 [
          <xref ref-type="bibr" rid="ref14">16</xref>
          ]. Unfortunately, this interpretation of k is not correct, as k also includes the internal conflict of the individual belief functions m1 and m2 [
          <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
          ]. Dempster’s rule is not defined when k = 1, i.e., when the cores of m1 and m2 are disjoint. The rule is commutative and associative; as it serves for the cumulation of beliefs, it is not idempotent.
        </p>
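        <p>Dempster’s rule (4)-(5) can be sketched directly over BBAs stored as subset-to-mass dictionaries (an illustrative implementation; the representation and names are our own choices):</p>

```python
def dempster_combine(m1, m2):
    """Normalized conjunctive combination of two BBAs (Dempster's rule).
    BBAs are dicts mapping frozensets (focal elements) to masses."""
    combined = {}
    k = 0.0  # mass falling on the empty set before normalization
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                k += v1 * v2  # conflicting mass
    if abs(1.0 - k) < 1e-12:
        raise ValueError("k = 1: disjoint cores, Dempster's rule undefined")
    return {A: v / (1.0 - k) for A, v in combined.items()}, k

OMEGA = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.6, OMEGA: 0.4}
m2 = {frozenset({"b"}): 0.5, OMEGA: 0.5}
m12, k = dempster_combine(m1, m2)
print(round(k, 2))                      # 0.3
print(round(m12[frozenset({"a"})], 4))  # 0.4286
```

The conflicting mass k is accumulated from all pairs of focal elements with empty intersection and then renormalized away, exactly as in (4) and (5).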
      </sec>
      <sec id="sec-3-4">
        <title>Belief Function Correction</title>
        <p>When receiving a piece of information represented by a belief function, some meta-knowledge regarding the quality or reliability of the source providing the information may be available. In the following paragraphs, we briefly describe some possibilities for correcting the information according to this meta-knowledge.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Discounting</title>
        <p>
          To handle the lower reliability of information sources, a discounting scheme was introduced by Shafer [
          <xref ref-type="bibr" rid="ref22">24</xref>
          ]. It is expressed by the equations:
        </p>
        <p><sup>α</sup>m(A) = (1 − α) × m(A) if A ⊂ Ω,</p>
        <p><sup>α</sup>m(A) = α + (1 − α) × m(Ω) if A = Ω, (6)</p>
        <p>where α ∈ [0, 1] is a discounting factor and <sup>α</sup>m(A) denotes the discounted mass of m(A). The larger α is, the more mass is discounted from the proper subsets A ⊂ Ω and transferred to the frame of discernment Ω.</p>
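        <p>A minimal sketch of the discounting operation (6), using the same illustrative dictionary representation of BBAs (the function name is our own choice):</p>

```python
def discount(m, alpha, omega):
    """Shafer discounting: scale the mass of every proper subset of the
    frame by (1 - alpha) and transfer the removed mass to the frame omega."""
    out = {A: (1 - alpha) * v for A, v in m.items() if A != omega}
    out[omega] = alpha + (1 - alpha) * m.get(omega, 0.0)
    return out

OMEGA = frozenset({"a", "b"})
m = {frozenset({"a"}): 0.7, OMEGA: 0.3}
dm = discount(m, 0.2, OMEGA)
print(round(dm[frozenset({"a"})], 2))  # 0.56
print(round(dm[OMEGA], 2))             # 0.44
# Discounting preserves a valid BBA: masses still sum to 1.
assert abs(sum(dm.values()) - 1.0) < 1e-9
```
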
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. RESULTS AND DISCUSSION</title>
      <p>The idea of the discounting mechanism is the weakening of a given belief function (BBA). The principle of discounting is thus the transfer of parts of the basic belief masses (BBMs) of all focal elements that are proper subsets of the frame of discernment to the entire frame. This process follows from additional information saying that the source is not entirely reliable. The transfer of BBMs from a source's focal elements to the frame reflects an increased degree of uncertainty regarding the data that the source produces.</p>
      <sec id="sec-4-1">
        <title>Use of belief function theory for ground truth estimation</title>
        <p>
          Traditional data fusion based on Dempster-Shafer theory consists of first obtaining BBAs from some mathematical model. The second step is discounting those BBAs that are known to be less reliable, using (6). The final step is the integration of the BBAs using Dempster's rule (4) or some other suitable combination rule [
          <xref ref-type="bibr" rid="ref10 ref15 ref16 ref19">11, 17, 18, 21</xref>
          ]. As described above, the discounting process is used when we have meta-information about the reliability of some contextual sources of information (BBAs), and an approach is then needed to express the value of the discounting factor [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ].
        </p>
        <p>
          In most cases, the discount rate is adjusted manually, but some authors have suggested methods for obtaining it automatically. In [
          <xref ref-type="bibr" rid="ref16">18</xref>
          ], Smets calculates the discount factor by minimizing an error function. This method focuses on the classification of data and requires a set of labeled data. In [
          <xref ref-type="bibr" rid="ref11">12</xref>
          ], Martin et al. establish a discount rate evaluation method based only on the values of the BBAs themselves. A similar approach, which is the basis of our work, is presented in [
          <xref ref-type="bibr" rid="ref9">10</xref>
          ].
        </p>
        <p>
          Defining what the majority opinion means within the Dempster-Shafer theory is not easy. Murphy [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ], for example, suggested using the average of the BBAs and argued that the averaging properties are better suited to the fusion of contradictory evidence:
        </p>
        <p>m<sub>mean</sub> = (1/M) ∑<sub>i=1</sub><sup>M</sup> m<sub>i</sub>. (7)</p>
        <p>
          This opinion is valid considering that if a subset s1 of S corresponds to a cluster of concordant BBAs, and if this subset contains more BBAs than any other cluster, then m<sub>mean</sub> will probably be closest to the BBAs forming s1. Hence m<sub>mean</sub> can be used as an estimate of the majority opinion [
          <xref ref-type="bibr" rid="ref12">13</xref>
          ]. We therefore propose to compute the first set of discount factors in the following way:
        </p>
        <p>α<sub>i</sub><sup>0</sup> = d<sub>BPA</sub>(m<sub>i</sub>, m<sub>mean</sub>), (8)</p>
        <p>
          where d<sub>BPA</sub> is defined as follows [
          <xref ref-type="bibr" rid="ref11">12</xref>
          ]:
        </p>
        <p>d<sub>BPA</sub>(m<sub>1</sub>, m<sub>2</sub>) = √(½ (m<sub>1</sub> − m<sub>2</sub>)<sup>T</sup> D (m<sub>1</sub> − m<sub>2</sub>)), (9)</p>
        <p>where each m is a BBA expressed as a vector and D is the 2<sup>N</sup> × 2<sup>N</sup> matrix with elements D(A, B) = |A∩B|/|A∪B|.</p>
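        <p>The distance d<sub>BPA</sub> can be sketched as a direct, unoptimized computation (the subset ordering and function names are our own illustrative choices):</p>

```python
import math
from itertools import combinations

def powerset(omega):
    """Non-empty subsets of the frame, in a fixed order (m(empty) = 0)."""
    s = sorted(omega)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def d_bpa(m1, m2, omega):
    """sqrt(0.5 * (m1 - m2)^T D (m1 - m2)), D(A,B) = |A ∩ B| / |A ∪ B|."""
    subsets = powerset(omega)
    v = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in subsets]
    acc = 0.0
    for i, A in enumerate(subsets):
        for j, B in enumerate(subsets):
            acc += v[i] * (len(A & B) / len(A | B)) * v[j]
    return math.sqrt(0.5 * acc)

OMEGA = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 1.0}  # categorical BBA on {a}
m2 = {frozenset({"b"}): 1.0}  # categorical BBA on {b}
print(d_bpa(m1, m1, OMEGA))   # 0.0 (identical BBAs)
print(d_bpa(m1, m2, OMEGA))   # 1.0 (maximal disagreement)
```

The matrix D is never materialized: its entries |A∩B|/|A∪B| are computed on the fly while accumulating the quadratic form.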
        <p>Equation (8) gives low values of the discounting factor for BBAs near the mean (those in accordance with the majority opinion) and high values for BBAs that differ considerably from the mean (those that cause the disagreement).</p>
        <p>In this paper, we use an iterative method for calculating the discounting factors. In the first step, discounting factors are calculated for each member of the initial set using equation (8). The iteration process is then applied to the resulting BBA set S1, and new values of the discounting factors are obtained. The iteration is repeated, and the values of the discounting factors increase, but more and more slowly. To determine the optimal set of discount factors among those computed at each iteration step, an a posteriori analysis is employed.</p>
        <p>We investigate the conjunctive combinations obtained at each step and compare them with the categorical BBAs by the distance d<sub>BPA</sub>. The iteration that gives the minimum distance determines the optimal number of iterations i<sub>opt</sub>.</p>
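        <p>The iterative scheme described above can be sketched end to end as follows (a self-contained illustration with our own function names; the a posteriori stopping analysis is reduced here to a fixed number of steps):</p>

```python
import math
from itertools import combinations

def powerset(omega):
    s = sorted(omega)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def d_bpa(m1, m2, subsets):
    """Jousselme-style distance between two BBAs given as dicts."""
    v = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in subsets]
    return math.sqrt(0.5 * sum(
        v[i] * (len(A & B) / len(A | B)) * v[j]
        for i, A in enumerate(subsets) for j, B in enumerate(subsets)))

def discount(m, alpha, omega):
    """Shafer discounting: move (alpha * mass) from proper subsets to omega."""
    out = {A: (1 - alpha) * v for A, v in m.items() if A != omega}
    out[omega] = alpha + (1 - alpha) * m.get(omega, 0.0)
    return out

def iterate_discounts(bbas, omega, steps=3):
    """Each step: alpha_i = d_BPA(m_i, m_mean), then discount every source.
    Returns the per-step discount factors for a posteriori analysis."""
    subsets = powerset(omega)
    current = list(bbas)
    history = []
    for _ in range(steps):
        mean = {}
        for m in current:
            for A, v in m.items():
                mean[A] = mean.get(A, 0.0) + v / len(current)
        alphas = [d_bpa(m, mean, subsets) for m in current]
        history.append(alphas)
        current = [discount(m, a, omega) for m, a in zip(current, alphas)]
    return history

OMEGA = frozenset({"a", "b"})
good = {frozenset({"a"}): 0.8, OMEGA: 0.2}
bad = {frozenset({"b"}): 0.8, OMEGA: 0.2}  # adversarial source
hist = iterate_discounts([good, good, good, bad], OMEGA)
print([round(a, 2) for a in hist[0]])  # [0.2, 0.2, 0.2, 0.6]
```

The source that disagrees with the majority receives the largest discount factor at every step, without any prior meta-information about reliability.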
        <p>The relative values of the discount factors in the individual steps affect the result of the information fusion process as much as their absolute values. In other words, it is not sufficient to place a high discount on unreliable sources; the difference between reliable and unreliable sources must also be large enough. Therefore, we optimize the values α<sub>i</sub> by iteration: we calculate the discounting factors of the initial set of BBAs and then recalculate new values of the BBAs of this set. This process is repeated as described in the previous paragraph. The consecutive values of the discount factors produced by this iterative process are then analyzed to determine the best setting according to the predefined criterion, which is minimum distance.</p>
        <p>The iterative procedure involves gradually discounting the original BBAs. The term <sup>α0,α1</sup>m denotes the BBA discounted successively by α<sub>0</sub> and α<sub>1</sub>. The successive values of the discounting factors {α<sub>0</sub>, …, α<sub>K</sub>} can be summarized by the cumulative factor:</p>
        <p>β<sub>K</sub> = 1 − ∏<sub>i=0</sub><sup>K</sup> (1 − α<sub>i</sub>) = β<sub>K−1</sub>(1 − α<sub>K</sub>) + α<sub>K</sub>. (10)</p>
        <p>The stopping condition is based on the distance d<sub>BPA</sub>: the iteration that gives the minimum distance determines the optimal number of iterations i<sub>opt</sub>. Importantly, we can also identify the source that differs most from the average value; it may be omitted from the calculations and explored independently. The advantage of the described approach is that it needs no meta-information about the reliability of the sources.</p>
        <p>The responses of the various sources (observers) are represented by the values of belief functions in Table 1. Six different sources are modeled (m1 – m6). The ground truth has the same values as m1(⋅). The value of m*(⋅) is calculated using equation (4). The value of m(⋅) in the last but one row of the table is calculated according to the process outlined in the previous section. Source 4 (m4) is modeled as adversarial, because its reaction is opposite to the ground truth. The discount factor calculated for this source reaches the highest values. The table shows that the discounting process suppresses the impact of this source, and as a result the integration of the information sources is close to the ground truth (m*).</p>
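        <p>The two forms of the cumulative discount factor, the product over {α<sub>0</sub>, …, α<sub>K</sub>} and the recursive update, agree, as a quick numerical check with assumed factor values shows:</p>

```python
# Check that 1 - prod(1 - alpha_i) equals the recursion
# beta_K = beta_{K-1} * (1 - alpha_K) + alpha_K; factor values are assumed.
alphas = [0.10, 0.25, 0.40]

prod_form = 1.0
for a in alphas:
    prod_form *= (1.0 - a)
prod_form = 1.0 - prod_form

beta = 0.0
for a in alphas:
    beta = beta * (1.0 - a) + a

print(round(prod_form, 6), round(beta, 6))  # 0.595 0.595
assert abs(prod_form - beta) < 1e-12
```

Both forms express the fact that a source survives K+1 rounds of discounting only with the product of the retained fractions (1 − α<sub>i</sub>).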
      </sec>
    </sec>
    <sec id="sec-5">
      <title>V. CONCLUSION</title>
      <p>This article examines the problem of multiple observers who provide answers that are not entirely accurate. We address this problem with a model based on belief function theory in which no additional information about the reliability of the observers is known. Our approach provides an estimate of the ground truth and predicts the response of each observer to a new instance. Experiments show that the proposed method outperforms several baseline approaches and achieves performance close to a model trained with the ground truth. There are many opportunities for further research. One possible direction is to extend our model with multiple kernel learning, the aim being to choose an algorithm, or a composition of different covariance functions, instead of fixing the combination in advance. The algorithm could then learn a fit to each observer by selecting multiple kernels in a data-dependent manner. In addition, it would be very useful to design efficient sampling methods for selecting which instances and responses should be learned from further. Our aim is to test the described algorithm further on real data and to verify the model described in this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Beranek</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Nydl</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <article-title>The Use of Belief Functions for the Detection of Internet Auction Fraud</article-title>
          .
          <source>In: Proceedings of the 31st International Conference Mathematical Methods in Economics</source>
          <year>2013</year>
          . Jihlava: College of Polytechnics Jihlava,
          <year>2013</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Beranek</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Knizek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>The Usage of Contextual Discounting and Opposition in Determining the Trustfulness of Users in Online Auctions</article-title>
          .
          <source>Journal of Theoretical and Applied Electronic Commerce Research</source>
          , Vol.
          <volume>7</volume>
          , No.
          <volume>1</volume>
          ,
          <issue>2012</issue>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Zhang, J.,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Zhang,
          <string-name>
            <surname>C.</surname>
          </string-name>
          ,
          <article-title>What if the irresponsible teachers are dominating?</article-title>
          <source>In Proc. 24th AAAI</source>
          <year>2010</year>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Crammer</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kearns</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wortman</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <source>Learning from multiple sources. JMLR</source>
          , Vol.
          <volume>9</volume>
          , No.
          <volume>4</volume>
          ,
          <issue>2008</issue>
          , pp.
          <fpage>1757</fpage>
          <lpage>1774</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Daniel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Several Notes on Belief Combination,
          <source>In Proceedings of the Theory of Belief Functions Workshop. Brest: ENSIETA</source>
          ,
          <year>2010</year>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Daniel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <article-title>Conflicts within and between Belief Functions</article-title>
          .
          <source>In Proceeding from IPMU 2010 - Computational Intelligence for Knowledge-Based Systems Design</source>
          , Berlin,
          <year>2010</year>
          , s.
          <source>696-705, Lecture Notes in Artificial Intelligence</source>
          .
          <volume>6178</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Dawid</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skene</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Maximum likelihood estimation of observer error-rates using the EM algorithm</article-title>
          .
          <source>Applied Statistics</source>
          , Vol.
          <volume>28</volume>
          , No 1,
          <year>1979</year>
          , pp.
          <fpage>20</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eckert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <article-title>Learning from Multiple Observers with Unknown Expertise</article-title>
          ,
          <source>In Proceedings of 17th Pacific-Asia Conference on Knowledge Discovery and Data Mining</source>
          , Gold Coast, Australia,
          <year>2013</year>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>241</lpage>
          [9]
          <string-name>
            <surname>Hui</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Walter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <article-title>Estimating the error rates of diagnostic tests</article-title>
          .
          <source>Biometrics</source>
          , Vol.
          <volume>5</volume>
          , No ,
          <year>1980</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Colot</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <article-title>Automatic discounting rate computation using a dissent criterion</article-title>
          ,
          <source>In Proceedings of the Theory of Belief Functions Workshop. Brest: ENSIETA</source>
          ,
          <year>2010</year>
          . pp.
          <fpage>151</fpage>
          -
          <lpage>156</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Lysek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stastny</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motycka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Object Recognition by Means of Evolved Detector and Classifier Program</article-title>
          .
          <source>In MENDEL</source>
          <year>2012</year>
          , 18th International Conference on Soft Computing. Brno University of Technology,
          <year>2012</year>
          , p.
          <fpage>82</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jouselme</surname>
            ,
            <given-names>A.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Osswald</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <article-title>Conflict measure for the discounting operation on belief functions</article-title>
          .
          <source>In IEEE Int. Conf. on Information Fusion FUSION</source>
          <year>2008</year>
          , Madrid,
          <year>2008</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Murphy</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <article-title>Combining belief functions with evidence conflicts</article-title>
          .
          <source>Decision Support Systems</source>
          , Vol.
          <volume>29</volume>
          , No.
          <volume>4</volume>
          ,
          <issue>2000</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Raykar</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jerebko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Florin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valadez</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogoni</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moy</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <article-title>Supervised learning from multiple experts: Whom to trust when everyone lies a bit</article-title>
          .
          <source>In Proc. 26th ICML</source>
          <year>2009</year>
          ,
          <year>2009</year>
          , pp.
          <fpage>889</fpage>
          -
          <lpage>896</lpage>
          . ACM (
          <year>2009</year>
          ) [15]
          <string-name>
            <surname>Raykar</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valadez</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Florin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogoni</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moy</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <article-title>Learning from crowds</article-title>
          .
          <source>JMLR</source>
          , Vol.
          <volume>11</volume>
          , No.
          <issue>2</issue>
          ,
          <year>2010</year>
          , pp.
          <fpage>1297</fpage>
          -
          <lpage>1322</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Shafer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <source>A mathematical theory of evidence</source>
          , Princeton University Press, Princeton,
          <year>1976</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Schubert</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Conflict management in Dempster-Shafer theory by sequential discounting using the degree of falsity</article-title>
          .
          <source>In Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU</source>
          <year>2008</year>
          , Madrid,
          <year>2008</year>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>305</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Smets</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <article-title>Analyzing the combination of conflicting belief functions</article-title>
          .
          <source>Information Fusion</source>
          , Vol.
          <volume>8</volume>
          , No.
          <issue>3</issue>
          ,
          <year>2006</year>
          , pp.
          <fpage>387</fpage>
          -
          <lpage>412</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Smyth</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fayyad</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perona</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baldi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <article-title>Inferring ground truth from subjective labelling of Venus images</article-title>
          .
          <source>In Proc. 9th NIPS</source>
          ,
          <year>1995</year>
          , pp.
          <fpage>1085</fpage>
          -
          <lpage>1092</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Spiegelhalter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stovin</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <article-title>An analysis of repeated biopsies following cardiac transplantation</article-title>
          .
          <source>Statistics in Medicine</source>
          , Vol.
          <volume>2</volume>
          , No.
          <issue>1</issue>
          ,
          <year>1983</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Stencl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stastny</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Neural network learning algorithms comparison on numerical prediction of real data</article-title>
          .
          <source>In MENDEL</source>
          <year>2010</year>
          , 16th International Conference on Soft Computing, Brno University of Technology
          , pp.
          <fpage>280</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Tubaishat</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <article-title>Sensor networks: an overview</article-title>
          .
          <source>IEEE Potentials</source>
          , Vol.
          <volume>22</volume>
          , No.
          <issue>2</issue>
          ,
          <year>2003</year>
          , pp.
          <fpage>20</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Whitehill</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruvolo</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergsma</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Movellan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Whose vote should count more: Optimal integration of labels from labelers of unknown expertise</article-title>
          .
          <source>In Proc. 23rd NIPS</source>
          <year>2009</year>
          , vol.
          <volume>22</volume>
          , pp.
          <fpage>2035</fpage>
          -
          <lpage>2043</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Learning to rank under multiple annotators</article-title>
          .
          <source>In Proc. 22nd IJCAI</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1161</fpage>
          -
          <lpage>1168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosales</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fung</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Active learning from crowds</article-title>
          .
          <source>In Proc. 28th ICML</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1161</fpage>
          -
          <lpage>1168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosales</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fung</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hermosillo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogoni</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moy</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <article-title>Modeling annotator expertise: Learning when everybody knows a bit of something</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          , Vol.
          <volume>9</volume>
          ,
          <year>2010</year>
          , pp.
          <fpage>932</fpage>
          -
          <lpage>939</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>