<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Uncertainty Quantification Algorithm for Performance Evaluation in Wireless Sensor Network Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dong YU</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Technology</institution>
          ,
          <addr-line>Sydney</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <fpage>12</fpage>
      <lpage>23</lpage>
      <abstract>
<p>The potential applications of Wireless Sensor Networks (WSNs) span a very wide range; one prominent application domain is the healthcare industry. The diversity of WSN application requirements signifies the need for application-specific methodologies for system design and performance evaluation. Moreover, because the performance of a typical wireless network is stochastic in nature, probability is an essential instrument for assessing performance characteristics. Before WSNs become widely involved in life-or-death critical applications, decision makers urgently need a generic, systematic evaluation methodology to compare performance among alternative solutions while taking the cohesion characteristic into account. This paper offers a quantitative decision-making procedure that incorporates performance deviation as a target performance metric. Decision making is guided by goals and objectives for the particular application specified by application domain experts.</p>
      </abstract>
      <kwd-group>
        <kwd>Uncertainty Quantification</kwd>
        <kwd>WSNs</kwd>
        <kwd>Multi Criteria</kwd>
        <kwd>Statistical Performance</kwd>
        <kwd>Generic Methodology</kwd>
        <kwd>Fair comparison</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
We have witnessed in recent years the emergence of WSNs in healthcare. These
applications aim to improve and expand the quality of care across a wide variety of
settings and for different segments of the healthcare system. They range from real-time
patient monitoring in hospitals, emergency care in large disasters through automatic
electronic triage, and improving the quality of life of the elderly through smart environments,
to large-scale field studies of human behavior and chronic diseases [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. However, the
barrier to widespread adoption of the technology is still high. Fulfilling the potential
of WSNs in healthcare requires addressing a multitude of technical challenges. These
challenges reach beyond the resource limitations that all WSNs face in terms of
limited network capacity, energy budget, and processing and memory constraints.
In particular, healthcare applications impose stringent and diverse requirements on system
response time, reliability, quality of service, and security.
The uniqueness of WSNs, with their resource limitations, transient channel states, and drastically
different application requirements, calls for application-specific system design
methodologies. From the early stages of WSN research, effort has mostly
focused on the isolated design of single-layer protocols with little concern for the
functionality of other layers. This leads to protocols that exist in a vacuum: they
perform well on a theoretical basis but run into problems when deployed under real-life
circumstances [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
]. Many of these protocols are further validated using ad-hoc
experimental tests designed to favor one specific solution, leaving little room for objective
comparison with other protocols. To date, there exists no fixed set of accepted
testing methods, scenarios, parameters, or metrics to guarantee fair
comparison between competing solutions. This lack of standardization significantly
increases the difficulty for developers to assess the relative performance of their
protocols against the current state of the art.
      </p>
      <p>
Essentially, multiple-criteria evaluation is a well-studied realm, and plenty of
methodologies exist in the multi-criteria decision-making domain [
        <xref ref-type="bibr" rid="ref12">12</xref>
]. But we cannot apply
these techniques directly to WSN evaluation without considering the uniqueness of
the target domain. In this paper, we apply the analytic hierarchy process (AHP) to
WSN performance evaluation. It is a method for evaluating system performance
according to the application scenario and the preferences of application-domain experts,
and it is generic enough to provide a platform for fair comparison of alternative solutions.
Most importantly, we introduce statistical metrics to reflect the impact of uncertainty
attributes on final performance.
The rest of the paper is organized as follows. Section 2 establishes a background
understanding of performance uncertainty in WSNs that serves as the foundation for
our proposed solutions: it presents the uncertainty attributes of WSN
performance and statistical concepts in performance evaluation. Section 3 introduces
application-specific evaluation based on the AHP method. We emphasize that while
application-specific design is necessary to build efficient applications on a limited energy
budget, a fair comparison of alternative design solutions requires a generic evaluation
algorithm that handles all of the components mentioned above. Section 4
provides the practical algorithm for uncertainty performance evaluation. A workflow
and algorithm are given to obtain a single QoS performance index. We summarize our
work in Section 5.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Uncertainty Performance in WSNs</title>
<p>Source of Uncertainty of WSN Performance.
Several factors contribute to the fact that wireless sensor networks often do not work
as expected when deployed in a real-world setting. First, there is the possibility of
wrong expectations on the system designer's side: the analytical model does not fit the
problem at hand. That is often a problem for inexperienced designers; for all
simulation and other experimental methods, the first step is to eliminate this kind
of profound design problem at the preliminary stage. Second, there are possibly wrong
expectations from simulation results: simulation modeling cannot faithfully reflect the
System Under Test (SUT).</p>
      <p>
Beyond the designer's preliminary problem of the analytical model mismatching the design
target, we can further localize the fault points where performance expectation and
real-world implementation disagree to components of the WSN hierarchy [
        <xref ref-type="bibr" rid="ref11">11</xref>
].
1. Environmental influences, which may lead to non-deterministic behavior of radio
transmission.
2. Node-level problems: malfunction of the sensors, or even the complete failure of a
sensor node due to cheap manufacturing. Scarce resources and missing protection
mechanisms on the sensor nodes may lead to program errors, affecting operating system
reliability and fault tolerance.
3. Network-level problems: network protocols (especially MAC and routing) are not
robust to link failure, contention, and topology dynamics.
4. Unforeseen traffic load patterns: a common cause of network problems is an
unforeseen high traffic load. Such a traffic burst may occur, for example, when a
sensor network observes a physical phenomenon and all nodes in the vicinity try
to report at once, causing packet collisions combined with high packet loss.
      </p>
      <p>
All these factors contribute to the uncertainty of sensor network behavior and
function. They increase the probability that network functionality deviates
from normal operation, and they affect the accuracy of the collected data.
In order to develop parameters effectively, in [
        <xref ref-type="bibr" rid="ref11">11</xref>
] we congregate the hierarchical possible
points of deviation into four groups.
1. Spatial element uncertainty: includes site-specific characteristics (fading, signal
attenuation, interference) and network scale (topology, network size, and density).
2. Temporal element uncertainty: even at one particular spot, link state flips with
time.
3. Data communication pattern uncertainty: includes load burst pattern uncertainty
(the volume and frequency of data bursts), communication interval differences (how
often data communication happens and how long the interval is between two
adjacent communications), and different communication modes (inquiry-triggered,
regular report, or event-triggered communication).
4. Algorithm-internal programming uncertainty: includes malfunctioning models,
assumption realization problems, and other routine cooperation problems in
programming.
We summarize the above observations in Equation (1).
      </p>
<p>P = F(spatial, temporal, traffic load pattern, SUT)    (1)</p>
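<p>Equation (1) can be read as a stochastic mapping from the four factor groups to a performance distribution. The sketch below is purely illustrative: the factor distributions and the performance() function are hypothetical stand-ins, since the paper does not specify the form of F.</p>
<preformat>
```python
import random
import statistics

def performance(spatial, temporal, load, sut_gain):
    # Hypothetical linear degradation model standing in for F in Eq. (1):
    # performance drops with path loss, link flips, and traffic bursts.
    return max(0.0, sut_gain - 0.5 * spatial - 0.3 * temporal - 0.2 * load)

def sample_performance(sut_gain, n=10000, seed=1):
    """Monte Carlo sampling over the uncertain input factors yields a
    performance distribution, summarized here by its norm and deviation."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        spatial = rng.gauss(1.0, 0.3)   # site-specific fading/attenuation
        temporal = rng.gauss(1.0, 0.2)  # time-varying link state
        load = rng.expovariate(1.0)     # bursty traffic pattern
        samples.append(performance(spatial, temporal, load, sut_gain))
    return statistics.mean(samples), statistics.stdev(samples)

mean, dev = sample_performance(sut_gain=3.0)
```
</preformat>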
      <p>
The parameters in Equation (1), as shown in Figure 1, represent system-level
performance dynamics involving all four factors. As the input elements display statistical
behavior, the output performance will have a statistical distribution
with a certain norm and deviation for specific scenarios. Since wireless performance is
inherently statistical in nature, accurate performance testing must account for these
random components [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. Moreover, comparing performance curves produced by a
number of metrics makes it difficult to evaluate how well a given protocol suits
the purpose of an application. It may also be difficult to estimate which of the
protocols at hand would perform best with respect to that application [
        <xref ref-type="bibr" rid="ref6">6</xref>
].
      </p>
    </sec>
    <sec id="sec-3">
<title>Characterizing WSNs Uncertainty Performance with Statistical Concepts</title>
<p>WSNs sense the targeted phenomenon, collect data, and decide whether to store,
aggregate, or send the data according to distributed local algorithms. The modulated
electromagnetic waves propagate in free space and interact with the environment through
physical phenomena such as reflection, refraction, fast fading, slow fading,
attenuation, and human activity. Even with the best wireless protocols, the best chipsets, the
best RF design, and the best software, wireless performance is going to vary. Wireless
performance is inherently statistical in nature, and accurate performance evaluation
must reflect this nature.</p>
      <p>
We observe that most current ad hoc evaluations in the wireless network field,
especially in WSN research, whether in the form of test-bed experiments or simulation,
focus only on the mean values of performance metrics and pay little attention to
performance deviation. For some applications, average performance is sufficient for
data gathering and collective data analysis. However, average 'throughput', 'lifetime',
'reliability', or 'delay response' is not sufficient to predict performance in
certain application scenarios. Any dip in performance, no matter how short, can result
in dropped packets that cause visual artifacts or pixelation of the image in a wireless
video monitoring application. In extreme cases, such as healthcare emergency
monitoring, any dropped packet may make a life-or-death difference. Consequently,
the viewer/user experience depends completely on the wireless system's
worst-case performance [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
<p>Figure 2 shows the Probability Density Function (PDF) of a sample performance
metric for four different Systems Under Test (SUTs), each with different average and
standard deviation (variability) parameters. The graph illustrates a 'normal' probability
distribution revealing the statistical characteristics of metric X, representing, at least
approximately, any variable that tends to cluster around the mean as its norm, shown as
(but not necessarily) the familiar 'bell curve'. It shows the relative probabilities of
getting different values and answers the questions:
─ What is the chance I will see a certain result?
─ What is the mean value or norm of the respective SUT for this specific performance
metric?
─ How cohesive and stable is the performance of each SUT?</p>
<p>Examining the random process represented by the red curve in Figure 3, we would
expect outcomes with a value around 0 to be twice as prevalent as outcomes around
1.25 (40% versus 20%). However, in some cases we are more interested in a
threshold performance value as a benchmark than in an individual probability point:
what is the probability of the performance being less than or greater than a threshold
value? A transformed PDF, the Cumulative Distribution Function (CDF) (Figure 3), helps
answer this question.</p>
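<p>The threshold question a CDF answers can also be estimated directly from measurement samples via an empirical CDF; the delay data below are illustrative, not from any real SUT.</p>
<preformat>
```python
import bisect
import random

def empirical_cdf(sorted_samples, x):
    """Estimated P(metric value at or below threshold x), i.e. the
    fraction of recorded measurements not exceeding x."""
    return bisect.bisect_right(sorted_samples, x) / len(sorted_samples)

# Illustrative measurements: delay samples (ms) from a hypothetical SUT.
rng = random.Random(7)
delays = sorted(rng.gauss(20.0, 4.0) for _ in range(5000))

# Chance that a measured delay meets a 25 ms budget.
p_under_25ms = empirical_cdf(delays, 25.0)
```
</preformat>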
      <p>
When you use probability to express your uncertainty, the deterministic side has a
probability of one (or zero), while the other extreme is a flat (all outcomes equally probable)
distribution. For example, if you are certain of the occurrence (or non-occurrence) of an
event, you use a probability of one (or zero). If you are uncertain, and would use the
expression "I really don't know," the event may or may not occur with a probability of
50 percent. This is the Bayesian notion that probability assessment is always
subjective; that is, the probability always depends on how much the decision maker
knows. According to statistics, the quality of information and variation are inversely
related: larger variation in data implies lower-quality data (i.e., information)
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
Examining the random performance metric represented by the red curve in Figure 3:
─ The probability of producing a result less than -0.75 is about 20%.
─ The probability of producing a result less than 0 is 50%.
─ The probability of producing a result less than 2 is about 95%.
      </p>
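<p>Assuming the red curve is a standard normal distribution (mean 0, unit variance), which is roughly consistent with the quoted values, the stated probabilities follow from the closed-form normal CDF:</p>
<preformat>
```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Closed-form normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p_below_zero = norm_cdf(0.0)    # exactly 0.5, as stated
p_below_m075 = norm_cdf(-0.75)  # about 0.23, close to the quoted 20%
p_below_two = norm_cdf(2.0)     # about 0.977, the quoted "about 95%"
```
</preformat>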
      <p>
To characterize wireless performance, CDF graphs provide immensely useful
information for system designers comparing the performance of alternative solutions.
Ideally, we prefer a better mean value and a smaller deviation, but if the ideal choice is
unavailable, we have different options to choose from. All the choices should be
put in the application-specific context to select the right protocol for the right application.
The principle is:
─ Drastically different mean values: if the mean value represents a positive metric, like
throughput or lifetime, bigger is better; ideally we prefer the SUT with the bigger mean
value and the smaller deviation.
─ Same mean value, different deviation: a long tail means the performance is not stable,
so we prefer the smaller deviation.
─ Slightly different mean values, but one with a long tail: we prefer stability over slightly
improved peak performance.
─ If the optimal solution is not available, we choose between performance
stability and a higher norm performance according to the application scenario:
(option 1) higher performance potential but less predictable performance;
(option 2) lower performance potential but more stable performance.
Reference [
        <xref ref-type="bibr" rid="ref5">5</xref>
] presents practical guidelines on how to actually acquire the statistical
performance PDF and CDF curves of a SUT; sampling is the key to
recovering statistical performance and drawing the PDF and CDF curves of a wireless
system. Furthermore, to predict real-life performance accurately, researchers should
ideally conduct sampling tests across all relevant dimensions and variables.
In most cases, however, the design space is too big to exhaustively investigate all
factors influencing the final performance. But planners must at least consider the three
rough dimensions mentioned above to characterize wireless performance
accurately: time, space, and data pattern. Under each category there are vast numbers of known
and unknown parameters that can affect the performance, so it is worthwhile to
investigate the effect of parameter changes on specific performance metrics (sensitivity
analysis). The effective way to deal with the vast design space is parameter reduction
and inter-dependency analysis.
      </p>
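<p>The selection principle above can be sketched as a simple scoring rule. The scoring function and its weights are our own illustrative choice, not part of the paper's method; they merely encode "trade peak performance against stability according to the scenario".</p>
<preformat>
```python
def prefer(sut_a, sut_b, stability_weight=0.5):
    """Pick between two SUTs given (mean, stddev) of a positive metric
    (bigger mean is better).  stability_weight encodes how heavily the
    application scenario penalizes a long performance tail."""
    def score(mean, dev):
        return (1.0 - stability_weight) * mean - stability_weight * dev
    return "A" if score(*sut_a) > score(*sut_b) else "B"

# Slightly higher mean but long-tailed vs. stable: for a monitoring
# application that values consistency, the stable SUT wins.
choice = prefer((10.5, 4.0), (10.0, 1.0), stability_weight=0.6)
```
</preformat>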
    </sec>
    <sec id="sec-4">
      <title>Application Specific Evaluation Based on AHP Method</title>
<p>WSN energy-oriented research originates from the conflict between application
performance requirements and the limited energy supplied by batteries. In the foreseeable
future, this will remain a bottleneck for widespread deployment unless a breakthrough
occurs in the relevant materials science. However, we cannot overemphasize energy
conservation while ignoring application-specific requirements. To what extent we
should emphasize the energy aspect compared with other QoS objectives
depends on the application scenario; tradeoffs have to be made on a per-application basis.</p>
<p>Typically, WSN lifetime (energy efficiency), response time (delay), reliability, and
throughput are among the main concerns in the design and evaluation process. Under
the constraints of wireless sensor node size, the energy budget and computing
resources cannot afford luxury algorithms. Under such constraints, there
is no perfect solution that simultaneously optimizes all performance metrics in the
problem (an NP-hard problem); rather, the question we pursue is how to trade off multiple
criteria explicitly, leading to more informed and better decisions. The methodology
should be general enough to accommodate different application scenarios according to
the decision maker's preferences.</p>
      <p>
There has been little work on application-driven protocol stack evaluation for
WSNs. Our evaluation methodology, similar to the analytic hierarchy process (AHP) [
        <xref ref-type="bibr" rid="ref10 ref9">9,
10</xref>
], uses a Single Performance Index (SPI) for each alternative solution, or System
Under Test (SUT), as the final quantified goodness measure for comparing alternative
solutions.
      </p>
<p>The end user configures the relative importance of the individual design metrics in
a function that maps the individual metric values to a single SPI value. First, we
define the default overall objective function as a weighted sum of the normalized
values of the individual design metrics, as other AHP methodologies normally do: separate
the definition of the design metrics from the weighting of those metrics' importance in an
overall objective function.</p>
<p>SPI norm = a * m(L) + b * m(R) + c * m(T)    (2)</p>
<p>Here (a, b, c) represent the corresponding weights of performance metrics such as
lifetime (L), reliability (R), and timeliness (T), and (m(L), m(R), m(T)) represent the mean
values of multiple measurements of the corresponding metrics. A key feature of our
approach is that we introduce statistical analysis of the resulting experiment data:
not only do we use the measurement mean as the nominal normalized value (which is not
a realistic representation of the dynamic nature of wireless networks), we also
introduce the deviation of the performance measurement PDF as a critical secondary performance
metric to emphasize the importance of performance stability and cohesion. Even with a
higher mean performance metric, if the performance spreads over a wide spectrum of
measurements and is not cohesive around the so-called norm (mean) value, it will be
problematic for application scenarios that require consistent performance,
such as health monitoring and multimedia applications. We introduce the
stability performance index as:</p>
<p>SPI stability = a' * (1/δ²(L)) + b' * (1/δ²(R)) + c' * (1/δ²(T))    (3)</p>
<p>Here (a', b', c') indicate the relative importance of the cohesion characteristic of
metrics (L, R, T), represented by 1/δ². So overall we have:</p>
<p>SPI = SPI norm + SPI stability = Σ_{i=1}^{n} (W_i * Metric_i(mean) + W_i' * (1/δ_i²))    (4)</p>
<p>Here n represents the number of metrics considered, W_i = (a, b, c, …) represents
the user-specified relative importance of the respective performance metrics, and
W_i' = (a', b', c', …) indicates the relative importance of the metrics' cohesion
characteristic. The relative importance of each design metric, as a weight, is assigned by
considering the application-specific scenario: how important is each metric (network
lifetime, reliability, throughput, delay, etc.) in your application? How important is the
cohesion characteristic of performance to your application? Which metric is the most
important for you?</p>
    </sec>
    <sec id="sec-5">
      <title>The Algorithm Proposed</title>
<p>The system evaluation process starts with the end users as application experts who know
very well what kind of performance they need; they specify the QoS performance
metrics of greatest concern and the weight of each metric.</p>
<p>The WSN designers decide the initial parameters according to literature studies
and previous experimental experience. Then, for each performance metric, the
iterative experiment process starts as follows:</p>
      <sec id="sec-5-1">
<title>1. Parameter significance analysis for metric_i.</title>
<p>Repeat l experiment measurements; for each experiment, record the state of each
parameter x_i as f_ji (1 ≤ i ≤ n, 1 ≤ j ≤ l) and the corresponding performance
measurement φ_j (1 ≤ j ≤ l). Then use linear aggregation and P-values to decide which
parameters are significant for metric_i.
2. The design space is reduced from n parameters to m parameters for metric_i.</p>
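<p>Step 1's significance screen can be sketched with a permutation test on the linear association between each parameter and the metric. This is a stand-in for the paper's linear aggregation and P-value step, and the experiment data below are hypothetical.</p>
<preformat>
```python
import random

def pearson(xs, ys):
    """Linear association between one parameter's states and the metric."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def p_value(xs, ys, trials=2000, seed=3):
    """Permutation P-value: how often a shuffled pairing produces an
    association at least as strong as the observed one."""
    rng = random.Random(seed)
    observed = abs(pearson(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical screening data: 60 experiments, one influential parameter.
rng = random.Random(1)
x_sig = [rng.random() for _ in range(60)]
x_noise = [rng.random() for _ in range(60)]
metric = [2.0 * x + rng.gauss(0.0, 0.2) for x in x_sig]

p_sig = p_value(x_sig, metric)      # near 0: keep this parameter
p_noise = p_value(x_noise, metric)  # typically large: drop this parameter
```
</preformat>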
      </sec>
      <sec id="sec-5-2">
<title>3. Interaction analysis of the m parameters for metric_i.</title>
<p>Tune the parameters based on the reduced parameter set; repeat l experiment
measurements, recording the state of each parameter x_i as f_ji (1 ≤ i ≤ m, 1 ≤ j ≤ l) and the
corresponding performance measurement φ_j (1 ≤ j ≤ l). Then use the Choquet nonlinear
aggregation model, as described later, to decide the most effective parameter set,
including the interaction effects of the individual parameters.
4. We have now arrived at the effective parameter set. Tune the effective
parameter set, repeat the measurements, obtain the performance curve, and record
metric_i(δ²) and metric_i(mean) for metric_i.
5. For competing solutions, repeat the above process for each, obtain its SPI value,
and compare pairwise.</p>
<sec id="sec-5-2-1">
          <title>Prerequisite Filter and Pairwise Comparison</title>
          <p>Notice that we can set up threshold values as a prerequisite filter for the minimum
requirements: whenever the mean or deviation of metric_i fails its threshold, the
candidate solution is disqualified from further comparison because it does not satisfy the
minimum user specification.</p>
          <p>Prerequisite filter:
If { every metric_i(mean) &gt; threshold(mean) and metric_i(δ²) &lt; threshold(δ²) }
Then { Single Performance Index: SPI = Σ_{i=1}^{n} (W_i * Metric_i(mean) + W_i' * (1/δ_i²)) }
Else { SPI = 0 (minimum requirement not satisfied; not qualified for comparison) }</p>
          <p>Repeat the above procedure for any other SUT alternatives, then perform the
pairwise comparison: if SPI(system1) &gt; SPI(system2), then system1 performs better than
system2 under the user-specified requirements.</p>
          <p>[Figure: evaluation workflow for each metric_i: initial parameter development;
parameter significance analysis; design space reduction; parameter interaction analysis;
repeated experiments and measurement in the effective design space; PDF/CDF acquisition
yielding metric_i(mean) and metric_i(δ²); prerequisite filter; pairwise comparison;
conclusion.]</p>
          <p>
            In this paper, we introduce a procedure to evaluate WSN application performance
according to application scenarios; in our approach, uncertainty attributes contribute
to the final performance index. Our future research direction is how to capitalize on data
mining techniques to dig deeper into the experiment data and distil invaluable collective
information from randomness. “Large-scale random phenomena in their collective
action create strict, nonrandom regularity” [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ].
          </p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ko</surname>
            ,
            <given-names>B.J.G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
<surname>Srivastava</surname>
            ,
            <given-names>M.B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Stankovic</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Terzis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Welsh</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Wireless Sensor Network for Healthcare</article-title>
          .
          <source>Proc. IEEE</source>
          <year>2010</year>
          ,
          <volume>98</volume>
          ,
          <fpage>1947</fpage>
          -
          <lpage>1960</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>M.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Voigt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Saif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
<surname>Römer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dunkels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Langendoen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Polastre</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Uzmi</surname>
          </string-name>
          , “
          <article-title>Medium access control issues in sensor networks</article-title>
          ,
          <source>” SIGCOMM</source>
          <year>2006</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>G.</given-names>
            <surname>Barrenetxea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ingelrest</surname>
          </string-name>
          , G. Schaefer, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Vetterli</surname>
          </string-name>
          , “
<article-title>The Hitchhiker's Guide to Successful Wireless Sensor Network Deployments</article-title>
          .” in SenSys,
          <year>2008</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nanda</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          “
          <article-title>Performance Uncertainty Impact on WSNs Design Evaluation”</article-title>
          .
          <source>IEEE Conference on Control Engineering and Communication Technology (ICCECT2012)</source>
          ,
          <fpage>7</fpage>
          -
          <lpage>9</lpage>
December 2012, Shenyang, China.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. The Ruckus white paper, available at: http://www.ruckuswireless.com/whitepapers/preview/wireless-network-performance</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.</given-names>
            <surname>Haapola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martelli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
<surname>Pomalaza-Ráez</surname>
          </string-name>
          , “
          <article-title>Application-driven analytic toolbox for wsns,” in 8th International Conference on Ad-Hoc Networks and Wireless (ADHOCNOW)</article-title>
          ,
          <source>LNCS 5793</source>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>125</lpage>
          ,
          <year>September 2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>K.</given-names>
            <surname>Khalili Damghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Taghavifard</surname>
          </string-name>
          , R. Tavakkoli Moghaddam, “
          <article-title>Decision Making Under Uncertain and Risky Situations”</article-title>
          ,
          <source>Enterprise Risk Management Symposium Monograph Society of Actuaries - Schaumburg</source>
          , Illinois,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>T.</given-names>
            <surname>Saaty</surname>
          </string-name>
          , The Analytic Hierarchy Process. New York:
          <string-name>
            <surname>McGraw-Hill</surname>
          </string-name>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
<ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>B. V.</given-names>
            <surname>Gnedenko</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Kolmogorov</surname>
          </string-name>
          ,
          <article-title>Limit Distributions for Sums of Independent Random Variables</article-title>
          . Translated and annotated by K. L. Chung. Cambridge: Addison-Wesley,
          <year>1954</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nanda</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          “
          <article-title>Performance Uncertainty Impact on WSNs Design Evaluation”</article-title>
          .
          <source>IEEE Conference on Control Engineering and Communication Technology (ICCECT2012)</source>
          ,
          <fpage>7</fpage>
          -
          <lpage>9</lpage>
          December 2012. Shenyang, China
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Köksalan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wallenius</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zionts</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Multiple Criteria Decision Making: From Early History to the 21st Century</article-title>
          . Singapore: World Scientific
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>