<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hussaini Abubakar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Haruna Danyaya Abubakar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aminu Salisu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Statistics, Hussaini Adamu Federal Polytechnic, Kazaure</institution>
          ,
          <addr-line>Jigawa</addr-line>
          ,
          <country country="NG">Nigeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Science Laboratory Technology, Hussaini Adamu Federal Polytechnic, Kazaure</institution>
          ,
          <addr-line>Jigawa</addr-line>
          ,
          <country country="NG">Nigeria</country>
        </aff>
      </contrib-group>
      <fpage>49</fpage>
      <lpage>53</lpage>
      <abstract>
        <p>Agriculturalists seek general explanations for the variation in agricultural yields in response to a treatment. An increasingly popular solution is the powerful statistical technique of one-way analysis of variance (ANOVA), which analyzes the variability in data in order to infer inequality among population means. After exploring the concept of the technique, the response of the chlorophyll content of the leaves of 160 maize seedlings to treatment with nitrogen-phosphorus-potassium (NPK) fertilizer at 0 g, 5 g, 10 g and 20 g (the control treatment, treatment 1, treatment 2 and treatment 3, respectively) was examined. There was a significant effect of the amount of NPK on the chlorophyll content of maize seedlings at P &lt; 0.05 [F(3, 141) = 51.190, P = 0.000]. Post hoc comparison using the Tukey HSD test indicated that the mean score for treatment 1 (M = 18.89, SD = 11.58) was significantly different from treatment 3 (M = 1.61, SD = 7.01) and the control treatment (M = 4.59, SD = 5.49), and the mean score for treatment 2 (M = 21.57, SD = 9.80) was likewise significantly different from treatment 3 and the control treatment. However, the results indicated a non-significant difference between treatments 1 and 2, and between treatment 3 and the control treatment, respectively.</p>
      </abstract>
      <kwd-group>
        <kwd>one-way ANOVA test</kwd>
        <kwd>multiple comparison tests</kwd>
        <kwd>NPK</kwd>
        <kwd>chlorophyll</kwd>
        <kwd>SPSS</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Altogether the results revealed that the amount of NPK really does have an effect on the chlorophyll content of maize seedlings.</p>
      <p>The data were analyzed using the computer program SPSS.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The concept of analysis of variance (ANOVA) was
established by the British geneticist and statistician Sir
R. A. Fisher in 1918 and formally published in his book
“Statistical Methods for Research Workers” in 1925. The
technique was developed to provide statistical procedures for
tests of significance for several group means. ANOVA can be
viewed conceptually as an extension of the two-independent-samples
t-test to multiple samples, but it results in a lower Type I
error rate and is therefore suited to a wide range of practical
problems. Formerly, this idea was used mainly for agricultural
experiments, but it is presently among the most commonly used
research methods in the business, economic, medical and social
science disciplines.</p>
      <sec id="sec-2-1">
        <title>ANOVA assumptions</title>
        <p>Like many other parametric statistical techniques, ANOVA
is based on the following statistical assumptions:</p>
        <p>a) Homoscedasticity (homogeneity) of variance.</p>
        <p>b) Normality of data.</p>
        <p>c) Independence of observations.</p>
        <p>2. Basic concepts of the one-way ANOVA test.
A one-way analysis of variance is used when the data are divided
into groups according to only one factor. Assume that
$x_{11}, x_{12}, \ldots, x_{1n_1}$ are a sample from population 1,
$x_{21}, x_{22}, \ldots, x_{2n_2}$ are a sample from population 2,
and so on up to $x_{k1}, x_{k2}, \ldots, x_{kn_k}$, a sample from
population k. Let $x_{ij}$ denote the data from the ith group
(level) and jth observation.</p>
        <p>We have values of independent normal random variables
$X_{ij}$, $i = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, n_i$, with
mean $\mu_i$ and constant standard deviation $\sigma$:
$X_{ij} \sim N(\mu_i, \sigma^2)$. Alternatively, each
$X_{ij} = \mu_i + \varepsilon_{ij}$, where the $\varepsilon_{ij}$
are normally distributed independent random errors,
$\varepsilon_{ij} \sim N(0, \sigma^2)$. Let
$N = n_1 + n_2 + \cdots + n_k$ be the total number of observations
(the total sample size across all groups), where $n_i$ is the
sample size of the ith group.</p>
        <p>The parameters of this model are the population means
$\mu_1, \mu_2, \ldots, \mu_k$ and the common standard deviation
$\sigma$.</p>
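        <p>The model can be illustrated with a small simulation; the means, standard deviation, and group size below are invented for this sketch and are not values taken from the paper.</p>

```python
import random

random.seed(42)  # reproducible illustration

# Simulate x_ij = mu_i + eps_ij with eps_ij ~ N(0, sigma^2).
# mu, sigma, and n_i are made-up illustrative values.
mu = [10.0, 12.0, 15.0]   # population means mu_1, mu_2, mu_3 (k = 3)
sigma = 2.0               # common standard deviation
n_i = 30                  # sample size per group

groups = [[m + random.gauss(0.0, sigma) for _ in range(n_i)] for m in mu]
N = sum(len(g) for g in groups)  # total sample size across all groups
```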
        <p>Using many separate two-sample t-tests to compare many
pairs of means is a bad idea, because we do not obtain a p-value
or a confidence level for the complete set of comparisons taken
together.</p>
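        <p>To see why, consider the familywise error rate across all pairwise tests; the calculation below uses the standard independence approximation, with the paper's four treatment groups as the example.</p>

```python
# With k groups there are m = k*(k-1)/2 pairwise t-tests. If each is run
# at level alpha independently, the chance of at least one false positive is:
def familywise_error(alpha: float, m: int) -> float:
    return 1.0 - (1.0 - alpha) ** m

k = 4                    # e.g. the four NPK treatment levels
m = k * (k - 1) // 2     # 6 pairwise comparisons
rate = familywise_error(0.05, m)  # about 0.265, far above the nominal 0.05
```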
        <p>We will be interested in testing the null hypothesis</p>
        <p>$H_0: \mu_1 = \mu_2 = \cdots = \mu_k$</p>
        <p>against the alternative hypothesis</p>
        <p>$H_1: \exists\, 1 \le i, j \le k: \mu_i \ne \mu_j$
(there is at least one pair with unequal means).</p>
        <p>Let $\bar{x}_i$ represent the mean of sample i
(i = 1, 2, ..., k):
$\bar{x}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} x_{ij}$, (1)
let $\bar{x}$ represent the grand mean, the mean of all the data
points:
$\bar{x} = \frac{1}{N} \sum_{i=1}^{k} \sum_{j=1}^{n_i} x_{ij}$, (2)
and let $s_i^2$ represent the sample variance of the ith group:
$s_i^2 = \frac{1}{n_i - 1} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2$. (3)
The pooled sample variance
$s^2 = \frac{\sum_{i=1}^{k} (n_i - 1) s_i^2}{N - k}$
estimates the variance $\sigma^2$ common to all k populations.</p>
        <p>ANOVA is centered on the idea of comparing the variation
between groups (levels) with the variation within samples by
analyzing their variances.</p>
        <p>Define the total sum of squares SST, the sum of squares
for error (within groups) SSE, and the sum of squares for
treatments (between groups) SSC:</p>
        <p>$SST = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x})^2$,
$SSE = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2$,
$SSC = \sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2$,
with $SST = SSC + SSE$. The corresponding mean squares are
$MSC = \frac{SSC}{k-1}$ and $MSE = \frac{SSE}{N-k}$.</p>
        <p>The one-way ANOVA, assuming the test conditions are
satisfied, uses the following test statistic:
$F = \frac{MSC}{MSE}$.
Under $H_0$ this statistic has Fisher's distribution
$F(k-1, N-k)$. If the test criterion
$F &gt; F_{1-\alpha,\, k-1,\, N-k}$
holds, where $F_{1-\alpha,\, k-1,\, N-k}$ is the
$(1-\alpha)$-quantile of the F distribution with $k-1$ and $N-k$
degrees of freedom, then hypothesis $H_0$ is rejected at
significance level $\alpha$. The results of the computations that
lead to the F statistic are presented in an ANOVA table, the form
of which is shown in Table 1.</p>
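        <p>As a sketch of the computation (not the paper's SPSS workflow), the sums of squares and the F statistic can be computed directly from the definitions above; the sample data are made up for illustration.</p>

```python
def one_way_anova(groups):
    """Return (SSC, SSE, F) for a list of samples, following the
    definitions above: SSC between groups, SSE within groups,
    F = (SSC / (k - 1)) / (SSE / (N - k))."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ssc = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    sse = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return ssc, sse, (ssc / (k - 1)) / (sse / (N - k))

# Small worked example (made-up numbers):
ssc, sse, f = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
# SSC = 14, SSE = 6, F = 7.0, and SST = SSC + SSE = 20
```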
        <p>The p-value is the probability, computed assuming the
null hypothesis holds, of obtaining a value of the test statistic
at least as extreme as the one observed. If $P &lt; \alpha$, where
$\alpha$ is the chosen significance level, the null hypothesis is
rejected.</p>
        <p>3. Post hoc comparison procedures.</p>
        <p>One possible approach to the multiple comparison problem
is to make each comparison independently using a suitable
statistical procedure. For example, a statistical hypothesis test
could be used to compare each pair of means $\mu_i$ and $\mu_j$,
$i, j = 1, 2, \ldots, k$; $i \ne j$, where the null and
alternative hypotheses are of the form
$H_0: \mu_i = \mu_j$, $H_1: \mu_i \ne \mu_j$. (17)</p>
        <p>An alternative way to test for a difference between
$\mu_i$ and $\mu_j$ is to calculate a confidence interval for
$\mu_i - \mu_j$. A confidence interval is formed using a point
estimate, a margin of error, and the formula
(point estimate) ± (margin of error). (18)</p>
        <p>The point estimate is the best guess for the value of
$\mu_i - \mu_j$ based on the sample data. The margin of error
reflects the accuracy of the guess, based on the variability in
the data. It also depends on a confidence coefficient, often
denoted by $1 - \alpha$. The interval is calculated by subtracting
the margin of error from the point estimate to get the lower limit
and adding the margin of error to the point estimate to get the
upper limit. If the confidence interval for $\mu_i - \mu_j$ does
not contain zero (thereby ruling out $\mu_i = \mu_j$), then the
null hypothesis is rejected and $\mu_i$ and $\mu_j$ are declared
different at significance level $\alpha$.</p>
        <p>The multiple comparison tests for population means have
the same assumptions as the F-test.</p>
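        <p>A minimal sketch of the interval computation, assuming the pooled mean square error and the critical value $t_{1-\alpha/2,\,N-k}$ are already available (e.g., from tables); the numbers are illustrative.</p>

```python
import math

def pairwise_ci(xbar_i, xbar_j, n_i, n_j, mse, t_crit):
    """Confidence interval for mu_i - mu_j as
    (point estimate) +/- (margin of error).
    t_crit is t_{1-alpha/2, N-k}, taken from tables or a stats library."""
    margin = t_crit * math.sqrt(mse * (1.0 / n_i + 1.0 / n_j))
    diff = xbar_i - xbar_j
    return diff - margin, diff + margin

lo, hi = pairwise_ci(10.0, 7.0, 5, 5, 4.0, 2.0)  # illustrative numbers
# Zero lies outside the interval iff both limits share a sign:
significant = lo * hi > 0.0
```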
        <p>There are many different multiple comparison procedures
that deal with these problems. Some of these procedures are as
follows: Fisher's method, Tukey's method, Scheffé's method,
Bonferroni's adjustment method, and the Dunn-Šidák method. Some
require equal sample sizes, while some do not. The choice of a
multiple comparison procedure to use with an ANOVA depends on the
type of experimental design used and the comparisons of interest
to the analyst.</p>
        <p>The Fisher (LSD) method essentially does not correct the
Type I error rate for multiple comparisons and is generally not
recommended relative to the other options.</p>
        <p>The Tukey (HSD) method controls Type I error very well
and is generally considered an acceptable technique. There is also
a modification of the test for situations where the number of
subjects is unequal across cells, called the Tukey-Kramer
test.</p>
        <p>The Scheffé test can be used for the family of all
pairwise comparisons but will always give longer confidence
intervals than the other tests. Scheffé's procedure is perhaps
the most popular of the post hoc procedures, the most flexible,
and the most conservative.</p>
        <p>There are several different ways to control the
experimentwise error rate. One of the easiest is to use the
Bonferroni correction.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Bonferroni correction.</title>
        <p>If we plan on making m comparisons or conducting m
significance tests, the Bonferroni correction is to simply use
$\alpha / m$ as our significance level rather than $\alpha$. This
simple correction guarantees that our experimentwise error rate
will be no larger than $\alpha$. Notice that these results are
more conservative than with no adjustment.</p>
        <p>The Bonferroni correction is probably the most commonly
used post hoc test, because it is highly flexible, very simple to
compute, and can be used with any type of statistical test (e.g.,
correlations), not just post hoc tests with ANOVA.</p>
        <p>The Šidák method has a bit more power than the
Bonferroni method, so from a purely conceptual point of view the
Šidák method is always preferred.</p>
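        <p>A quick comparison of the two per-comparison thresholds for the paper's four treatment groups (m = 6 pairwise tests):</p>

```python
# Per-comparison significance levels that keep the experimentwise
# error rate at alpha across m tests.
alpha = 0.05
m = 6  # pairwise comparisons among k = 4 groups

bonferroni = alpha / m                    # 0.00833..., simple but conservative
sidak = 1.0 - (1.0 - alpha) ** (1.0 / m)  # 0.00851..., a bit more power

# sidak exceeds bonferroni, so the Sidak threshold is slightly less
# conservative, i.e. it rejects slightly more often.
```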
        <p>The confidence interval for   −   is calculated
using the formula:
  −   ±  1− /2 ,  −  . √ 2 (</p>
        <p>There are many tests of the assumption of homogeneity of
variances. Commonly used tests are the Bartlett (1937), Hartley
(1940, 1950), Cochran (1941), Levene (1960), and Brown and
Forsythe (1974) tests. The Bartlett, Hartley and Cochran tests are
technically tests of homogeneity. The Levene and Brown-Forsythe
methods actually transform the data and then test for equality of
means. Note that Cochran's and Hartley's tests assume that there
are equal numbers of participants in each group.</p>
        <p>The tests of Bartlett, Cochran, Hartley and Levene may
be applied for a number of samples k &gt; 2. In such situations
the power of these tests turns out to differ. When the assumption
of normality holds for k &gt; 2, these tests may be ranked by
decreasing power as follows: Cochran, Bartlett, Hartley, Levene.
This preference order also holds when the normality assumption is
violated. An exception concerns situations where the samples
belong to distributions with heavier tails than the normal law.
For example, when the samples belong to the Laplace distribution,
the Levene test turns out to be slightly more powerful than the
three others.</p>
        <p>Bartlett's test has the following test statistic:
$B = \frac{1}{C}\left[(N-k)\ln s_p^2 - \sum_{i=1}^{k}(n_i-1)\ln s_i^2\right]$, (26)
where the constant
$C = 1 + \frac{1}{3(k-1)}\left(\sum_{i=1}^{k}\frac{1}{n_i-1} - \frac{1}{N-k}\right)$,
$s_p^2$ is the pooled sample variance, and the meaning of all
other symbols is evident (see section 2). The hypothesis $H_0$ is
rejected at significance level $\alpha$ when
$B &gt; \chi^2_{1-\alpha,\, k-1}$,
where $\chi^2_{1-\alpha,\, k-1}$ is the critical value of the
chi-square distribution with $k-1$ degrees of freedom.</p>
        <p>Cochran's test is one of the best methods for detecting
cases where the variance of one of the groups is much larger than
that of the other groups. It uses the test statistic
$C = \max_i s_i^2 / \sum_{i=1}^{k} s_i^2$.
The hypothesis $H_0$ is rejected at significance level $\alpha$
when $C &gt; C_{\alpha,\, k,\, n-1}$, where the critical value
$C_{\alpha,\, k,\, n-1}$ is found in special statistical
tables.</p>
        <p>Hartley's test uses the test statistic
$H = \max_i s_i^2 / \min_i s_i^2$.
The hypothesis $H_0$ is rejected at significance level $\alpha$
when $H &gt; H_{\alpha,\, k,\, n-1}$, where the critical value
$H_{\alpha,\, k,\, n-1}$ is found in special statistical
tables.</p>
        <p>Originally, Levene's test was defined as the one-way
analysis of variance on $z_{ij} = |x_{ij} - \bar{x}_i|$, the
absolute residuals, where k is the number of groups and $n_i$ is
the sample size of the ith group. The test statistic has Fisher's
distribution $F(k-1, N-k)$ and is given by:
$F = \frac{(N-k)\sum_{i=1}^{k} n_i(\bar{z}_i - \bar{z})^2}{(k-1)\sum_{i=1}^{k}\sum_{j=1}^{n_i}(z_{ij} - \bar{z}_i)^2}$.
Since the absolute residuals do not meet any of these assumptions,
Levene's test is an approximate test of homoscedasticity.</p>
        <p>Brown and Forsythe subsequently proposed using the
absolute deviations from the median $\tilde{x}_i$ of the ith
group, so $z_{ij} = |x_{ij} - \tilde{x}_i|$.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Methodology</title>
      <p>The study was undertaken in Kazaure, Jigawa State,
Nigeria. The population for this study was one hundred and sixty
(160) maize seedlings, grown and studied over a three-week period.
Information was collected from the target population (maize
seedlings) with the aid of a chlorophyll meter (SPAD 502 Plus)
used to measure the chlorophyll content of the leaves of each
seedling. Data analysis was carried out with the aid of
inferential statistics (one-way ANOVA). The independent variable
for the study was the amount of NPK, measured in grams. The
significance test for the between-treatment effect was the
researcher's statistical evidence of the effect of the treatment
on the chlorophyll content of the leaves of the maize
seedlings.</p>
      <p>4.1. Test for normality and homogeneity of the data.
To begin the ANOVA test, one must verify the validity of the
normality and homogeneity assumptions for the data under study.
These tests were based on the Kolmogorov-Smirnov and Levene
statistics, respectively. The normality and homogeneity tests were
conducted and found tenable at the 0.05 level (P &gt; 0.05 for all
four treatment levels, and P &lt; 0.05, respectively). The results
are presented in Tables 2 and 3 below.</p>
      <p>4.2. Test of significance for the treatment effect.
After the tests for the assumptions of normality and equality of
variance (homoscedasticity), the next step is to determine the
significance of the effect of the independent variable, in this
case the amount of NPK.</p>
      <p>The significance of the treatment is based on the F
distribution. The test revealed that the probability of the
Fisher distribution F(3, 141) was 0.000, less than the
significance level of 0.05 (i.e., P &lt; 0.05). The null
hypothesis that there was no significant difference between the
mean chlorophyll contents was rejected, as presented in
Table 2.</p>
      <p>However, the ANOVA F test revealed a significant effect
of the amount of NPK on the chlorophyll content of maize
seedlings at P &lt; 0.05 [F(3, 141) = 51.190, P = 0.000], while
the Tukey HSD test result indicated a non-significant difference
between treatments 1 and 2, and between treatment 3 and the
control treatment, respectively. Altogether, the results revealed
that the amount of NPK really does have an effect on the
chlorophyll content of maize seedlings.</p>
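      <p>The paper's decision can be restated as a simple rule. The 5% critical value of roughly 2.67 for F(3, 141) used below is an approximation read from standard F tables, not a value reported in the paper.</p>

```python
# Decision rule: reject H0 when the F statistic exceeds the critical value.
# F(3, 141) = 51.190 is the paper's reported statistic; 2.67 is an assumed
# approximate 5% critical value from standard F tables.
def reject_h0(f_stat: float, f_critical: float) -> bool:
    return f_stat > f_critical

decision = reject_h0(51.190, 2.67)  # True: the NPK effect is significant
```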
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aczel</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          , Complete Business Statistics, (Irwin,
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Forsythe</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , “
          <article-title>Robust tests for the equality of variances</article-title>
          ,
          <source>” Journal of the American Statistical Association</source>
          ,
          <fpage>365</fpage>
          -
          <lpage>367</lpage>
          (
          <year>1974</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Montgomery</surname>
            ,
            <given-names>D.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Runger</surname>
            ,
            <given-names>G.C.</given-names>
          </string-name>
          , Applied Statistics and Probability for Engineers, (John Wiley &amp; Sons,
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ostertagova</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , Applied Statistics (in Slovak), Elfa, Košice,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Parra-Frutos</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , “
          <article-title>The behaviour of the modified Levene's test when data are not normally distributed</article-title>
          ,
          <source>” Computational Statistics</source>
          , Springer,
          <fpage>671</fpage>
          -
          <lpage>693</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Rafter</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abell</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braselton</surname>
            .
            <given-names>J.P.</given-names>
          </string-name>
          , “Multiple Comparison Methods for Means,
          <source>” SIAM Review</source>
          ,
          <volume>44</volume>
          (
          <issue>2</issue>
          ).
          <fpage>259</fpage>
          -
          <lpage>278</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Rykov</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Balakrishnan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nikulin</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <source>Mathematical and Statistical Models and Methods in Reliability</source>
          , Springer, (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Stephens</surname>
            ,
            <given-names>L.J.</given-names>
          </string-name>
          , Advanced Statistics Demystified, (McGraw-Hill,
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Aylor</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Business Statistics, www.palgrave.com.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>