Lecture 5: Biometry

The document outlines various statistical methods for comparing treatment means, including Fisher’s LSD, Duncan’s DMRT, and Tukey’s method, each with specific applications and assumptions. It emphasizes the importance of meeting assumptions for analysis of variance and provides remedies for variance heterogeneity, such as data transformation techniques like logarithmic, square root, and arc sine transformations. The document also details the steps for applying these methods and transformations to ensure valid comparisons between treatment means.

Comparison between treatment means

• The most commonly used test procedures for treatment mean comparisons are:
A. Fisher’s least significant difference (LSD) test
B. Duncan’s multiple range test (DMRT)
C. Tukey’s test
D. Bonferroni test
The LSD test
 The LSD test is the simplest and most commonly used method for comparing treatment means.
 It uses a single LSD value that serves as the boundary between significant and non-significant differences for any pair of treatment means.
 Two treatments are declared significantly different at a given level of significance if the difference between their means exceeds the computed LSD value; otherwise they are not significantly different.
• If the LSD test must be used, apply it only when the F-test for the treatment effects is significant and the number of treatments is not too large.

• LSD at significance level α:  LSD(α) = t(α/2, error df) × √(2·MSE / r)
  where MSE is the error mean square and r is the number of replications.

 Declare two treatment means significantly different if the absolute value of their difference exceeds the LSD:
  |Ȳi − Ȳj| > LSD
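A minimal sketch of this rule in Python is given below, assuming the ANOVA has already been run; the MSE, error degrees of freedom, number of replications, and the two means are illustrative values rather than figures from the lecture.

```python
# Minimal sketch of the LSD test; MSE, error df, r, and the two means
# are assumed illustrative values, not taken from the lecture.
from math import sqrt
from scipy import stats

mse = 47000.0     # error mean square from the ANOVA (assumed)
df_error = 18     # error degrees of freedom (assumed)
r = 4             # number of replications (assumed)
alpha = 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df_error)   # t(alpha/2, error df)
lsd = t_crit * sqrt(2 * mse / r)

mean_i, mean_j = 2678.0, 2127.0                 # two treatment means (assumed)
print(f"LSD = {lsd:.1f}")
print("significantly different" if abs(mean_i - mean_j) > lsd
      else "not significantly different")
```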
Duncan’s multiple range test (DMRT)
 The DMRT is used to evaluate all possible pairs of treatment means.
 It is used when the number of treatments is large.
 The procedure is similar to the LSD test, but it involves calculating a set of numerical boundaries that allow the difference between any two treatment means to be classified as significant or non-significant.
The steps are:

Step 1. Rank the treatment means in decreasing order.

Treatment   Mean yield (kg/ha)   Rank
T2          2678                 1
T3          2552                 2
T4          2128                 3
T1          2127                 4
T5          1796                 5
T6          1681                 6
T7          1316                 7
Step 2. Compute the standard error of the mean difference:
  SEd = √(2·MSE / r)
• Step 3. Compute the (t − 1) values of the shortest significant ranges:
  Rp = (rp × SEd) / √2
  where rp is Duncan’s tabulated significant range value for p means.
The t − 1 = 6 Rp values are then computed:

p   rp (0.05)   SEd      Rp = (rp × SEd)/√2
2   2.94        217.68   453
3   3.09        217.68   476
4   3.18        217.68   489
5   3.24        217.68   499
6   3.30        217.68   508
7   3.33        217.68   513
Step 4. Identify all treatment means that do not differ significantly from each other.
Compute the difference between the largest treatment mean and the largest Rp value (the Rp at p = t, here 513), and declare every treatment mean that is smaller than this computed difference significantly different from the largest treatment mean.
E.g. 2678 − 513 = 2165 kg/ha; all treatment means except that of T3 are less than 2165 kg/ha, hence they are significantly different from T2.
For T2 and T3: 2678 − 2552 = 126 kg/ha, which is less than R2 = 453, so they are not significantly different.
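A minimal Python sketch of Steps 3 and 4 for this worked example is shown below; the SEd (217.68) and the Duncan rp values are the ones quoted above, and the loop compares the largest mean with each of the others using the Rp for the number of ranked means spanned.

```python
# Sketch of DMRT Steps 3-4 for the worked example; SEd and the rp values
# are those quoted above (alpha = 0.05).
from math import sqrt

ranked = [("T2", 2678), ("T3", 2552), ("T4", 2128), ("T1", 2127),
          ("T5", 1796), ("T6", 1681), ("T7", 1316)]   # means in decreasing order
se_d = 217.68                                          # sqrt(2*MSE/r) from Step 2
rp = {2: 2.94, 3: 3.09, 4: 3.18, 5: 3.24, 6: 3.30, 7: 3.33}

# Step 3: shortest significant ranges Rp = rp * SEd / sqrt(2)
Rp = {p: v * se_d / sqrt(2) for p, v in rp.items()}    # ~453, 476, 489, 499, 508, 513

# Step 4: compare the largest mean with every other mean, using the Rp
# for the number of ranked means the comparison spans.
top_name, top_mean = ranked[0]
for span, (name, mean) in enumerate(ranked[1:], start=2):
    diff = top_mean - mean
    verdict = "significant" if diff > Rp[span] else "not significant"
    print(f"{top_name} vs {name}: diff = {diff}, Rp = {Rp[span]:.0f} -> {verdict}")
```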
Tukey’s method test
 Tukey’s test controls the experimentwise (family) error rate; this total error rate should not exceed α (for a balanced design it is exactly α).
 Two means (expectations) are significantly different if the absolute value of their sample mean difference exceeds
  q(α; a, df) × √(MSE / r)
 where α is the family error rate, df is the degrees of freedom for SSE, and a is the number of groups.
 q is found in tables at the end of some statistics books.
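A minimal sketch of this comparison for one pair of means is given below, assuming a balanced design and SciPy ≥ 1.7 for the studentized range distribution; the MSE, error df, replication number, number of groups, and the two means are illustrative assumptions.

```python
# Sketch of Tukey's test for one pair of means (balanced design assumed).
# MSE, df_error, r, a, and the two means are illustrative assumptions.
from math import sqrt
from scipy.stats import studentized_range

mse = 47000.0    # error mean square (assumed)
df_error = 18    # degrees of freedom for SSE (assumed)
r = 4            # replications per group (assumed)
a = 7            # number of groups (assumed)
alpha = 0.05     # family (experimentwise) error rate

q_crit = studentized_range.ppf(1 - alpha, a, df_error)  # q(alpha; a, df)
critical_diff = q_crit * sqrt(mse / r)

mean_i, mean_j = 2678.0, 2127.0                          # assumed treatment means
print(f"critical difference = {critical_diff:.1f}")
print(abs(mean_i - mean_j) > critical_diff)
```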
Transformation of data
Certain mathematical assumptions are required for the analysis of variance:
• Additive effects: treatment effects and environmental effects are additive.
• Independence of errors: experimental errors are independent.
• Homogeneity of variance: experimental errors have a common variance.
• Normal distribution: experimental errors are normally distributed.
Failure to meet one or more of these assumptions affects both the level of significance and the sensitivity of the F-test in the analysis of variance.
So any departure from these assumptions must be corrected before the analysis of variance is applied.
Remedies for handling variance heterogeneity

Methods of detecting variance heterogeneity
Step 1. Compute the variance and the mean of each treatment across replications.
Step 2. Plot a scatter diagram of the mean values against the variances (a minimal sketch of Steps 1–2 follows the list below).
Step 3. Visually examine the scatter diagram to identify the pattern of relationship, if any, between the mean and the variance:
• Homogeneous variance – normally distributed data having a common variance
• Heterogeneous variance – where the variance is functionally related to the mean
• Heterogeneous variance – where there is no functional relationship between the variance and the mean
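The sketch below illustrates Steps 1 and 2, using the larvae counts from the worked example further below; matplotlib is assumed to be available for the scatter diagram.

```python
# Sketch of Steps 1-2: per-treatment mean and variance across replications
# and a mean-vs-variance scatter diagram (larvae counts from the example below).
import numpy as np
import matplotlib.pyplot as plt

# rows = insecticides 1-9, columns = replications I-IV
counts = np.array([[9, 12, 0, 1], [4, 8, 5, 1], [6, 15, 6, 2],
                   [9, 6, 4, 5], [27, 17, 10, 10], [35, 28, 2, 15],
                   [1, 0, 0, 0], [10, 0, 2, 1], [4, 10, 15, 5]])

means = counts.mean(axis=1)               # Step 1: treatment means
variances = counts.var(axis=1, ddof=1)    # Step 1: treatment variances

plt.scatter(means, variances)             # Step 2: scatter diagram
plt.xlabel("Treatment mean")
plt.ylabel("Treatment variance")
plt.show()                                # Step 3: inspect for a mean-variance trend
```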
Data transformation
 The most appropriate remedial measure for variance heterogeneity, where the variance and the mean are functionally related.
 With this technique, the original data are converted to a new scale, resulting in a new data set that is expected to satisfy the condition of homogeneity of variance.
 With transformation, the comparative values between treatments are not altered and comparisons between them remain valid.
 The appropriate data transformation technique depends on the type of relationship between the variance and the mean.
 The most commonly used data transformation techniques in agricultural research are discussed below.
i. Logarithmic transformation
 Appropriate for data where the standard deviation is proportional to the mean or where the effects are multiplicative.
 These conditions are found in data that are whole numbers and cover a wide range of values.
 Data on the number of insects per plot or the number of egg masses per plant are typical examples.
 To transform a data set to the logarithmic scale, simply take the logarithm of each component of the data set.
 If the data set involves small values, e.g. less than 10, log (X + 1) should be used instead of log X, where X is the original data.
 To illustrate the procedure for applying the logarithmic transformation, we use data on the number of living larvae on rice plants treated with various rates of an insecticide, from an RCD experiment with four replications.
Number of living larvae recovered from insecticide treatments

Insecticide   Rep I   Rep II   Rep III   Rep IV   Treat. total   Treat. mean   (Range)
1             9       12       0         1        22             5.50          (12)
2             4       8        5         1        18             4.50          (7)
3             6       15       6         2        29             7.25          (13)
4             9       6        4         5        24             6.00          (5)
5             27      17       10        10       64             16.00         (17)
6             35      28       2         15       80             20.00         (33)
7             1       0        0         0        1              0.25          (1)
8             10      0        2         1        13             3.25          (10)
9             4       10       15        5        34             8.50          (11)
Total         105     96       44        40       285
 Step 1. Verify the functional relationship between the mean and the variance using a scatter diagram. For our example we use the range instead of the variance.
 The figure shows a linear relationship between the range and the mean (i.e. the range increases proportionally with the mean), suggesting the use of the logarithmic transformation.
 Step 2. Because some of the values in the table are less than 10, log (X + 1) is applied instead of log X.
 The log (X + 1) transformed data are shown below.
Insecticide   Rep I   Rep II   Rep III   Rep IV   Treat. total   Treat. mean (antilog)
1             1.000   1.114    0.000     0.301    2.414          4.02
2             0.699   0.954    0.778     0.301    2.732          4.82
3             0.845   1.204    0.845     0.477    3.371          6.96
4             1.000   0.845    0.699     0.778    3.322          6.77
5             1.447   1.255    1.041     1.041    4.785          15.71
6             1.556   1.462    0.477     1.204    4.699          14.96
7             0.301   0.000    0.000     0.000    0.301          1.19
8             1.041   0.000    0.477     0.301    1.819          2.85
9             0.699   1.041    1.204     0.778    3.722          8.53
Total         8.589   7.876    5.522     5.182    27.169
 Step 3. Verify the success of the logarithmic transformation in achieving the desired homogeneity of variance by applying Step 1 to the transformed data.
 The result based on the transformed data should show no apparent relationship between the range and the mean.
 Step 4. Construct the ANOVA on the transformed data in the usual manner.
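A minimal sketch of Step 2 in Python is shown below, applying the base-10 log (X + 1) transformation to the larvae counts from the table above.

```python
# Sketch of the log(X + 1) transformation (base 10) of the larvae counts.
import numpy as np

counts = np.array([[9, 12, 0, 1], [4, 8, 5, 1], [6, 15, 6, 2],
                   [9, 6, 4, 5], [27, 17, 10, 10], [35, 28, 2, 15],
                   [1, 0, 0, 0], [10, 0, 2, 1], [4, 10, 15, 5]])

transformed = np.log10(counts + 1)    # log(X + 1) because some values are < 10
print(np.round(transformed, 3))       # first row: 1.000 1.114 0.000 0.301
# The ANOVA (Step 4) is then computed on `transformed` in the usual manner.
```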
ii. Square root transformation
 Appropriate for data consisting of small whole numbers:
  Y = √X
 Typical examples are data obtained by counting rare events, such as the number of infested plants in a plot or the number of insects caught in traps.
 For these data the variance tends to be proportional to the mean.
 The square root transformation is also appropriate for percentage data where the range is between 0 and 30% or between 70 and 100%.
 If most values in the data are small (e.g. less than 10), especially with zeros present, √(X + 0.5) should be used instead of √X, where X is the original data.
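A minimal sketch of the square root transformation follows; the counts are assumed illustrative rare-event data, and √(X + 0.5) is used because zeros and small values are present.

```python
# Sketch of the square root transformation for small counts with zeros;
# the data are assumed illustrative values (e.g. infested plants per plot).
import numpy as np

rare_counts = np.array([0, 1, 3, 0, 2, 5, 1, 0])
transformed = np.sqrt(rare_counts + 0.5)   # sqrt(X + 0.5) instead of sqrt(X)
print(np.round(transformed, 3))
```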
iii. Arc sine transformation (sin⁻¹ √X)
 This is appropriate for data on proportions, data from counts, and data expressed as decimal fractions or percentages.
 The value of 0% should be substituted by 1/(4n) and the value of 100% by 100 − 1/(4n), where n is the number of units on which the percentage data were based (the denominator used in computing the percentage).
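A minimal sketch of the arc sine transformation is given below; the percentages and n are assumed illustrative values, and 0% and 100% are replaced as described above before transforming.

```python
# Sketch of the arc sine transformation of percentage data; the percentages
# and n (units per percentage) are assumed illustrative values.
import numpy as np

n = 20                                                 # units each percentage is based on (assumed)
pct = np.array([0.0, 12.5, 47.0, 86.0, 100.0])         # assumed percentage data

pct = np.where(pct == 0.0, 1 / (4 * n), pct)           # replace 0% by 1/(4n)
pct = np.where(pct == 100.0, 100 - 1 / (4 * n), pct)   # replace 100% by 100 - 1/(4n)

angles = np.degrees(np.arcsin(np.sqrt(pct / 100)))     # sin^-1 of sqrt(proportion), in degrees
print(np.round(angles, 2))
```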
