Chi-square

Published on January 2017

Probability and Hypothesis Testing

UNIT 17

CHI-SQUARE TEST

STRUCTURE
17.0  Objectives
17.1  Introduction
17.2  Chi-Square Distribution
17.3  Chi-Square Test for Independence of Attributes
17.4  Chi-Square Test for Goodness of Fit
17.5  Conditions for Applying Chi-Square Test
17.6  Cells Pooling
17.7  Yates Correction
17.8  Limitations of Chi-Square Test
17.9  Let Us Sum Up
17.10 Key Words
17.11 Answers to Self Assessment Exercises
17.12 Terminal Questions/Exercises
17.13 Further Reading
Appendix Tables

17.0 OBJECTIVES
After studying this unit, you should be able to:
• explain and interpret interaction among attributes,
• use the chi-square distribution to see if two classifications of the same data are independent of each other,
• use the chi-square statistic in developing and conducting tests of goodness-of-fit, and
• analyse the independence of attributes by using the chi-square test.

17.1 INTRODUCTION
In the previous two units, you have studied the procedure of testing hypotheses using tests such as the Z-test and t-test. In one-sample tests you learned how to determine whether a sample mean or proportion was significantly different from the respective population mean or proportion. In practice, however, your research may not be confined to testing only one mean or proportion of a population. As a researcher you may be interested in dealing with more than two populations. For example, you may be interested in knowing the differences in consumer preferences for a new product among people in the north, the south, and the north-east of India. In such situations the tests you have learned in the previous units do not apply. Instead, you have to use the chi-square test.
Chi-square tests enable us to test whether more than two population proportions are equal. Also, if we classify a consumer population into several categories (say high/medium/low income groups and strongly prefer/moderately prefer/indifferent/do not prefer a product) with respect to two attributes (say consumer income and consumer product preference), we can then use the chi-square test to test whether the two attributes are independent of each other. In this unit you will learn the chi-square test, its applications and the conditions under which the chi-square test is applicable.
17.2 CHI-SQUARE DISTRIBUTION
The chi-square distribution is a probability distribution. Under appropriate conditions the chi-square distribution can be used as the sampling distribution of the chi-square statistic. You will learn about these conditions in section 17.5 of this unit.
The chi-square distribution is characterised by its only parameter – the number of degrees of freedom. The meaning of degrees of freedom is the same as the one you have used in the Student's t-distribution. Figure 17.1 shows three different chi-square distributions for three different degrees of freedom.
[Figure: three chi-square density curves for df = 2, 3 and 4, plotted as probability against χ² from 0 to 16.]

Figure 17.1. Chi-Square Sampling Distributions for df = 2, 3 and 4

It is to be noted that when the degrees of freedom are very small, the chi-square distribution is heavily skewed to the right. As the number of degrees of freedom increases, the curve rapidly approaches a symmetric distribution. You may be aware that when a distribution is symmetric, it can be approximated by the normal distribution. Therefore, when the degrees of freedom increase sufficiently, the chi-square distribution approximates the normal distribution. This is illustrated in Figure 17.2.

[Figure: chi-square density curves for df = 2, 4, 10 and 20, plotted as probability against χ² from 0 to 40; the curves become more symmetric as df increases.]

Figure 17.2. Chi-Square Sampling Distributions for df = 2, 4, 10, and 20

Like the Student's t-distribution, there is a separate chi-square distribution for each number of degrees of freedom. Appendix Table-4 gives the most commonly used tail areas for tests of hypothesis using the chi-square distribution. We will explain how to use this table to test hypotheses when we deal with examples in the subsequent sections of this unit.


17.3 CHI-SQUARE TEST FOR INDEPENDENCE OF
ATTRIBUTES
Many times researchers may like to know whether the differences they observe among several sample proportions are significant or only due to chance. Suppose a sales manager wants to know the preferences of consumers located in different geographic regions of a country for a particular brand of a product. If the manager finds that the difference in product preference among people located in different regions is significant, he/she may like to change the brand name according to the consumer preferences. But if the difference is not significant, the manager may conclude that the difference, if any, is only due to chance and may decide to sell the product with the same name. In effect, we are trying to determine whether the two attributes (geographical region and the brand name) are independent or dependent. It should be noted that the chi-square test only tells us whether two principles of classification are significantly related or not; it is not a measure of the degree or form of the relationship. We will discuss the procedure of testing the independence of attributes with illustrations. Study them carefully to understand the concept of the χ² test.

Illustration 1
Suppose in our example of consumer preference explained above, we divide India into 6 geographical regions (south, north, east, west, central and north-east). We also have two brands of a product, brand A and brand B. The survey results can be classified according to the region and brand preference as shown in the following table.

                 Consumer preference
Region        Brand A    Brand B    Total
South            64         16        80
North            24          6        30
East             23          7        30
West             56         44       100
Central          12         18        30
North-east       12         18        30
Total           191        109       300

In the above table the attribute on consumer preference is represented by a
column for each brand of the product. Similarly, the attribute of region is
represented by a row for each region. The value in each cell represents the
responses of the consumers located in a particular region and their preference
for a particular brand. These cell numbers are referred to as observed (actual)
frequencies. The arrangement of data according to the attributes in cells is
called a contingency table. We describe the dimensions of a contingency table
by first stating the number of rows and then the number of columns. The table
stated above showing geographical region in rows (6) and brand preference in
columns (2) is a 6 × 2 contingency table.
In the 6 × 2 contingency table stated above (the example of brand preference), each cell value represents a frequency of consumers classified as having the corresponding attributes. We also stated that these cell values are referred to as observed frequencies. Using this data we have to determine whether or not
the consumer geographical location (region) matters for brand preference. Here
the null hypothesis (H0) is that the brand preference is not related to the
geographical region. In other words, the null hypothesis is that the two
attributes, namely, brand preference and geographical location of the consumer
are independent. As a basis of comparison, we use the sample results that would be obtained on average if the null hypothesis of independence were true. These hypothetical data are referred to as the expected frequencies.


We use the following formula for calculation of expected frequencies (E):

E = (Row total × Column total) / Grand total

For example, the expected frequency for the cell in row-1 and column-1 of the brand preference 6 × 2 contingency table referred to earlier is:

E = (80 × 191) / 300 = 15280 / 300 = 50.93

Accordingly, the following table gives the calculated expected frequencies for the rest of the cells of the 6 × 2 contingency table.

Calculation of the Expected Frequencies

                       Consumer Preference
Region        Brand A                  Brand B                  Total
South         (80×191)/300 = 50.93     (80×109)/300 = 29.07       80
North         (30×191)/300 = 19.10     (30×109)/300 = 10.90       30
East          (30×191)/300 = 19.10     (30×109)/300 = 10.90       30
West          (100×191)/300 = 63.67    (100×109)/300 = 36.33     100
Central       (30×191)/300 = 19.10     (30×109)/300 = 10.90       30
North-east    (30×191)/300 = 19.10     (30×109)/300 = 10.90       30
Total         191                      109                       300

We use the following formula for calculating the chi-square value:

χ² = Σ (Oi – Ei)² / Ei

Where, χ² = chi-square; Oi = observed frequency; Ei = expected frequency; and Σ = sum of.
To ascertain the value of chi-square, the following steps are followed.
1) Subtract Ei from Oi for each of the 12 cells and square each of these differences, (Oi – Ei)².
2) Divide each squared difference by Ei and obtain the total, i.e., Σ (Oi – Ei)²/Ei.
This gives the value of chi-square, which may range from zero to infinity. Thus, the value of χ² is never negative.


Now we rearrange the data given in the above two tables for comparing the
observed and expected frequencies. The rearranged observed frequencies,
expected frequencies and the calculated χ2 value are given in the following
Table.
(Row, Column)   Observed          Expected          (Oi–Ei)    (Oi–Ei)²    (Oi–Ei)²/Ei
                frequencies (Oi)  frequencies (Ei)
(1,1)                64               50.93           13.07      170.74        3.35
(2,1)                24               19.10            4.90       24.01        1.26
(3,1)                23               19.10            3.90       15.21        0.80
(4,1)                56               63.67           –7.67       58.78        0.92
(5,1)                12               19.10           –7.10       50.41        2.64
(6,1)                12               19.10           –7.10       50.41        2.64
(1,2)                16               29.07          –13.07      170.74        5.87
(2,2)                 6               10.90           –4.90       24.01        2.20
(3,2)                 7               10.90           –3.90       15.21        1.40
(4,2)                44               36.33            7.67       58.78        1.62
(5,2)                18               10.90            7.10       50.41        4.62
(6,2)                18               10.90            7.10       50.41        4.62
Total               300              300                                  χ² = 31.94

With an r × c (i.e., r rows and c columns) contingency table, the degrees of freedom are found by (r–1) × (c–1). In our example, we have a 6 × 2 contingency table. Therefore, we have (6–1) × (2–1) = 5 × 1 = 5 degrees of freedom. Suppose we take 0.05 as the significance level (α). Then at 5 degrees of freedom and α = 0.05 significance level the table value (from Appendix Table-4) is 11.071. Since the calculated χ² value (31.94) is greater than the table value (11.071), we reject the null hypothesis and conclude that the brand preference is not independent of the geographical location of the customer. Therefore, the sales manager may consider changing the brand name across the regions.
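The arithmetic of Illustration 1 can be sketched in Python using only the standard library. Keeping the expected frequencies unrounded gives χ² ≈ 31.95, against the 31.94 obtained above from two-decimal rounding; the critical value 11.071 is the one quoted in the text.

```python
# Chi-square test of independence for the 6 x 2 brand-preference table
# in Illustration 1. Standard library only; 11.071 is the df = 5,
# alpha = 0.05 critical value quoted in the text.

observed = [
    [64, 16],   # South
    [24, 6],    # North
    [23, 7],    # East
    [56, 44],   # West
    [12, 18],   # Central
    [12, 18],   # North-east
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected frequency for each cell: (row total x column total) / grand total
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

chi_sq = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed))
    for j in range(len(observed[0]))
)

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (r-1)(c-1) = 5
print(round(chi_sq, 2), df)   # 31.95 5 (unrounded expected frequencies)
print(chi_sq > 11.071)        # True -> reject H0 at the 0.05 level
```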

Illustration 2
A TV channel programme manager wants to know whether there are any significant differences between male and female viewers in the type of programmes they watch. A survey conducted for the purpose gives the following results.

                          Viewers' Sex
Type of TV programme    Male    Female    Total
News                     30       10        40
Serials                  20       40        60
Total                    50       50       100

Calculate the χ² statistic and determine whether the type of TV programme is independent of the viewers' sex. Take the 0.10 significance level.

Solution: In this example, the null and alternate hypotheses are:
H0: The viewers' sex is independent of the type of TV programme (there is no association between sex and programme type).
H1: The viewers' sex is not independent of the type of TV programme.
We are given the observed frequencies in the problem. The expected frequencies are calculated in the same way as we have explained in Illustration 1. The following table gives the calculated expected frequencies.
Type of TV                    Viewers' Sex
programme         Male                  Female                Total
News              (40×50)/100 = 20      (40×50)/100 = 20       40
Serials           (60×50)/100 = 30      (60×50)/100 = 30       60
Total             50                    50                    100

Now we rearrange the data on observed and expected frequencies and
calculate the χ2 value. The following table gives the calculated χ2 value.
(Row, Column)   Observed          Expected          (Oi–Ei)    (Oi–Ei)²    (Oi–Ei)²/Ei
                frequencies (Oi)  frequencies (Ei)
(1,1)                30               20               10         100          5.00
(2,1)                20               30              –10         100          3.33
(1,2)                10               20              –10         100          5.00
(2,2)                40               30               10         100          3.33
Total               100              100                                  χ² = 16.66

Since we have a 2 × 2 contingency table, the degrees of freedom will be (r–1) × (c–1) = (2–1) × (2–1) = 1 × 1 = 1. At 1 degree of freedom and the 0.10 significance level the table value (from Appendix Table-4) is 2.706. Since the calculated χ² value (16.66) is greater than the table value of χ² (2.706), we reject the null hypothesis and conclude that the type of TV programme is dependent on the viewers' sex. Note that whenever the calculated value of χ² is greater than the table value of χ², the difference between theory and observation is significant.
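The 2 × 2 case in Illustration 2 follows the same recipe, so it is worth packaging as a small reusable function (standard library only; 2.706 is the critical value quoted in the text). Exact arithmetic gives χ² = 50/3 ≈ 16.67; the 16.66 above comes from truncating each term to two decimals.

```python
# Chi-square independence test for the 2 x 2 TV-programme table in
# Illustration 2. Standard library only; 2.706 is the df = 1,
# alpha = 0.10 critical value quoted in the text.

def chi_square_independence(table):
    """Return (chi-square statistic, degrees of freedom) for a contingency table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = row_tot[i] * col_tot[j] / n   # expected frequency
            stat += (o - e) ** 2 / e
    return stat, (len(table) - 1) * (len(table[0]) - 1)

observed = [[30, 10],   # News:    male, female
            [20, 40]]   # Serials: male, female

chi_sq, df = chi_square_independence(observed)
print(round(chi_sq, 2), df)    # 16.67 1
print(chi_sq > 2.706)          # True -> reject H0 at the 0.10 level
```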


Self Assessment Exercise A
1) The following are independent testing situations, with calculated chi-square values and significance levels. (i) State the null hypothesis, (ii) determine the number of degrees of freedom, (iii) find the corresponding table value, and (iv) state whether you accept or reject the null hypothesis.
   a) Type of the car (small, family, luxury) versus attitude by sex (preferred, not preferred). χ² = 10.25 and α = 0.05.
   b) Income distribution per month (below Rs 10000, Rs 10000-20000, Rs 20000-30000, Rs 30000 and above) versus preference for type of house by number of bedrooms (1, 2, 3, 4 and above). χ² = 28.50 and α = 0.01.
   c) Attitude towards going to a movie or for shopping versus sex (male, female). χ² = 8.50 and α = 0.01.
   d) Educational level (illiterate, literate, high school, graduate) versus political affiliation (CPI, Congress, BJP, BSP). χ² = 12.65 and α = 0.10.
   ...............................................................................................................
   ...............................................................................................................

2) The following are the numbers of rows and columns of contingency tables. Determine the number of degrees of freedom that the chi-square will have.
   a) 6 rows, 6 columns        c) 3 rows, 5 columns
   b) 7 rows, 2 columns        d) 4 rows, 8 columns
   ...............................................................................................................
3) A company has introduced a new brand product. The marketing manager wants to know whether the preference for the brand is distributed independently of the consumer's education level. The survey of a sample of 400 consumers gave the following results.

                            Illiterates   Literates   High School   Graduate   Total
   Bought new brand              50           55           45           60       210
   Did not buy new brand         50           45           55           40       190
   Total                        100          100          100          100       400

   a) Calculate the expected frequencies and the chi-square value.
   b) State the null hypothesis.
   c) State whether you accept or reject the null hypothesis at α = 0.05.
   ...............................................................................................................
   ...............................................................................................................

17.4 CHI-SQUARE TEST FOR GOODNESS OF FIT
In Unit 14, you have studied some probability distributions such as the binomial, Poisson and normal distributions. When we consider sample data from a population, we make an assumption about the type of distribution the data follows. The chi-square test is useful in deciding whether a particular probability distribution, such as the binomial, Poisson or normal distribution, is the appropriate one. This allows us to validate our assumption about the probability distribution of the sample data. The chi-square test procedure used for this purpose is called the goodness-of-fit test. The test also indicates whether or not the frequency distribution for the sampled population has a particular shape, such as the normal curve (a symmetric distribution). This is done by testing whether there is a significant difference between an observed frequency distribution and an assumed theoretical frequency distribution. Thus, by applying the chi-square test for goodness of fit, we can determine whether the observed data constitute a sample drawn from a population with the assumed theoretical distribution. In this section we use the chi-square test for goodness-of-fit to make inferences about the type of distribution.
The logic inherent in the chi-square test allows us to compare the observed frequencies (Oi) with the expected frequencies (Ei). The expected frequencies are calculated on the basis of our theoretical assumptions about the population distribution. Let us explain the procedure of testing by going through some illustrations.

Illustration 3
A salesman has 3 products to sell and there is a 40% chance of selling each product when he meets a customer. The following is the frequency distribution of sales.

No. of products sold per sale:         0     1     2     3
Frequency of the number of sales:     10    40    60    20

At the 0.05 level of significance, do these sales of products follow a binomial distribution?

Solution: In this illustration, the sales process is approximated by a binomial distribution with P = 0.40 (a 40% chance of selling each product).
H0: The sales of the three products have a binomial distribution with P = 0.40.
H1: The sales of the three products do not have a binomial distribution with P = 0.40.
Before we proceed further we must calculate the expected frequencies, in order to determine whether the discrepancies between the observed frequencies and the expected frequencies (based on the binomial distribution) should be ascribed to chance. We begin by determining the binomial probability for each situation of sales (0, 1, 2, 3 products sold per sale). For three products, we find the probabilities of success by consulting the binomial probabilities in Appendix Table-1. By looking at the column for n = 3 and p = 0.40 we obtain the following binomial probabilities of the sales.

No. of products      Binomial probabilities
sold per sale (r)    of the sales
0                    0.216
1                    0.432
2                    0.288
3                    0.064
                     1.000

We now calculate the expected frequency of sales for each situation. The salesman visited 130 customers. We multiply each probability by 130 (the number of customers visited) to arrive at the respective expected frequency. For example, 0.216 × 130 = 28.08.
The following table shows the observed frequencies and the expected frequencies.

No. of products   Observed    Binomial      Number of    Expected
sold per sale     frequency   probability   customers    frequency
                                            visited
(1)               (2)         (3)           (4)          (5) = (3) × (4)
0                  10         0.216         130           28.08
1                  40         0.432         130           56.16
2                  60         0.288         130           37.44
3                  20         0.064         130            8.32
Total             130                                    130.00

Now we use the chi-square test to examine the significance of differences between observed frequencies and expected frequencies. The formula for calculating chi-square is

χ² = Σ (Oi – Ei)² / Ei

The following table gives the calculation of chi-square.

Observed          Expected          (Oi–Ei)    (Oi–Ei)²    (Oi–Ei)²/Ei
frequencies (Oi)  frequencies (Ei)
 10                28.08            –18.08     326.89       11.64
 40                56.16            –16.16     261.15        4.65
 60                37.44             22.56     508.95       13.59
 20                 8.32             11.68     136.42       16.40
130               130.00                                χ² = 46.28

In order to draw inferences about this calculated value of χ², we are required to compare it with the table value of χ². For this we need: (i) the degrees of freedom (n–1), and (ii) the level of significance. In the problem we are given that the level of significance is 0.05. The number of expected situations is 4, that is (0, 1, 2, 3 products sold per sale), so n = 4. Therefore, the degrees of freedom will be 3 (i.e., n–1 = 4–1 = 3). The table value from Appendix Table-4 is 7.815 at 3 degrees of freedom and the 0.05 level of significance. Since the calculated value (χ² = 46.28) is greater than the table value (7.815), we reject the null hypothesis and accept the alternative hypothesis. We conclude that the observed frequencies do not follow the binomial distribution.
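The binomial goodness-of-fit computation in Illustration 3 can be sketched as follows (standard library only; 7.815 is the critical value quoted in the text).

```python
# Goodness-of-fit check from Illustration 3: are the salesman's sales
# binomial with n = 3, p = 0.40? Standard library only; 7.815 is the
# df = 3, alpha = 0.05 critical value quoted in the text.
from math import comb

n, p = 3, 0.40
observed = [10, 40, 60, 20]    # sales of 0, 1, 2, 3 products
total = sum(observed)          # 130 customers visited

# Binomial probabilities P(r) = C(n, r) p^r (1-p)^(n-r)
probs = [comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)]
expected = [pr * total for pr in probs]    # e.g. 0.216 * 130 = 28.08

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 2))    # 46.28
print(chi_sq > 7.815)      # True -> reject H0
```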
Let us take another illustration which relates to the normal distribution.

Illustration 4
In order to plan how much cash to keep on hand, a bank manager is interested in seeing whether customers' deposits are normally distributed with mean Rs. 15000 and standard deviation Rs. 6000. The following information is available with the bank.

Deposit (Rs)              Less than 10000    10000-20000    More than 20000
Number of depositors            30                80               40

Calculate the χ² statistic and test whether the data follow a normal distribution with mean Rs. 15000 and standard deviation Rs. 6000 (take the level of significance (α) as 0.10).

Solution: In this illustration, the assumption made by the bank manager is
that the pattern of deposits follows a normal distribution with mean Rs.15000
and standard deviation Rs.6000. Therefore, in testing the goodness-of-fit you
may like to state the following hypothesis.
H0: The sample data of deposits is from a population having normal distribution
with mean Rs.15000 and standard deviation Rs.6000.
H1: The sample data of deposits is not from a population having normal
distribution with mean Rs.15000 and standard deviation Rs.6000.
In order to calculate the χ2 value we must have expected frequencies. The
expected frequencies are determined by multiplying the proportion of population
values within each class interval by the total sample size of observed
frequencies. Since we have assumed a normal distribution for our population,
the expected frequencies are calculated by multiplying the area under the
respective normal curve and the total sample size (n=150).
For example, to obtain the area for deposits less than Rs. 10000, we calculate the normal deviate as follows:

z = (x – µ)/σ = (10000 – 15000)/6000 = –5000/6000 = –0.83

From Appendix Table-3 (given at the end of this unit), this value (–0.83) corresponds to a lower tail area of 0.5000 – 0.2967 = 0.2033. Multiplying 0.2033 by the sample size (150), we obtain the expected frequency 0.2033 × 150 = 30.50 depositors.
The calculations of the remaining expected frequencies are shown in the following table.

Upper limit of the    Normal deviate        Area left   Area of         Expected frequency
deposit range (x)     z = (x–15000)/6000    to x        deposit range   (Depositors)
(1)                   (2)                   (3)         (4)             (5) = (4) × 150
10000                 –0.83                 0.2033      0.2033           30.50
20000                  0.83                 0.7967      0.5934           89.01
>20000                 ∞                    1.0000      0.2033           30.50
                                                        1.0000          150.00

We should note that from Appendix Table-3, for 0.83 the area left of x is 0.5000 + 0.2967 = 0.7967, and for ∞ the area left of x is 0.5000 + 0.5000 = 1.0000. Similarly, the area of the deposit range for normal deviate 0.83 is 0.7967 – 0.2033 = 0.5934, and for ∞ it is 1.0000 – 0.7967 = 0.2033.
Once the expected frequencies are calculated, the procedure for calculating the χ² statistic is the same as in Illustration 3.

χ² = Σ (Oi – Ei)² / Ei
The following table gives the calculation of chi-square.

Observed          Expected          (Oi–Ei)    (Oi–Ei)²    (Oi–Ei)²/Ei
frequencies (Oi)  frequencies (Ei)
 30                30.50            –0.50       0.2450      0.0080
 80                89.01            –9.01      81.1801      0.9120
 40                30.50             9.51      90.3450      2.9626
150               150.00                                χ² = 3.8827

Since n = 3, the number of degrees of freedom will be n–1 = 3–1 = 2, and we are given 0.10 as the level of significance. From Appendix Table-4 the table value of χ² for df = 2 and α = 0.10 is 4.605. Since the calculated value of χ² (3.8827) is less than the table value, we accept the null hypothesis and conclude that the data are well described by the normal distribution with mean Rs. 15000 and standard deviation Rs. 6000.
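Illustration 4 can be reproduced with the exact normal CDF instead of a rounded z-table. A minimal sketch, standard library only; because the table rounds z to –0.83 while the exact deviate is –0.8333, the exact statistic comes out near 4.0 rather than 3.8827, but the conclusion against the critical value 4.605 is unchanged.

```python
# Goodness-of-fit check from Illustration 4: are deposits Normal with
# mean 15000 and sd 6000? Standard library only; 4.605 is the df = 2,
# alpha = 0.10 critical value quoted in the text. The exact CDF gives
# a slightly larger chi-square than the text's z-table figures.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma, n = 15000, 6000, 150
observed = [30, 80, 40]    # <10000, 10000-20000, >20000

p_low = normal_cdf(10000, mu, sigma)
p_mid = normal_cdf(20000, mu, sigma) - p_low
p_high = 1 - p_low - p_mid
expected = [p * n for p in (p_low, p_mid, p_high)]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 1))    # 4.0
print(chi_sq < 4.605)      # True -> accept H0
```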

Let us consider an illustration which relates to Poisson Distribution.

Illustration 5
A small car company wishes to determine the frequency distribution of warranty-financed repairs per car for its new model car. On the basis of past experience the company believes that the pattern of repairs follows a Poisson distribution with mean number of repairs (λ) of 3. A sample of 400 observations is provided below:

No. of repairs per car     0     1     2     3     4     5 or more
No. of cars               20    57    98    85    78    62

i) Construct a table of expected frequencies using Poisson probabilities with λ = 3.
ii) Calculate the χ² statistic and give your conclusions about the null hypothesis (take the level of significance as 0.05).

Solution: For the above problem we formulate the following hypothesis.
H0: The number of repairs per car during warranty period follows a Poisson
probability distribution.
H1: The number of repairs per car during warranty period does not follow a Poisson
probability distribution.
As usual, the expected frequencies are determined by multiplying the probability values (in this case Poisson probabilities) by the total sample size of observed frequencies. Appendix Table-2 provides the Poisson probability values. For λ = 3.0 we can directly read the probability values for the different x values. For example, for λ = 3.0 and x = 0 the Poisson probability is 0.0498; for λ = 3.0 and x = 1 it is 0.1494; and so on.
The following table gives the calculated expected frequencies.

No. of repairs     Poisson probability     Expected frequency
per car (x)                                Ei = (2) × 400
(1)                (2)                     (3)
0                  0.0498                   19.92
1                  0.1494                   59.76
2                  0.2240                   89.60
3                  0.2240                   89.60
4                  0.1680                   67.20
5 or more          0.1848                   73.92
Total              1.0000                  400.00

It is to be noted that from Appendix Table-2 for λ = 3.0 we have taken the Poisson probability values directly for x = 0, 1, 2, 3 and 4. For "x = 5 or more" we added the remaining probability values (for x = 5 to x = 12) so that the sum of all the probabilities from x = 0 to "x = 5 or more" equals 1.0000.


As usual, we use the following formula for calculating the chi-square (χ²) value.

χ² = Σ (Oi – Ei)² / Ei

The following table gives the calculated χ² value.

Observed          Expected          (Oi–Ei)    (Oi–Ei)²     (Oi–Ei)²/Ei
frequencies (Oi)  frequencies (Ei)
 20                19.92             0.08        0.0064       0.0003
 57                59.76            –2.76        7.6176       0.1275
 98                89.60             8.40       70.5600       0.7875
 85                89.60            –4.60       21.1600       0.2362
 78                67.20            10.80      116.6400       1.7357
 62                73.92           –11.92      142.0864       1.9222
400               400.00                                 χ² = 4.8094

Since n = 6, the number of degrees of freedom will be n–1 = 6–1 = 5, and we are given α = 0.05 as the level of significance. From Appendix Table-4, the table value of χ² for 5 degrees of freedom and α = 0.05 is 11.071. Since the calculated value of χ² (4.8094) is less than the table value of χ² (11.071), we accept the null hypothesis (H0) and conclude that the data follow a Poisson probability distribution with λ = 3.0.
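The Poisson fit in Illustration 5 can be sketched the same way (standard library only). Exact Poisson probabilities give χ² ≈ 4.79; the 4.8094 above differs slightly because the text uses four-decimal table probabilities.

```python
# Goodness-of-fit check from Illustration 5: do warranty repairs follow
# a Poisson distribution with lambda = 3? Standard library only; 11.071
# is the df = 5, alpha = 0.05 critical value quoted in the text.
from math import exp, factorial

lam, n = 3.0, 400
observed = [20, 57, 98, 85, 78, 62]   # repairs: 0, 1, 2, 3, 4, 5 or more

# P(X = x) = e^(-lambda) lambda^x / x!  for x = 0..4,
# with the remaining mass pooled into "5 or more".
probs = [exp(-lam) * lam**x / factorial(x) for x in range(5)]
probs.append(1 - sum(probs))

expected = [p * n for p in probs]
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 2))    # 4.79
print(chi_sq < 11.071)     # True -> accept H0
```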

Illustration 6
In order to know the brand preference between two washing detergents, a sample of 1000 consumers was surveyed. 56% of the consumers preferred Brand X and 44% preferred Brand Y. Do these data conform to the idea that consumers have no special preference for either brand? Take the significance level as 0.05.

Solution: In this illustration, we assume that brand preference follows a uniform distribution. That is, half of the consumers prefer Brand X and the other half prefer Brand Y. Therefore, we have the following hypotheses.
H0: The brand name has no special significance for consumer preference.
H1: The brand name has special significance for consumer preference.
Since the consumer preference data are given in proportions, we convert them into frequencies. The number of consumers who preferred Brand X is 0.56 × 1000 = 560 and Brand Y is 0.44 × 1000 = 440. The corresponding expected frequency is ½ × 1000 = 500 for each brand.
The following table gives the calculated χ² value.

Observed          Expected          (Oi–Ei)    (Oi–Ei)²    (Oi–Ei)²/Ei
frequencies (Oi)  frequencies (Ei)
 560               500               60         3600          7.2
 440               500              –60         3600          7.2
1000              1000                                   χ² = 14.4

The table value (from Appendix Table-4) at the 5% significance level and n–1 = 2–1 = 1 degree of freedom is 3.841. Since the calculated χ² value of 14.4 is greater than the table value, we reject the null hypothesis and conclude that the brand names have special significance for consumer preference.
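Illustration 6 reduces to a two-cell goodness-of-fit test against a 50/50 split, which takes only a few lines (standard library only; 3.841 is the critical value quoted in the text):

```python
# Illustration 6 as a goodness-of-fit test against a uniform (50/50)
# preference split. Standard library only; 3.841 is the df = 1,
# alpha = 0.05 critical value quoted in the text.
n = 1000
observed = [560, 440]        # 56% prefer X, 44% prefer Y
expected = [n / 2, n / 2]    # 500 each under H0

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)            # 14.4
print(chi_sq > 3.841)    # True -> reject H0
```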

17.5 CONDITIONS FOR APPLYING CHI-SQUARE
TEST
To validate the chi-square test, the data set available needs to fulfil certain conditions. Sometimes these conditions are also called precautions about using the chi-square test. Therefore, whenever you use the chi-square test the following conditions must be satisfied:
a) Random Sample: In the chi-square test the data set used is assumed to be a random sample that represents the population. As with all significance tests, if you have random sample data that represent the population, then any differences between the table values and the calculated values are real and therefore significant. On the other hand, if you have non-random sample data, significance cannot be established, though the tests are nonetheless sometimes utilised as crude "rules of thumb" anyway. For example, we reject the null hypothesis if the difference between observed and expected frequencies is too large. But if the chi-square value is zero, we should be careful in interpreting that absolutely no difference exists between observed and expected frequencies; we should then verify whether the sample data collected actually represent the population.
b) Large Sample Size: To use the chi-square test you must have a sample size large enough to justify approximating the sampling distribution of the test statistic by the theoretical chi-square distribution. Applying the chi-square test to small samples exposes the researcher to an unacceptable rate of errors. However, there is no universally accepted cutoff; many researchers set the minimum sample size at 50. Remember that the chi-square test statistic must be calculated on actual count data (nominal, ordinal or interval data) and not on substituted percentages, which would have the effect of projecting the sample size as 100.
c) Adequate Cell Sizes: You have seen above that small samples lead to errors. When the expected cell frequencies are too small, the value of chi-square will be overestimated. This in turn will result in too many rejections of the null hypothesis. To avoid making incorrect inferences from chi-square tests, we follow a general rule that the expected frequency in any cell should be a minimum of 5.
d) Independence: The sample observations must be independent.
e) Categorical data: Observations must be grouped in categories.
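The adequate-cell-size rule in condition (c) is easy to check before running the test. A minimal sketch, standard library only; the helper names are illustrative, not from the text:

```python
# Rule-of-thumb check from Section 17.5(c): every expected cell
# frequency in a contingency table should be at least 5 before the
# chi-square test is applied. Standard library only.

def expected_frequencies(table):
    """Expected counts under independence for a contingency table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return [[r * c / n for c in col_tot] for r in row_tot]

def cells_adequate(table, minimum=5):
    """True if every expected cell frequency meets the minimum."""
    return all(e >= minimum
               for row in expected_frequencies(table)
               for e in row)

# The 2 x 2 table from Illustration 2 passes the check.
print(cells_adequate([[30, 10], [20, 40]]))    # True
```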


17.6 CELLS POOLING
In the previous section we have seen that the expected cell frequency should be at least 5. When a contingency table contains one or more cells with an expected frequency of less than 5, this requirement may be met by combining two rows or columns before calculating χ². We must combine these cells in order to get an expected frequency of 5 or more in each cell. This practice is also known as grouping the frequencies together. But in doing this, we reduce the number of categories of data and will gain less information from the contingency table. In addition, we also lose one or more degrees of freedom due to pooling. With this practice, it should be noted that the number of degrees of freedom is determined by the number of classes after the regrouping. In the special case of a 2 × 2 contingency table, the number of degrees of freedom is 1. If in any cell the frequency is less than 5, we may be tempted to apply the pooling method, but this results in 0 degrees of freedom (due to the loss of 1 df), which is meaningless. When the assumption of a minimum cell frequency of 5 is not maintained in the case of a 2 × 2 contingency table, we apply Yates' correction. You will learn about Yates' correction in section 17.7. Let us take an illustration to understand the cell pooling method.

Illustration 7
A company marketing manager wishes to determine whether there are any
significant differences between regions in terms of a new product acceptance.
The following is the data obtained from interviewing a sample of 190
consumers.
Degree of                          Region
acceptance        South    North    East    West    Total
Strong              30       25      20      30      105
Moderate            15       15      20      20       70
Poor                 5       10       0       0       15
Total               50       50      40      50      190

Calculate the chi-square statistic and test the independence of the two attributes at
the 0.05 level of significance.

Solution: In this illustration, the null and alternate hypotheses are:
H0: The product acceptance is independent of the region of the consumer.
H1: The product acceptance is not independent of the region of the consumer.
We are given the observed frequencies in the problem. The following table
gives the calculated expected frequencies.

146

Degree of                          Region
acceptance        South    North    East     West     Total
Strong            27.63    27.63    22.11    27.63     105
Moderate          18.42    18.42    14.74    18.42      70
Poor               3.95     3.95     3.16     3.95      15
Total             50.00    50.00    40.00    50.00     190

Since the expected frequencies (cell values) in the third row are less than 5 we
pool the third row with the second row of both observed frequencies and
expected frequencies. The revised observed frequency and expected frequency
tables are given below.
Degree of                          Region
acceptance        South    North    East    West    Total
Strong              30       25      20      30      105
Moderate and
poor                20       25      20      20       85
Total               50       50      40      50      190

Degree of                          Region
acceptance        South    North    East     West     Total
Strong            27.63    27.63    22.11    27.63     105
Moderate and
poor              22.37    22.37    17.89    22.37      85
Total                50       50       40       50     190

Now we rearrange the data on observed and expected frequencies and
calculate the χ2 value. The following table gives the calculated χ2 value.
(Row, Column)   Observed           Expected           (Oi–Ei)   (Oi–Ei)2   (Oi–Ei)2/Ei
                frequencies (Oi)   frequencies (Ei)
(1,1)               30                27.63              2.37     5.6169      0.2033
(2,1)               20                22.37             –2.37     5.6169      0.2511
(1,2)               25                27.63             –2.63     6.9169      0.2503
(2,2)               25                22.37              2.63     6.9169      0.3092
(1,3)               20                22.11             –2.11     4.4521      0.2014
(2,3)               20                17.89              2.11     4.4521      0.2489
(1,4)               30                27.63              2.37     5.6169      0.2033
(2,4)               20                22.37             –2.37     5.6169      0.2511
                                                                        χ2 = 1.9185

Since we have a 2 × 4 contingency table, the degrees of freedom will be (r–1)
× (c–1) = (2–1) × (4–1) = 1 × 3 = 3. At 3 degrees of freedom and the 0.05
significance level the table value (from Appendix Table-4) is 7.815. Since the
calculated χ2 value (1.9185) is less than the table value of χ2 (7.815), we accept
the null hypothesis and conclude that the product acceptance is independent of
the region of the consumer.
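The pooled-table calculation above can be verified programmatically. The following is a minimal sketch in Python; the small difference from the unit's 1.9185 arises because the unit rounds each expected frequency to two decimals:

```python
# Sketch: chi-square test for independence on the pooled 2 x 4 table of
# Illustration 7 (Strong vs. Moderate-and-poor, by region).
observed = [
    [30, 25, 20, 30],   # Strong
    [20, 25, 20, 20],   # Moderate and poor (pooled)
]

n = sum(sum(row) for row in observed)
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

chi_sq = 0.0
for i, rt in enumerate(row_totals):
    for j, ct in enumerate(col_totals):
        e = rt * ct / n                      # expected frequency of cell (i, j)
        chi_sq += (observed[i][j] - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi_sq, 2), df)   # about 1.92, with 3 degrees of freedom
```

Since the statistic is well below the 0.05 critical value of 7.815 for 3 df, H0 is retained, matching the conclusion above.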

Illustration 8
The following table gives the number of typing errors per page in a 40-page
report. Test whether the typing errors per page have a Poisson distribution with
a mean (λ) of 3.0 errors.

No. of typing
errors per page    0   1   2   3   4   5   6   7   8   9   10 or more
No. of pages       9   6   8   4   3   2   1   1   0   1       5

i) Construct a table of expected frequencies using Poisson probabilities with λ = 3.
ii) Calculate the χ2 statistic and give your conclusions about the null hypothesis (take
the level of significance as 0.01).

Solution: For the above problem we formulate the following hypotheses.
H0: The number of typing errors per page follows a Poisson probability distribution.
H1: The number of typing errors per page does not follow a Poisson probability
distribution.
As usual, the expected frequencies are determined by multiplying the probability
values (in this case Poisson probabilities) by the total sample size of the observed
frequencies. Table 17.3 provides the Poisson probability values. For λ = 3.0 and
for different x values we can directly read the probability values. For example,
for λ = 3.0 and x = 0 the Poisson probability value is 0.0498. The following
table gives the calculated expected frequencies.
No. of typing         Poisson        Expected frequency
errors per page (x)   probability    Ei = (2) × 40
(1)                   (2)            (3)
0                     0.0498          1.99 }
1                     0.1494          5.98 }  7.97
2                     0.2240          8.96
3                     0.2240          8.96
4                     0.1680          6.72 }
5                     0.1008          4.03 }
6                     0.0504          2.02 }
7                     0.0216          0.86 } 14.11
8                     0.0081          0.32 }
9                     0.0027          0.11 }
10 or more            0.0012          0.05 }
Total                 1.0000            40

Since the expected frequency of the first row is less than 5, we pool the first
and second rows of observed and expected frequencies. Similarly, the expected
frequencies of the last 6 rows (with 5, 6, 7, 8, 9, and 10 or more errors) are less
than 5. Therefore we pool these rows with the row for 4 typing errors, forming
the class "4 or more" errors.
As usual we use the following formula for calculating the chi-square (χ2) value.

χ2 = Σ (Oi − Ei)2 / Ei

The following table gives the calculated χ2 value after pooling cells.

No. of typing     Observed           Expected           (Oi–Ei)   (Oi–Ei)2   (Oi–Ei)2/Ei
errors per        frequencies (Oi)   frequencies (Ei)
page (x)
1 or less             14                 7.97             6.032     36.39       4.5664
2                      6                 8.96            –2.960      8.76       0.9779
3                      8                 8.96            –0.960      0.92       0.1029
4 or more             12                14.11            –2.112      4.46       0.3161
                                                                           χ2 = 5.9632

Since n = 4 (the number of classes after pooling), the number of degrees of
freedom will be n–1 = 4–1 = 3, and we are given α = 0.01 as the level of
significance. From Appendix Table-4 the table value of χ2 for 3 degrees of
freedom and α = 0.01 is 11.345. Since the calculated value of χ2 = 5.9632 is
less than the table value of χ2 = 11.345, we accept the null hypothesis (H0)
and conclude that the typing errors follow a Poisson probability distribution
with λ = 3.0.
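The whole goodness-of-fit computation can be sketched in a few lines, generating the Poisson expected frequencies directly instead of reading them from a probability table. The pooled observed counts below are those of the final table above; the result differs marginally from 5.9632 because the unit rounds intermediate values:

```python
import math

# Sketch: Poisson goodness-of-fit for Illustration 8. Observed counts are the
# pooled classes from the unit's final table: "1 or less", 2, 3, "4 or more"
# typing errors per page, over 40 pages.
lam, n_pages = 3.0, 40
pooled_obs = [14, 6, 8, 12]

p = [math.exp(-lam) * lam ** x / math.factorial(x) for x in range(4)]
pooled_exp = [(p[0] + p[1]) * n_pages,      # X <= 1
              p[2] * n_pages,               # X = 2
              p[3] * n_pages,               # X = 3
              (1 - sum(p)) * n_pages]       # X >= 4

chi_sq = sum((o - e) ** 2 / e for o, e in zip(pooled_obs, pooled_exp))
print([round(e, 2) for e in pooled_exp])    # [7.97, 8.96, 8.96, 14.11]
print(round(chi_sq, 2))                     # about 5.97, below 11.345
```

Note that every pooled expected frequency is now at least 5, so the test is valid.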

17.7 YATES CORRECTION
Yates correction is also called Yates correction for continuity. In a 2 × 2
contingency table the degrees of freedom are 1. If any one of the expected cell
frequencies is less than 5, then use of the pooling method (explained in section 17.6)
may result in 0 degrees of freedom, due to the loss of 1 degree of freedom in
pooling, which is meaningless. Moreover, it is not valid to perform the chi-square
test if any one or more of the expected frequencies is less than 5 (as
explained in section 17.5). Therefore, if any one or more of the expected
frequencies in a 2 × 2 contingency table is less than 5, the Yates correction is
applied. This was proposed by F. Yates, an English statistician.
Suppose for a 2 × 2 contingency table, the four cell values a, b, c and d are
arranged in the following order.
a   b
c   d

The Yates formula for the corrected chi-square is given by

χ2 = n ( |ad − bc| − n/2 )2 / [(a + b)(c + d)(a + c)(b + d)]

Illustration 9
Suppose we have the following data on the consumer preference of a new
product collected from the people living in north and south India.
                                   South India   North India   Row total
Number of consumers who
prefer present product                  4             51           55
Number of consumers who
prefer new product                     14             38           52
Column total                           18             89          107


Do the data suggest that the new product is preferred by the people
independent of their region? Use α = 0.05.

Solution: Suppose we symbolise the true proportions of people who prefer
the new product as:
PS = proportion of south Indians who prefer the new product
PN = proportion of north Indians who prefer the new product
We state the null hypothesis (H0) and alternative hypothesis (H1) as:
H0: PS = PN (the proportion of people who prefer the new product is the same in
south and north India).
H1: PS ≠ PN (the proportion of people who prefer the new product is not the same
in south and north India).
In this illustration, (i) the sample size (n) = 107, (ii) the cell values are: a = 4,
b = 51, c = 14, d = 38, (iii) the corresponding row totals are: (a + b) = 55 and
(c + d) = 52, and the column totals are (a + c) = 18 and (b + d) = 89.
Since one of the cell frequencies is less than 5 (a = 4), we apply Yates
correction to the chi-square test.
χ2 = n ( |ad − bc| − n/2 )2 / [(a + b)(c + d)(a + c)(b + d)]

   = 107 ( |4 × 38 − 51 × 14| − 107/2 )2 / (55 × 52 × 18 × 89)

   = 107 [ |152 − 714| − 53.5 ]2 / 4581720

   = 107 [562 − 53.5]2 / 4581720

   = 107 [508.5]2 / 4581720

   = 107 × 258572 / 4581720

   = 27667204 / 4581720

∴ χ2 = 6.0386
The table value for degrees of freedom (2–1) × (2–1) = 1 and significance level
α = 0.05 is 3.841. Since the calculated value of chi-square, 6.0386, is
greater than the table value, we reject H0, accept H1, and conclude that
the preference for the new product is not independent of the geographical
region.
It may be observed that when N is large, Yates correction will not make much
difference in the chi-square value. However, if N is small, the Yates
correction may overstate the probability, making the test overly conservative.
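The Yates-corrected statistic of Illustration 9 can be checked directly from the four cell counts. The following is a minimal sketch in Python:

```python
# Sketch: Yates-corrected chi-square for the 2 x 2 table of Illustration 9.
a, b, c, d = 4, 51, 14, 38     # cell counts as laid out in section 17.7
n = a + b + c + d              # sample size, 107

chi_sq = (n * (abs(a * d - b * c) - n / 2) ** 2
          / ((a + b) * (c + d) * (a + c) * (b + d)))
print(round(chi_sq, 4))        # 6.0386, above the 1-df critical value 3.841
```

Dropping the `- n / 2` continuity term gives the uncorrected statistic, which would be noticeably larger here because n is small.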

17.8 LIMITATIONS OF CHI-SQUARE TEST

In order to prevent the misapplication of the χ2 test, one has to keep the
following limitations of the test in mind:
a) As explained in section 17.5 (conditions for applying chi-square test), the chi-square
test is highly sensitive to the sample size. As sample size increases, absolute
differences become a smaller and smaller proportion of the expected value. This means
that a reasonably strong association may not come up as significant if the sample
size is small. Conversely, in a large sample, we may find statistical significance
when the findings are small and unimportant. That is, the findings may be statistically
significant without being substantively significant.

b) The chi-square test is also sensitive to small frequencies in the cells of the contingency
table. Generally, when the expected frequency in a cell of a table is less than 5,
chi-square can lead to erroneous conclusions, as explained in section 17.5. The
rule of thumb here is that if either (i) an expected value in a cell in a 2 × 2 contingency
table is less than 5, or (ii) the expected values of more than 20% of the cells in a
contingency table larger than 2 × 2 are less than 5, then the chi-square test should not
be applied. If a chi-square test is nevertheless applied, then either Yates
correction or cell pooling should also be applied, as appropriate.
c) No directional hypothesis is assumed in the chi-square test. Chi-square tests the
hypothesis that two attributes/variables are related only by chance. That is, if a
significant relationship is found, this is not equivalent to establishing the researcher's
hypothesis that attribute A causes attribute B or attribute B causes attribute A.

Self Assessment Exercise B
1) While calculating the expected frequencies of a chi-square distribution it was found
that some of the cells of expected frequencies have value below 5. Therefore,
some of the cells are pooled. The following statements tell you the size of the
contingency table before pooling and the rows/columns pooled. Determine the
number of degrees of freedom.
a) 5 × 4 contingency table. First two and last two rows are pooled.
b) 4 × 6 contingency table. First two and last two columns are pooled.
c) 6 × 3 contingency table. First two rows are pooled. 4th, 5th, and 6th rows
are pooled.
..................................................................................................................
..................................................................................................................
..................................................................................................................
2) What is the table value of chi-square for goodness-of-fit if there are:
a) 8 degrees of freedom and the significance level is 1%?
b) 13 degrees of freedom and the significance level is 5%?
c) 16 degrees of freedom and the significance level is 10%?
d) 6 degrees of freedom and the significance level is 20%?
..................................................................................................................
3) a) The following data is an observed frequency distribution. Assuming that
the data follow a Poisson distribution with λ = 2.5:
i) calculate the Poisson probabilities and expected values, ii) calculate the
chi-square value, and iii) at the 0.05 level of significance can we
conclude that the data follow a Poisson distribution with λ = 2.5?
No. of telephone
calls per minute             0    1    2    3    4    5 or more
Frequency of occurrences     6   30   41   52   12       9


..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................
..................................................................................................................

17.9 LET US SUM UP

There are several applications of the chi-square distribution, some of which we
have studied in this unit. These are (i) to test the goodness-of-fit, and (ii) to
test the independence of attributes. The chi-square distribution is known by its
only parameter – the number of degrees of freedom. Like the Student's t distribution,
there is a separate chi-square distribution for each number of degrees of
freedom.
The chi-square test for testing the goodness-of-fit establishes whether the
sample data support the assumption that a particular distribution applies to the
parent population. It should be noted that many statistical procedures are based on
assumptions such as a normal distribution of the population. A chi-square
procedure allows for testing the null hypothesis that a particular distribution
applies. We also use the chi-square test to test whether the classification
criteria are independent or not.
When performing a chi-square test using contingency tables, it is assumed that all
expected cell frequencies are a minimum of 5. If this assumption is not met we may use
the pooling method, but then there is a loss of information when we use this
method. In a 2 × 2 contingency table, if one or more expected cell frequencies are less
than 5, we should apply Yates correction for computing the chi-square value.
In a chi-square test for goodness-of-fit, the degrees of freedom are the number of
categories – 1 (n–1). In a chi-square test for independence of attributes, the
degrees of freedom are (number of rows – 1) × (number of columns – 1). That is,
(r–1) × (c–1).

17.10 KEY WORDS
Adequate Cell Sizes: To avoid making incorrect inferences from chi-square
tests we follow a general rule that the expected frequency in any cell should be
a minimum of 5.
Cells Pooling: When a contingency table contains one or more cells with
expected frequency less than 5, we combine two rows or columns before
calculating χ2. We combine these cells in order to get an expected frequency of
5 or more in each cell.

Chi-Square Distribution: A family of probability distributions, differentiated by
their degrees of freedom, used to test a number of different hypotheses about
variances, proportions and distributional goodness of fit.
Expected Frequencies: The hypothetical data in the cells are called expected
frequencies.
Goodness of Fit: The chi-square test procedure used for the validation of our
assumption about the probability distribution is called goodness of fit.
Observed Frequencies: The actual cell frequencies are called observed
frequencies.
Yates Correction: If any one or more of the expected frequencies in a 2 × 2
contingency table is less than 5, the Yates correction is applied.

17.11 ANSWERS TO SELF ASSESSMENT EXERCISES
A) 1. a) i) H0: The preference for the type of car among people is independent
            of their sex.
        ii) degrees of freedom: 6
       iii) χ2 (table value): 12.592
        iv) Conclusion: Accept H0.
   1. b) i) H0: The income distribution and preference for type of house are
            independently distributed.
        ii) degrees of freedom: 9
       iii) χ2 (table value): 21.666
        iv) Conclusion: Reject H0.
   1. c) i) H0: The attitude towards going to a movie or for shopping is
            independent of the sex.
        ii) degrees of freedom: 1
       iii) χ2 (table value): 6.635
        iv) Conclusion: Reject H0.
   1. d) i) H0: The voters' educational level and their political affiliation are
            independent of each other.
        ii) degrees of freedom: 9
       iii) χ2 (table value): 14.684
        iv) Conclusion: Accept H0.
2. a) 25,  b) 6,  c) 8,  d) 21.


3. a)

(Row,      Observed         Expected         (Oi–Ei)   (Oi–Ei)2   (Oi–Ei)2/Ei
Column)    frequency (Oi)   frequency (Ei)
(1,1)          50               52.5           –2.5        6.25        0.1190
(1,2)          55               52.5            2.5        6.25        0.1190
(1,3)          45               52.5           –7.5       56.25        1.0714
(1,4)          60               52.5            7.5       56.25        1.0714
(2,1)          50               47.5            2.5        6.25        0.1316
(2,2)          45               47.5            7.5       56.25        1.1842
(2,3)          55               47.5           –2.5        6.25        0.1316
(2,4)          40               47.5           12.5      156.25        3.2895
Total         400              400                                χ2 = 7.1178

3. b. H0: The preference for the brand is distributed independent of the consumers’
education level.
3. c. The table value of χ2 at 3 d.f. and α = 0.05 is 7.815. Since the calculated value (7.1178)
is less than the table value of χ2 (7.815), we accept H0.
B) 1. a) 6,  b) 9,  c) 4
2. a) 20.090,  b) 22.362,  c) 23.542,  d) 8.558
3. i) Poisson probabilities and expected values

No. of telephone        Poisson        Expected frequency
calls per minute (x)    probability    Ei = (2) × 150
(1)                     (2)            (3)
0                       0.0498           7.47
1                       0.1494          22.41
2                       0.2240          33.60
3                       0.2240          33.60
4                       0.1680          25.20
5 or more               0.1848          27.72

3. ii) chi-square value

No. of telephone   Observed         Expected         (Oi–Ei)   (Oi–Ei)2   (Oi–Ei)2/Ei
calls per minute   frequency (Oi)   frequency (Ei)
0                      6                7.47           –1.47       2.16       0.2893
1                     30               22.41            7.59      57.61       2.5706
2                     41               33.60            7.40      54.76       1.6298
3                     52               33.60           18.40     338.56      10.0762
4                     12               25.20          –13.20     174.24       6.9143
5 or more              9               27.72          –18.72     350.44      12.6421
Total                150              150                               χ2 = 34.1222

3.iii) At 0.05 significance level and 4 degrees of freedom the table value is 9.488.
Since the calculated chi-square value is greater than the table value we reject
the null hypothesis that the frequency of telephone calls follows Poisson
distribution.

17.12 TERMINAL QUESTIONS/EXERCISES
1) Why do we use chi-square test?
2) What do you mean by expected frequencies in (a) chi-square test for testing
independence of attributes, and (b) chi-square test for testing goodness-of-fit?
Briefly explain the procedure you follow in calculating the expected values in
each of the above situations.
3) Explain the conditions for applying chi-square test.
4) What are the limitations for applying chi-square test?
5) When do you use Yates correction?
6) When do you pool rows or columns while applying chi-square test? What are its
limitations?
7) The following data provides information for 30 days on fatal accidents in a metro
city. Do the data suggest that the distribution of fatal accidents follows a Poisson
distribution? Take the level of significance as 0.05.

Fatal accidents per day    0   1    2   3   4 or more
Frequency                  4   8   10   6       2

8) Below is an observed frequency distribution.

Marks       Under 40   40 and     50 and     60 and     75 and     90 and
range                  under 50   under 60   under 75   under 90   above
No. of
students        9         20         65         34         14        8

At the 0.01 significance level, the null hypothesis is that the data is from a normal
distribution with a mean of 10 and a standard deviation of 2. What are your
conclusions?


9) The following table gives the number of telephone calls attended by a credit card
information attendant.

Day                     Sunday  Monday  Tuesday  Wednesday  Thursday  Friday  Saturday
No. of calls attended     45      50      24        36         33       27       42

Test whether the telephone calls are uniformly distributed over the days of the week. Use the 0.10
significance level.
10) The following data gives preference of car makes by type of customer.

Type of                          Car make
customer         Maruti 800   Maruti Zen   Honda   Tata Indica   Total
Single man           350          200        150        50         750
Single woman         100          150        100        80         430
Married man          300          150        120       120         690
Married woman        150          100         80        50         380
Total                900          600        450       300        2250

(a) Test the independence of the two attributes. Use 0.05 level of significance.
(b) Draw your conclusions.
11) A bath soap manufacturer introduced a new brand of soap in 4 colours. The
following data gives information on the consumer preference of the brand.

Consumer          Bath soap colour
rating         Red   Green   Brown   Yellow   Total
Excellent       30     20      20      30      100
Good            20     10      20      30       80
Fair            20     10      10      30       70
Poor            10     45      35      10      100
Total           80     85      85     100      350

From the above data:
a) Compute the χ2 value,
b) State the null hypothesis, and
c) Draw your inferences.
Note: These questions/exercises will help you to understand the unit better.
Try to write answers for them. But do not submit your answers to the
university for assessment. These are for your practice only.

17.13 FURTHER READING

A number of good text books are available on the topics dealt with in this unit. The
following books may be used for more in-depth study.
1) Kothari, C.R. (1985) Research Methodology: Methods and Techniques, Wiley
Eastern, New Delhi.
2) Levin, R.I. and D.S. Rubin (1999) Statistics for Management, Prentice-Hall
of India, New Delhi.
3) Mustafi, C.K. (1981) Statistical Methods in Managerial Decisions,
Macmillan, New Delhi.
4) Chandan, J.S., Statistics for Business and Economics, Vikas Publishing
House Pvt Ltd, New Delhi.
5) Zikmund, William G. (1988) Business Research Methods, The Dryden
Press, New York.


Appendix Table-1  Binomial Probabilities
(Probability of r successes in n trials, tabulated for n = 2 to 15, r = 0 to n, and p = .01 to .95.)
.000
.000
.000
.000
.000
.000
.000
.001
.003
.013
.045
.116
.218
.286
.231
.087

.85

.004
.021
.085
.230
.377
.282
.000
.000
.000
.000
.000
.000
.000
.000
.000
.002
.010
.043
.129
.267
.343
.206

.90

.000
.002
.017
.099
.341
.540
.000
.000
.000
.000
.000
.000
.000
.000
.000
.000
.001
.005
.031
.135
.366
.463

.95

Probability and Hypothesis
Testing

Chi-Square Test
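Each binomial entry follows directly from the formula P(r) = C(n, r) p^r (1 − p)^(n − r). A minimal stdlib sketch (the choice of n = 15, p = .05 here is only illustrative):

```python
from math import comb

def binom_pmf(n: int, r: int, p: float) -> float:
    """Probability of exactly r successes in n trials with success probability p."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Reproduce a Table-1 style entry: n = 15, p = .05, r = 1
print(round(binom_pmf(15, 1, 0.05), 3))  # .366
```

Summing the pmf over r = 0 to n returns 1, a quick consistency check on any row of the table.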

Appendix Table-2 Direct Values for Determining Poisson Probabilities
For a given value of µ, the entry indicates the probability of obtaining a specified value of X.
[Tabulated values of P(X) for µ = 0.1 to 20.0, printed as a multi-column table in the original.]
Appendix Table-3 Areas of a Standard Normal Probability Distribution Between the
Mean and Positive Values of z.

Example (shown shaded in the original figure): for z = 1.58, the area between the mean and z is 0.4429.

 z     .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0  .0000  .0040  .0080  .0120  .0160  .0199  .0239  .0279  .0319  .0359
0.1  .0398  .0438  .0478  .0517  .0557  .0596  .0636  .0675  .0714  .0753
0.2  .0793  .0832  .0871  .0910  .0948  .0987  .1026  .1064  .1103  .1141
0.3  .1179  .1217  .1255  .1293  .1331  .1368  .1406  .1443  .1480  .1517
0.4  .1554  .1591  .1628  .1664  .1700  .1736  .1772  .1808  .1844  .1879
0.5  .1915  .1950  .1985  .2019  .2054  .2088  .2123  .2157  .2190  .2224
0.6  .2257  .2291  .2324  .2357  .2389  .2422  .2454  .2486  .2517  .2549
0.7  .2580  .2611  .2642  .2673  .2704  .2734  .2764  .2794  .2823  .2852
0.8  .2881  .2910  .2939  .2967  .2995  .3023  .3051  .3078  .3106  .3133
0.9  .3159  .3186  .3212  .3238  .3264  .3289  .3315  .3340  .3365  .3389
1.0  .3413  .3438  .3461  .3485  .3508  .3531  .3554  .3577  .3599  .3621
1.1  .3643  .3665  .3686  .3708  .3729  .3749  .3770  .3790  .3810  .3830
1.2  .3849  .3869  .3888  .3907  .3925  .3944  .3962  .3980  .3997  .4015
1.3  .4032  .4049  .4066  .4082  .4099  .4115  .4131  .4147  .4162  .4177
1.4  .4192  .4207  .4222  .4236  .4251  .4265  .4279  .4292  .4306  .4319
1.5  .4332  .4345  .4357  .4370  .4382  .4394  .4406  .4418  .4429  .4441
1.6  .4452  .4463  .4474  .4484  .4495  .4505  .4515  .4525  .4535  .4545
1.7  .4554  .4564  .4573  .4582  .4591  .4599  .4608  .4616  .4625  .4633
1.8  .4641  .4649  .4656  .4664  .4671  .4678  .4686  .4693  .4699  .4706
1.9  .4713  .4719  .4726  .4732  .4738  .4744  .4750  .4756  .4761  .4767
2.0  .4772  .4778  .4783  .4788  .4793  .4798  .4803  .4808  .4812  .4817
2.1  .4821  .4826  .4830  .4834  .4838  .4842  .4846  .4850  .4854  .4857
2.2  .4861  .4864  .4868  .4871  .4875  .4878  .4881  .4884  .4887  .4890
2.3  .4893  .4896  .4898  .4901  .4904  .4906  .4909  .4911  .4913  .4916
2.4  .4918  .4920  .4922  .4925  .4927  .4929  .4931  .4932  .4934  .4936
2.5  .4938  .4940  .4941  .4943  .4945  .4946  .4948  .4949  .4951  .4952
2.6  .4953  .4955  .4956  .4957  .4959  .4960  .4961  .4962  .4963  .4964
2.7  .4965  .4966  .4967  .4968  .4969  .4970  .4971  .4972  .4973  .4974
2.8  .4974  .4975  .4976  .4977  .4977  .4978  .4979  .4979  .4980  .4981
2.9  .4981  .4982  .4982  .4983  .4984  .4984  .4985  .4985  .4986  .4986
3.0  .4987  .4987  .4987  .4988  .4988  .4989  .4989  .4989  .4990  .4990
3.1  .4990  .4991  .4991  .4991  .4992  .4992  .4992  .4992  .4993  .4993
3.2  .4993  .4993  .4994  .4994  .4994  .4994  .4994  .4995  .4995  .4995
3.3  .4995  .4995  .4995  .4996  .4996  .4996  .4996  .4996  .4996  .4997
3.4  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4997  .4998
3.5  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998  .4998
3.6  .4998  .4998  .4998  .4999  .4999  .4999  .4999  .4999  .4999  .4999
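The tabulated areas can be checked against the error function, since the area between the mean and z is Φ(z) − 0.5 = erf(z/√2)/2. A stdlib sketch:

```python
from math import erf, sqrt

def area_mean_to_z(z: float) -> float:
    """Area under the standard normal curve between the mean and z."""
    return erf(z / sqrt(2)) / 2

# z = 1.58 reproduces the worked example from the table heading
print(round(area_mean_to_z(1.58), 4))  # 0.4429
```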


Appendix Table-4 Area in the Right Tail of a Chi-Square (χ²) Distribution

Degrees of                           Area in right tail
freedom    .99      .975     .95      .90      .80      .20      .10      .05      .025     .01
  1      0.00016  0.00098  0.00393  0.0158   0.0642   1.642    2.706    3.841    5.024    6.635
  2      0.0201   0.0506   0.103    0.211    0.446    3.219    4.605    5.991    7.378    9.210
  3      0.115    0.216    0.352    0.584    1.005    4.642    6.251    7.815    9.348   11.345
  4      0.297    0.484    0.711    1.064    1.649    5.989    7.779    9.488   11.143   13.277
  5      0.554    0.831    1.145    1.610    2.343    7.289    9.236   11.071   12.833   15.086
  6      0.872    1.237    1.635    2.204    3.070    8.558   10.645   12.592   14.449   16.812
  7      1.239    1.690    2.167    2.833    3.822    9.803   12.017   14.067   16.013   18.475
  8      1.646    2.180    2.733    3.490    4.594   11.030   13.362   15.507   17.535   20.090
  9      2.088    2.700    3.325    4.168    5.380   12.242   14.684   16.919   19.023   21.666
 10      2.558    3.247    3.940    4.865    6.179   13.442   15.987   18.307   20.483   23.209
 11      3.053    3.816    4.575    5.578    6.989   14.631   17.275   19.675   21.920   24.725
 12      3.571    4.404    5.226    6.304    7.807   15.812   18.549   21.026   23.337   26.217
 13      4.107    5.009    5.892    7.042    8.634   16.985   19.812   22.362   24.736   27.688
 14      4.660    5.629    6.571    7.790    9.467   18.151   21.064   23.685   26.119   29.141
 15      5.229    6.262    7.261    8.547   10.307   19.311   22.307   24.996   27.488   30.578
 16      5.812    6.908    7.962    9.312   11.152   20.465   23.542   26.296   28.845   32.000
 17      6.408    7.564    8.672   10.085   12.002   21.615   24.769   27.587   30.191   33.409
 18      7.015    8.231    9.390   10.865   12.857   22.760   25.989   28.869   31.526   34.805
 19      7.633    8.907   10.117   11.651   13.716   23.900   27.204   30.144   32.852   36.191
 20      8.260    9.591   10.851   12.443   14.578   25.038   28.412   31.410   34.170   37.566
 21      8.897   10.283   11.591   13.240   15.445   26.171   29.615   32.671   35.479   38.932
 22      9.542   10.982   12.338   14.041   16.314   27.301   30.813   33.924   36.781   40.289
 23     10.196   11.689   13.091   14.848   17.187   28.429   32.007   35.172   38.076   41.638
 24     10.856   12.401   13.848   15.658   18.062   29.553   33.196   36.415   39.364   42.980
 25     11.524   13.120   14.611   16.473   18.940   30.675   34.382   37.652   40.647   44.314
 26     12.198   13.844   15.379   17.292   19.820   31.795   35.563   38.885   41.923   45.642
 27     12.879   14.573   16.151   18.114   20.703   32.912   36.741   40.113   43.195   46.963
 28     13.565   15.308   16.928   18.939   21.588   34.027   37.916   41.337   44.461   48.278
 29     14.256   16.047   17.708   19.768   22.475   35.139   39.087   42.557   45.722   49.588
 30     14.953   16.791   18.493   20.599   23.364   36.250   40.256   43.773   46.979   50.892

Source: From Table IV of Fisher and Yates, Statistical Tables for Biological,
Agricultural and Medical Research, published by Longman Group Ltd
(previously published by Oliver and Boyd, Edinburgh, 1963).
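Entries of Table-4 can be sanity-checked without special libraries when the degrees of freedom are even: for v = 2k, the right-tail area beyond x has the closed form P(χ² > x) = e^(−x/2) Σᵢ₌₀^(k−1) (x/2)ⁱ/i!. A sketch (the values 9.488 and 5.991 are the .05 entries for 4 and 2 degrees of freedom):

```python
from math import exp, factorial

def chi2_right_tail_even_df(x: float, df: int) -> float:
    """P(chi-square > x) for even df, using the closed-form series."""
    assert df % 2 == 0, "closed form holds only for even degrees of freedom"
    k = df // 2
    return exp(-x / 2) * sum((x / 2) ** i / factorial(i) for i in range(k))

# Table-4 gives 9.488 as the .05 critical value for 4 d.f.
print(round(chi2_right_tail_even_df(9.488, 4), 3))  # 0.05
```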


Appendix Table-5 Table of t (One Tail Area)

Entries are values of t(α, v): the point cutting off an upper-tail area α for v degrees of freedom.

            Probability (Level of Significance)
d.f. (v)   0.1      0.05     0.025    0.01     0.005
   1      3.078    6.3138   12.706   31.821   63.657
   2      1.886    2.9200    4.3027   6.965    9.9248
   3      1.638    2.3534    3.1825   4.541    5.8409
   4      1.533    2.1318    2.7764   3.747    4.6041
   5      1.476    2.0150    2.5706   3.365    4.0321
   6      1.440    1.9432    2.4469   3.143    3.7074
   7      1.415    1.8946    2.3646   2.998    3.4995
   8      1.397    1.8595    2.3060   2.896    3.3554
   9      1.383    1.8331    2.2622   2.821    3.2498
  10      1.372    1.8125    2.2281   2.764    3.1693
  11      1.363    1.7959    2.2010   2.718    3.1058
  12      1.356    1.7823    2.1788   2.681    3.0545
  13      1.350    1.7709    2.1604   2.650    3.0123
  14      1.345    1.7613    2.1448   2.624    2.9768
  15      1.341    1.7530    2.1315   2.602    2.9467
  16      1.337    1.7459    2.1199   2.583    2.9208
  17      1.333    1.7396    2.1098   2.567    2.8982
  18      1.330    1.7341    2.1009   2.552    2.8784
  19      1.328    1.7291    2.0930   2.539    2.8609
  20      1.325    1.7247    2.0860   2.528    2.8453
  21      1.323    1.7207    2.0796   2.518    2.8314
  22      1.321    1.7171    2.0739   2.508    2.8188
  23      1.319    1.7139    2.0687   2.500    2.8073
  24      1.318    1.7109    2.0639   2.492    2.7969
  25      1.316    1.7081    2.0595   2.485    2.7874
  26      1.315    1.7056    2.0555   2.479    2.7787
  27      1.314    1.7033    2.0518   2.473    2.7707
  28      1.313    1.7011    2.0484   2.467    2.7633
  29      1.311    1.6991    2.0452   2.462    2.7564
  30      1.310    1.6973    2.0423   2.457    2.7500
  35      1.3062   1.6896    2.0301   2.438    2.7239
  40      1.3031   1.6839    2.0211   2.423    2.7045
  45      1.3007   1.6794    2.0141   2.412    2.6896
  50      1.2987   1.6759    2.0086   2.403    2.6778
  60      1.2959   1.6707    2.0003   2.390    2.6603
  70      1.2938   1.6669    1.9944   2.381    2.6480
  80      1.2922   1.6641    1.9901   2.374    2.6388
  90      1.2910   1.6620    1.9867   2.368    2.6316
 100      1.2901   1.6602    1.9840   2.364    2.6260
 120      1.2887   1.6577    1.9799   2.358    2.6175
 140      1.2876   1.6558    1.9771   2.353    2.6114
 160      1.2869   1.6545    1.9749   2.350    2.6070
 180      1.2863   1.6534    1.9733   2.347    2.6035
 200      1.2858   1.6525    1.9719   2.345    2.6006
  ∞       1.282    1.645     1.960    2.326    2.576
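A quick check on the t table: the entries for infinite degrees of freedom coincide with the standard normal quantiles, e.g. t(0.05, ∞) = 1.645 and t(0.025, ∞) = 1.96. A stdlib sketch using statistics.NormalDist:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal, mean 0, sd 1
# a one-tail area of 0.05 corresponds to cumulative probability 0.95
print(round(z.inv_cdf(0.95), 3))   # 1.645
print(round(z.inv_cdf(0.975), 2))  # 1.96
```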
