A t-test is performed to compare the means of two groups of data. How do you know that the difference between the means you observed is "real"? It could be that you just happened to see a difference because your sample size is small and by chance you ended up with larger numbers in one group and smaller numbers in the other. The t-test quantifies how likely it is that a difference as large as the one you observed would arise by chance alone. Using Microsoft Excel, the following function returns the probability associated with a Student's t-test:

=TTEST(array1, array2, tails, type)

Here array1 and array2 are the two ranges of data; tails is 1 for a one-tailed test or 2 for a two-tailed test; and type is 1 for a paired test, 2 for a two-sample test assuming equal variances, or 3 for a two-sample test assuming unequal variances.
The number that you get from the function is called the P-value, which ranges from 0 to 1 (if you get a number outside these limits, then you have not set up the function properly). P represents the probability that a difference as large as the one you observed would occur by chance if the two means were really the same. If P is high, then there is probably no real difference between the two groups; if P is low, then there probably is a difference. By convention, we use 5% as a cutoff, so if P is less than 0.05, we say that the difference between the means is statistically significant, because the chance of seeing such a difference when the means are truly the same is less than 5%. We can live with that level of uncertainty.
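The cutoff convention described above can be sketched as a one-line check. This is just an illustration in Python; the helper name is made up and is not part of Excel:

```python
# Decision rule from the text: a P-value below the 5% cutoff means the
# difference is called statistically significant.  `is_significant` is a
# hypothetical helper name, not an Excel function.
def is_significant(p, alpha=0.05):
    # A P-value outside 0..1 means the test was set up improperly.
    if not 0.0 <= p <= 1.0:
        raise ValueError("a P-value must lie between 0 and 1")
    return p < alpha

print(is_significant(0.03))   # True: below the 5% cutoff
print(is_significant(0.20))   # False: no evidence of a difference
```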


Suppose we do an experiment wherein we measure respiration rate in 17 fish at rest, then the same fish during exercise. We get the following data:
 1   Respiration (/min)
 2   Rest     Exercise
 3   78       84
 4   120      92
 5   88       116
 6   39       100
 7   80       84
 8   56       104
 9   52       104
10   72       120
11   96       96
12   88       162
13   90       108
14   96       156
15   80       104
16   102      138
17   100      140
18   86       122
19   84       110

(The numbers down the left side are the spreadsheet row numbers; the data occupy rows 3 through 19.)
With the Rest and Exercise data entered in adjacent spreadsheet columns as shown above, the formula is completed as follows (assuming, for example, that the Rest data occupy cells A3:A19 and the Exercise data B3:B19):

=TTEST(A3:A19, B3:B19, 1, 1)

The third argument (1) asks for a one-tailed test, and the fourth argument (1) specifies a paired test, since each fish was measured both at rest and during exercise.
The result is 6.82779E-05, or 6.82779 x 10^-5, which translates as 0.0000683. This P-value means that we would expect to see a difference this large by random chance only about 0.007% of the time, so the difference between the means is highly significant: we can say that the two means are significantly different. "Significantly different" is a term of art--we must never say it unless a statistical test has actually shown it to be so.
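The arithmetic behind that result can be reproduced outside Excel. The sketch below, in plain Python (standard library only, names my own), computes the paired t statistic for the respiration data: the per-fish differences, their mean and standard error, and t = mean difference / standard error. With 16 degrees of freedom, a t of about 4.98 corresponds to the one-tailed P-value of roughly 6.8 x 10^-5 reported above.

```python
# Paired t-test on the respiration data, sketched in pure Python.
# This reproduces the t statistic underlying Excel's TTEST(..., 1, 1);
# variable and function names here are mine, not from the handout.
import math

rest     = [78, 120, 88, 39, 80, 56, 52, 72, 96, 88, 90, 96, 80, 102, 100, 86, 84]
exercise = [84, 92, 116, 100, 84, 104, 104, 120, 96, 162, 108, 156, 104, 138, 140, 122, 110]

def paired_t(a, b):
    """Return (t, df) for a paired t-test on two equal-length samples."""
    n = len(a)
    d = [y - x for x, y in zip(a, b)]                    # per-fish differences
    mean_d = sum(d) / n                                  # mean difference
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)                            # standard error of the mean difference
    return mean_d / se, n - 1

t, df = paired_t(rest, exercise)
print(round(t, 3), df)    # t is about 4.979 with 16 degrees of freedom
```

Because this is a paired design (the same 17 fish measured twice), the test works on the within-fish differences rather than treating the two columns as independent samples.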

[ 8/23/01 jrc]