At the end of this section you should be able to answer the following questions:
- What is the main idea underpinning statistical significance?
- Can we interpret a non-significant result as “no difference between means” or “no relationship between variables?”
Understanding statistical significance is important, and analogies from other disciplines can sometimes make the underlying ideas clearer.
To illustrate statistical significance without statistical jargon, think of a courtroom where people are tried for alleged crimes. In courtrooms using legal systems based on English Common Law, there are generally only two outcomes permitted from the jury after evidence is presented: “GUILTY” or “NOT GUILTY.” Either outcome is really just a decision, and that decision must also correspond to the true state of reality to be correct.
The decision the jurors have to make is based on the following logic. If the evidence is inconsistent with the assumption of innocence, the verdict “GUILTY” is announced. In contrast, if the evidence is not inconsistent with the assumption of innocence, the verdict “NOT GUILTY” is announced; the term “INNOCENT” is never used, because it is impossible to truly prove innocence.
Now keep this analogy in your mind, while we go back to understanding statistical significance testing.
Assuming that the statistical distribution is the same in the two populations of interest means assuming the null hypothesis is true, which is analogous to assuming that the defendant is innocent in law.
If the evidence from the data of the study is inconsistent with the null hypothesis, we “reject the null hypothesis,” state that the difference is “statistically significant,” and conclude that the null hypothesis is unlikely to be true. In contrast, a non-significant result only tells us that the observed effect is small enough to plausibly be a chance finding. It does not tell us that the effect is zero.
Therefore, we never interpret a non-significant result as “no difference between means” or “no relationship between variables.” It could simply mean that the test was unable to detect the difference or association. Accordingly, non-significant results shouldn’t be interpreted as NO EFFECT. They could be due to many things, including a small effect size or low statistical power of the experiment.
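A small simulation can make this concrete. The sketch below (a minimal, standard-library-only illustration; the helper name `permutation_test`, the sample sizes, and the effect size are assumptions chosen for the example) draws two groups whose population means really do differ, then runs a permutation test. With only 10 observations per group, such a test will often fail to reach significance even though the effect is real:

```python
import random
import statistics

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_diff = abs(statistics.mean(pooled[:len(a)]) -
                        statistics.mean(pooled[len(a):]))
        if perm_diff >= observed:
            count += 1
    return count / n_perm

# A real but small difference between population means (0 vs 0.3),
# measured with only 10 observations per group:
rng = random.Random(1)
group_a = [rng.gauss(0.0, 1.0) for _ in range(10)]
group_b = [rng.gauss(0.3, 1.0) for _ in range(10)]

p = permutation_test(group_a, group_b)
print(f"p = {p:.3f}")  # with samples this small, p is often well above 0.05
```

A non-significant p-value here reflects the low power of the design, not the absence of an effect: rerunning the same comparison with larger samples would usually detect the difference.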
Additionally, significant results are often interpreted (or over-interpreted) as important results and may be confused with large effects: statistical significance does not imply practical importance.
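The converse of the power problem can also be simulated. In this sketch (again standard-library-only; the helper name `z_test_p`, the sample sizes, and the effect size are illustrative assumptions), a trivially small true difference of 0.05 standard deviations becomes highly significant once the samples are large enough, even though the effect itself remains negligible:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a difference in means, using the normal
    approximation (reasonable for large samples)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

# A tiny true difference (0.05 standard deviations) measured on
# 100,000 observations per group:
rng = random.Random(7)
group_a = [rng.gauss(0.00, 1.0) for _ in range(100_000)]
group_b = [rng.gauss(0.05, 1.0) for _ in range(100_000)]

p = z_test_p(group_a, group_b)
diff = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
print(f"p = {p:.2e}, observed difference = {diff:.3f}")
```

The p-value is tiny, yet the group means differ by only about 0.05 standard deviations, which is why significance should always be reported alongside an effect size.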