Post-hoc Tests After ANOVA: Practical Examples

Explore diverse examples of post-hoc tests after ANOVA to understand their application in statistical analysis.
By Jamie

Understanding Post-hoc Tests After ANOVA

When conducting an ANOVA (Analysis of Variance), researchers often find a significant overall difference among group means. A significant F-test, however, only tells you that at least one group mean differs from the others; it does not say which groups differ from each other. This is where post-hoc tests come in: they perform pairwise comparisons, with an adjustment for multiple testing, to identify the specific differences. Here, we present three practical examples of post-hoc tests that demonstrate their application in real-world scenarios.
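To make the starting point concrete, here is a minimal sketch of the one-way ANOVA that precedes any post-hoc test. The three groups and their measurements are invented for illustration:

```python
# Illustrative one-way ANOVA on three hypothetical groups (made-up data).
from scipy import stats

group_a = [20, 22, 19, 21, 23]
group_b = [15, 14, 16, 15, 17]
group_c = [18, 19, 17, 20, 18]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says *some* group means differ, but not which pairs.
# Identifying the pairs is the job of a post-hoc test.
```

Only when this overall p-value is significant does it make sense to proceed to pairwise comparisons.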

Example 1: Comparing Plant Growth Under Different Light Conditions

Context

A botanist wants to study the effects of three different types of light (natural, fluorescent, and LED) on the growth of a specific plant species. After conducting an ANOVA, the researcher finds significant differences in average growth among the groups. To determine which light type promotes the most growth, a post-hoc test is necessary.

Example

The ANOVA results indicate a significant difference (p < 0.05) among the three light conditions. The botanist decides to use Tukey’s HSD (Honestly Significant Difference) post-hoc test to compare the means:

  • Natural Light: 20 cm
  • Fluorescent Light: 15 cm
  • LED Light: 18 cm

The Tukey test reveals:

  • Natural vs. Fluorescent: Significant difference (p = 0.01)
  • Natural vs. LED: No significant difference (p = 0.08)
  • Fluorescent vs. LED: Significant difference (p = 0.04)

Notes

Tukey’s HSD is particularly useful when the sample sizes are equal (it also assumes roughly equal variances across groups) and the number of comparisons is moderate. It controls the family-wise error rate, i.e., the probability of at least one false positive across all pairwise comparisons, rather than the error rate of each comparison in isolation.
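A Tukey HSD analysis like the one above can be sketched with SciPy's built-in `tukey_hsd`. The growth measurements below are invented for illustration and will not reproduce the exact p-values quoted in the example:

```python
# Hedged sketch: Tukey's HSD on hypothetical plant-growth data (cm).
from scipy import stats

natural     = [20, 21, 19, 22, 18]
fluorescent = [15, 14, 16, 15, 17]
led         = [18, 17, 19, 18, 20]

result = stats.tukey_hsd(natural, fluorescent, led)
print(result)  # formatted table of all pairwise comparisons
# result.pvalue[i][j] holds the adjusted p-value for groups i and j,
# in the order the samples were passed in.
```

`tukey_hsd` adjusts every pairwise p-value at once, so the printed table can be read directly against the chosen significance level.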

Example 2: Evaluating Student Performance Across Teaching Methods

Context

An educator wants to assess the effectiveness of three teaching methods (traditional lecture, online learning, and hybrid) on student performance in mathematics. After performing an ANOVA, the educator finds significant differences in test scores among the groups.

Example

The ANOVA yields a significant p-value (p < 0.05). To explore differences, the educator uses the Bonferroni post-hoc test:

  • Traditional Lecture: Mean score 78
  • Online Learning: Mean score 85
  • Hybrid: Mean score 82

Bonferroni test results show:

  • Traditional vs. Online: Significant difference (p = 0.02)
  • Traditional vs. Hybrid: No significant difference (p = 0.10)
  • Online vs. Hybrid: Significant difference (p = 0.03)

Notes

The Bonferroni correction is simple and conservative: each p-value is multiplied by the number of comparisons (equivalently, the significance threshold is divided by it). While this reduces the chance of Type I errors (false positives), it increases the risk of Type II errors (failing to detect a true effect), and it becomes increasingly conservative as the number of comparisons grows.
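Because the adjustment is just a multiplication, Bonferroni is easy to implement by hand. The sketch below runs each pairwise t-test and scales the p-values; the scores are invented for illustration and will not match the p-values quoted above:

```python
# Minimal Bonferroni sketch on hypothetical test scores.
from itertools import combinations
from scipy import stats

scores = {
    "traditional": [78, 75, 80, 77, 79],
    "online":      [85, 87, 84, 86, 83],
    "hybrid":      [82, 80, 83, 81, 84],
}

pairs = list(combinations(scores, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: scale p, cap at 1
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```

The same adjustment (along with less conservative alternatives such as Holm's method) is available via `statsmodels.stats.multitest.multipletests` if statsmodels is installed.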

Example 3: Analyzing Customer Satisfaction Across Service Channels

Context

A retail manager is interested in understanding customer satisfaction levels across three service channels: in-store, online, and phone support. After conducting an ANOVA, the manager finds significant differences in satisfaction ratings.

Example

The ANOVA results show a significant effect (p < 0.05). To pinpoint the differences, the manager employs the Scheffé test:

  • In-store: Mean satisfaction 4.5/5
  • Online: Mean satisfaction 3.8/5
  • Phone: Mean satisfaction 4.0/5

Scheffé test results indicate:

  • In-store vs. Online: Significant difference (p = 0.01)
  • In-store vs. Phone: Significant difference (p = 0.04)
  • Online vs. Phone: No significant difference (p = 0.15)

Notes

The Scheffé test is versatile: it handles unequal sample sizes and controls the family-wise error rate for all possible contrasts, not just pairwise comparisons (for example, comparing the in-store mean against the average of the online and phone means). That generality makes it conservative, so it is less powerful than Tukey’s HSD for simple pairwise comparisons with equal sample sizes.
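SciPy has no built-in Scheffé test, so the sketch below implements the pairwise case from scratch on hypothetical satisfaction ratings (invented for illustration). For a pairwise contrast, the Scheffé statistic F_s = (mean_i − mean_j)² / (MSE · (1/n_i + 1/n_j)) is compared against (k − 1) · F_crit(α; k − 1, N − k):

```python
# From-scratch Scheffé sketch on hypothetical satisfaction data (1-5 scale).
import numpy as np
from scipy import stats

groups = {
    "in_store": [4.5, 4.6, 4.4, 4.7, 4.3],
    "online":   [3.8, 3.7, 3.9, 3.6, 4.0],
    "phone":    [4.0, 4.1, 3.9, 4.2, 3.8],
}

k = len(groups)
all_vals = [np.asarray(v, dtype=float) for v in groups.values()]
n_total = sum(len(v) for v in all_vals)
# Pooled within-group variance (mean square error).
mse = sum(((v - v.mean()) ** 2).sum() for v in all_vals) / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        a, b = all_vals[i], all_vals[j]
        f_s = (a.mean() - b.mean()) ** 2 / (mse * (1 / len(a) + 1 / len(b)))
        significant = f_s > (k - 1) * f_crit  # Scheffé criterion
        print(f"{names[i]} vs {names[j]}: F_s = {f_s:.2f}, "
              f"significant = {significant}")
```

With these made-up ratings the pattern matches the example above (the two in-store comparisons come out significant, online vs. phone does not), though the actual p-values in a real analysis would depend on the data.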

In summary, post-hoc tests are essential for understanding the specific differences among group means following an ANOVA. Each of the examples above showcases different contexts and post-hoc tests, highlighting their importance in statistical analysis.