Post-Hoc Power Analysis Examples

Explore practical examples of post-hoc power analysis in various research contexts.
By Jamie

Understanding Post-Hoc Power Analysis

Post-hoc power analysis is a method for estimating the power of a statistical test after the data have been collected and analyzed. It can indicate whether a study had sufficient power to detect an effect of the size actually observed, which makes it particularly useful for interpreting studies that did not find significant effects. Below are three diverse, practical examples of post-hoc power analysis.

Example 1: Clinical Trial Outcomes

In a clinical trial evaluating the effectiveness of a new drug for hypertension, researchers conducted a study with 50 participants. The trial aimed to determine whether the drug significantly reduced blood pressure compared to a placebo. After analyzing the data, the researchers found no significant difference between the two groups. To understand the power of their analysis, they conducted a post-hoc power analysis.

The researchers calculated the observed effect size, which was small (Cohen’s d = 0.2), and found that the power of the test was only 0.25: a 25% chance of detecting an effect of that size if it truly existed. This low power suggests the sample was too small to detect a clinically relevant effect and that the trial was likely underpowered. The researchers therefore recommended a follow-up study with a larger sample size to better assess the drug’s efficacy.
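A calculation like the one above can be sketched with only the Python standard library, using the normal approximation to a two-sided, two-sample test. The equal split of 25 per group is an assumption (the trial reports only 50 participants total), and the normal approximation slightly overstates power relative to an exact noncentral-t calculation, so the result will not necessarily reproduce the 0.25 figure reported here.

```python
from math import sqrt
from statistics import NormalDist

def posthoc_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample test at effect size d
    (normal approximation, equal group sizes assumed)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # two-sided critical value (~1.96)
    ncp = d * sqrt(n_per_group / 2)     # noncentrality parameter
    # Probability the test statistic lands in either rejection region.
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

print(f"Approximate power: {posthoc_power(0.2, 25):.2f}")
```

Note how strongly power depends on the effect size: doubling d to 0.4 with the same groups more than triples the approximate power.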

Notes

  • Effect Size Calculation: Small effect sizes require substantially larger sample sizes to achieve adequate power.
  • Sample Size Recommendations: Future studies could consider a sample size of at least 100 participants to improve power.

Example 2: Educational Intervention Study

An education researcher conducted a study to evaluate the effectiveness of a new teaching method on student performance in mathematics. The study included 30 students in the experimental group and 30 in the control group. After the intervention, the researcher reported no significant difference in test scores. To investigate the power of the study, a post-hoc power analysis was performed.

Using the observed means and standard deviations, the researcher found an effect size of Cohen’s d = 0.3. The resulting power was 0.40, indicating only a 40% chance of detecting a difference of that size. This suggests the sample size was likely insufficient to detect the intended educational impact.

Notes

  • Recommendations for Future Studies: Increasing the sample size to at least 60 students per group may enhance the chances of detecting a significant effect.
  • Consideration of Variability: High variability in test scores may necessitate larger sample sizes.
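Turning a recommendation like this into a concrete number is a standard a-priori calculation. The sketch below inverts the normal-approximation power formula, n per group = 2 · ((z_crit + z_power) / d)². The 0.80 target power and 0.05 alpha are conventional choices, not figures from the study, and the helper name is mine.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Approximate per-group sample size for a two-sided, two-sample test
    to reach the target power at effect size d (normal approximation)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_pow = z.inv_cdf(power)            # quantile for the target power
    return ceil(2 * ((z_crit + z_pow) / d) ** 2)

print(n_per_group(0.3))   # students per group needed at d = 0.3
```

Running this for d = 0.3 at 80% power gives a per-group size well above 60, which illustrates why "may enhance the chances" is the right hedge: a modest increase improves power without guaranteeing a conventionally powered design.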

Example 3: Marketing Campaign Effectiveness

A marketing analyst evaluated the effectiveness of a new advertising campaign by comparing sales data from two different regions. The analysis included 40 stores in the campaign region and 40 in the control region. The results showed no significant increase in sales for the campaign stores. To assess the power of the analysis, a post-hoc power analysis was conducted.

The analyst calculated an effect size of Cohen’s d = 0.1, giving a power of only 0.15, i.e. a 15% chance of detecting an effect of that size. This suggests the campaign’s impact was not adequately tested. The analyst recommended further research with a larger number of stores and possibly a longer campaign duration to gather more data.
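A complementary way to frame this result is the minimum detectable effect (MDE): the smallest Cohen’s d the design could detect at a target power. The sketch below rearranges the same normal-approximation power formula; the 80% power target and 0.05 alpha are conventional assumptions, and only the 40 stores per group comes from the example.

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(n_per_group: int, power: float = 0.80,
                          alpha: float = 0.05) -> float:
    """Smallest Cohen's d detectable at the target power with a two-sided,
    two-sample test (normal approximation, equal group sizes)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    z_pow = z.inv_cdf(power)
    return (z_crit + z_pow) * sqrt(2 / n_per_group)

print(f"MDE with 40 stores per group: d = {min_detectable_effect(40):.2f}")
```

With 40 stores per group the MDE comes out far above the observed d = 0.1, which makes the mismatch between the design and the effect being chased immediately visible.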

Notes

  • Implications for Marketing Strategies: Understanding power can help marketers make informed decisions about the effectiveness of campaigns.
  • Sample Size Considerations: Larger sample sizes may be required to account for variations in consumer behavior across different regions.