Best examples of statistical power lab report examples for 2024–2025

If you’re trying to write about power analysis and keep Googling **examples of statistical power lab report examples**, you’re not alone. Power is one of those concepts everyone nods along to in stats class but then struggles to explain clearly in a lab report. The good news: once you see several real examples laid out, it gets much easier to write your own. This guide walks through the best examples of statistical power lab report examples across psychology, public health, biology, and data science. You’ll see how students and researchers justify sample size, report power calculations, and interpret underpowered or overpowered studies. Instead of vague theory, we’ll focus on realistic scenarios, typical wording, and the kind of numbers your instructor or PI actually expects to see. By the end, you’ll have a set of concrete templates and real examples you can adapt directly into your next statistics lab report.
Written by Jamie

Before talking formulas, it helps to see how power analysis actually shows up in writing. Below are several examples of statistical power lab report examples drawn from common undergraduate and early graduate projects. I’ll keep the math light and the wording practical, so you can lift the structure and adapt it to your own topic.


Example of a psychology memory experiment (paired t‑test)

In a typical cognitive psychology lab, students compare memory performance with vs. without a mnemonic strategy. Here’s how power might be written up.

Power and sample size
A priori power analysis was conducted in G*Power 3.1 to determine the required sample size for a paired-samples t‑test. Based on a medium effect size (Cohen’s d = 0.5), α = .05 (two-tailed), and desired power of 0.80, the analysis indicated a minimum of 34 participants. We recruited 40 participants to account for potential exclusions. The achieved power for the observed effect size (d = 0.62) was approximately 0.97.

This is one of the best examples of statistical power lab report examples because it:

  • States the type of test (paired-samples t‑test)
  • Names the software (G*Power) and version
  • Reports assumed effect size, alpha, and target power
  • Shows planned vs. actual sample size and achieved power

You can swap in your own test (independent t‑test, ANOVA, regression) and your own effect size assumptions.
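You can also sanity-check a required-N figure like the 34 above by hand. The sketch below uses only Python's standard library and a normal-approximation formula (the function name is mine, not G*Power's); it slightly underestimates the exact noncentral-t answer, which is why software reports 34 rather than 32:

```python
import math
from statistics import NormalDist

def paired_t_n(d, alpha=0.05, power=0.80):
    """Approximate sample size for a two-tailed paired-samples t-test.

    Normal approximation: n = ((z_{alpha/2} + z_beta) / d) ** 2.
    Slightly underestimates the exact noncentral-t result for small n.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = .80
    return math.ceil(((z_alpha + z_beta) / d) ** 2)

print(paired_t_n(0.5))  # 32; G*Power's exact noncentral-t answer is 34
```

If your hand calculation lands within a participant or two of the software, your parameters almost certainly match; a big gap usually means a mismatched effect size metric or a one- vs. two-tailed mix-up.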


Public health example: vaccine uptake (proportion comparison)

Public health labs often compare two proportions, such as vaccination uptake in an intervention vs. control group. Here’s an example of how power is written in that context.

Power analysis
We planned a two-group comparison of vaccination uptake (intervention vs. usual care). An a priori power analysis for a chi-square test of independence was conducted using an anticipated increase from 55% to 70% uptake (risk difference = 0.15), α = .05, and power = 0.80. The analysis indicated that 164 participants per group were required. We enrolled 340 participants (171 intervention, 169 control), providing slightly higher power (0.81) for detecting the targeted effect.

If your topic touches vaccines or screening programs, you can cross-check realistic effect sizes and baseline rates using sources like the CDC or NIH. Those external data help justify why your assumed effect size is not just made up.
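For two-proportion designs, the classic normal-approximation sample-size formula can be coded directly as a rough cross-check on software output. This is a sketch (the helper is hypothetical, not a library API), and different conventions — pooled vs. unpooled variance, continuity corrections, or G*Power's effect-size w route — give somewhat different per-group numbers, so expect your software to disagree by a handful of participants:

```python
import math
from statistics import NormalDist

def two_prop_n(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group for comparing two independent proportions.

    Normal approximation with pooled variance under H0 and unpooled
    variance under H1; no continuity correction.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    h0_sd = math.sqrt(2 * p_bar * (1 - p_bar))            # SD under H0
    h1_sd = math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))       # SD under H1
    return math.ceil(((z_a * h0_sd + z_b * h1_sd) / abs(p2 - p1)) ** 2)

print(two_prop_n(0.55, 0.70))  # ~163 per group with this formula
```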


Biology lab: enzyme activity across temperatures (one-way ANOVA)

Biology courses frequently run enzyme or growth-rate experiments with 3–5 conditions. Here’s how power might be reported for a one-way ANOVA.

Sample size justification and power
We compared mean enzyme activity across four temperature conditions (20°C, 40°C, 60°C, 80°C) using a one-way ANOVA. An a priori power analysis in G*Power for a fixed-effects ANOVA with four groups, α = .05, and desired power = 0.80 indicated that 180 total observations were needed to detect a medium effect (f = 0.25). We collected 184 observations (46 per temperature), which yielded an achieved power of approximately 0.89 for the observed effect size (f = 0.28).

This kind of paragraph is a clean example of how to connect design (four groups) to the power calculation and then to the actual data you collected.
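Any ANOVA power figure can also be checked by Monte Carlo: simulate data at the assumed effect size many times and count how often the F test rejects. The standard-library sketch below (a hypothetical setup, with a table-derived critical value rather than a computed one) runs the textbook medium-effect case with four groups of 45, for which G*Power reports power of about 0.80:

```python
import random
import statistics

# Hypothetical scenario: one-way ANOVA, 4 groups of 45 (N = 180),
# medium effect f = 0.25 (SD of group means / within-group SD).
# Critical value F(3, 176) at alpha = .05 is roughly 2.66 (from tables).
random.seed(1)
MEANS = [-0.25, -0.25, 0.25, 0.25]   # population SD of these means = 0.25
N_PER, F_CRIT, REPS = 45, 2.66, 2000

def f_statistic(groups):
    """Classic one-way ANOVA F statistic for equal-sized groups."""
    k, n = len(groups), len(groups[0])
    group_means = [statistics.fmean(g) for g in groups]
    grand = statistics.fmean(group_means)  # valid because groups are equal-sized
    ms_between = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
    return ms_between / (ss_within / (k * (n - 1)))

rejections = 0
for _ in range(REPS):
    groups = [[random.gauss(m, 1.0) for _ in range(N_PER)] for m in MEANS]
    if f_statistic(groups) > F_CRIT:
        rejections += 1

print(rejections / REPS)  # roughly 0.80, matching the a priori target
```

Simulation is slower than a formula but generalizes to designs G*Power doesn't cover, and it makes the meaning of power concrete: the long-run rejection rate when the assumed effect is real.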


Data science / regression: predicting blood pressure

In applied statistics or data science labs, you might run multiple linear regression and need to justify how many cases you used.

Power for multiple regression
We modeled systolic blood pressure as a function of age, BMI, smoking status, and physical activity (four predictors). Following common guidelines for observational research and using an a priori power analysis for multiple regression (fixed model, R² deviation from zero) with a medium effect size (f² = 0.15), α = .05, and power = 0.80, the recommended minimum sample size was 85 participants. Our dataset included 120 participants, which G*Power estimated as providing power of 0.91 to detect the targeted effect size.

If you want more background on regression power and effect sizes, the UCLA Statistical Consulting Group has helpful tutorials that you can cite in your methods section.
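One detail worth knowing for regression write-ups: f² and R² are interchangeable via f² = R² / (1 − R²), so a medium f² of 0.15 corresponds to the model explaining about 13% of the variance. A two-line check:

```python
f2 = 0.15                 # "medium" regression effect (Cohen's convention)
r2 = f2 / (1 + f2)        # invert f^2 = R^2 / (1 - R^2)
print(round(r2, 3))       # 0.13: a medium f² means the model explains ~13% of variance
```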


Underpowered study example: small sample, big problem

Not every lab has enough participants. One of the most realistic examples of statistical power lab report examples is when you admit that your study was underpowered.

Post hoc power and limitations
Due to time constraints, only 16 participants completed both conditions of the reaction-time task. A post hoc power analysis for a paired-samples t‑test with the observed effect size (d = 0.45) and α = .05 indicated power of approximately 0.39. This low power increases the probability of a Type II error and limits the strength of our conclusions. Non-significant results should therefore be interpreted with caution.

This kind of honesty is not a weakness; it shows you understand what power means and how it affects interpretation.
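If you need a quick post hoc power estimate without opening G*Power, a normal approximation works; note that it runs a few points high for very small samples compared with the exact noncentral-t calculation. A standard-library sketch (the function is illustrative, not a library API):

```python
import math
from statistics import NormalDist

def paired_t_power(d, n, alpha=0.05):
    """Approximate post hoc power for a two-tailed paired-samples t-test.

    Normal approximation: power = Phi(d * sqrt(n) - z_{alpha/2}).
    Overestimates slightly for small n versus the exact noncentral t.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * math.sqrt(n) - z_alpha)

print(round(paired_t_power(0.45, 16), 2))  # ~0.44 (exact noncentral-t value ~0.39)
```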


Overpowered study example: huge dataset, tiny effects

With access to large online datasets (think 10,000+ observations), you can easily end up with extremely high power. That creates a different problem: trivial effects that still reach significance.

Power in large samples
The dataset included 9,842 adults. A sensitivity analysis using G*Power indicated that, for a two-tailed independent t‑test with α = .05 and power = 0.80, our sample size could detect effects as small as Cohen’s d ≈ 0.06. As a result, statistically significant differences in mean stress scores between men and women may reflect very small, practically unimportant effects. We therefore focus on effect sizes and confidence intervals when interpreting group differences.

This is one of the best examples of statistical power lab report examples for big-data projects: it shows that “more power” is not always better if you ignore practical significance.
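The sensitivity logic behind big-sample statements like this can be reproduced directly: given the per-group size, solve for the smallest d detectable at a chosen power. A normal-approximation sketch (the helper name is mine):

```python
import math
from statistics import NormalDist

def min_detectable_d(n_per_group, alpha=0.05, power=0.80):
    """Smallest Cohen's d a two-tailed independent t-test can detect
    at the given power (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) * math.sqrt(2 / n_per_group)

print(round(min_detectable_d(4921), 3))  # ~0.056: near-zero effects reach significance
```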


2024–2025 trend: pre-registration and power transparency

Recent years have put more pressure on researchers to justify sample sizes and power, not just mention them in passing. Journals and instructors increasingly expect:

  • Pre-registered power analyses (for example, via the Open Science Framework)
  • Clear reporting of whether power was calculated before data collection (a priori) or after (post hoc)
  • Discussion of how power affects interpretation, especially for non-significant results

If you want to reference current best practices in your lab report, you can point to guidelines like the NIH’s policies on rigor and reproducibility or open-science tutorials from major universities such as Harvard’s statistics resources.

When you write, make that trend explicit. For example:

Consistent with current recommendations for transparent reporting (2020–2024), we conducted an a priori power analysis and pre-registered our planned sample size and analysis decisions prior to data collection.

That kind of sentence immediately elevates your report above a generic class assignment.


How to structure your own statistical power section

Looking across these examples of statistical power lab report examples, most of them share the same backbone. In your own report, aim to cover:

1. Type of test and design
Mention whether you used a t‑test, ANOVA, chi-square, regression, or something else, and describe the design (two groups, repeated measures, number of predictors, etc.).

2. Power analysis timing
State whether the analysis was a priori (before data collection), post hoc (after), or a sensitivity analysis (what effect size you can detect, given the sample you already have).

3. Parameters used
Include:

  • Effect size metric and value (Cohen’s d, f, f², odds ratio, difference in proportions)
  • Alpha level (usually .05)
  • Desired power (often 0.80, sometimes 0.90)
  • Software used (G*Power, R pwr package, etc.)

4. Sample size and achieved power
Report the recommended sample size and what you actually collected. If you ran a post hoc analysis, give the achieved power for the observed effect size.

5. Interpretation and limitations
Explain what the power level means for your conclusions. If power is low, say that non-significant results might be false negatives. If power is extremely high, emphasize effect sizes over p-values.

Here’s a template you can adapt:

An a priori power analysis for a [TEST TYPE] with [NUMBER OF GROUPS/PREDICTORS], α = .05, and desired power = 0.80, assuming a [SMALL/MEDIUM/LARGE] effect size ([EFFECT SIZE METRIC] = [VALUE]), indicated that [N] observations were required. We collected [ACTUAL N], which yielded an achieved power of [POWER VALUE] for the observed effect size.

Use this as a skeleton, then borrow phrasing from the best examples of statistical power lab report examples above.


More real examples: from different disciplines

To round things out, here are a few more real examples you might encounter in 2024–2025 courses.

Nursing research: pain scores before and after an intervention

Power and sample size
For a within-subjects design comparing pain scores before and after a relaxation intervention, an a priori power analysis for a paired-samples t‑test (α = .05, power = 0.80, medium effect size d = 0.5) recommended a minimum of 34 participants. We enrolled 38 patients from the outpatient clinic, which G*Power estimated as providing power of approximately 0.85 to detect the targeted effect.

You can support your assumed effect size by citing systematic reviews or meta-analyses of similar relaxation interventions (for example, via PubMed or the Cochrane Library) that report typical pain-score reductions.

Education research: test scores under two teaching methods

Power analysis
We compared math test scores for students taught using a traditional lecture vs. an active-learning approach. An a priori power analysis for an independent-samples t‑test, assuming a moderate effect size (Cohen’s d = 0.6) based on prior meta-analyses of active learning, α = .05, and power = 0.80, indicated that 45 students per group were needed. The final sample included 92 students (46 per group), resulting in an achieved power of 0.81.

Again, this is a clear example of how to tie your effect size to published research instead of guessing.
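The achieved-power figure in an example like this can be cross-checked with a normal approximation (an illustrative sketch; exact noncentral-t software values run about a point lower):

```python
import math
from statistics import NormalDist

def ind_t_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed independent-samples t-test
    (normal approximation to the noncentral t)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * math.sqrt(n_per_group / 2) - z_alpha)

print(round(ind_t_power(0.6, 46), 2))  # ~0.82; exact software value ~0.81
```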


FAQ: examples of power reporting questions students actually ask

How long should the power section be in a stats lab report?

For most undergraduate reports, two to four sentences are enough if they cover test type, effect size, alpha, target power, and sample size. The longer, more detailed examples of statistical power lab report examples you see here are meant as a menu; pick the pieces that match your assignment’s expectations.

Do I always need an a priori power analysis?

In a perfect world, yes, but in class projects you often inherit the sample size from your instructor or dataset. In that case, a sensitivity or post hoc analysis is fine. Just be transparent about what you did and avoid overstating what the power means.

What is a good example of reporting underpowered results?

A solid example of underpowered reporting looks like this:

With only 12 participants per group, post hoc power for detecting a medium effect (d = 0.5) was approximately 0.21. Therefore, the non-significant difference between groups may reflect insufficient statistical power rather than the absence of a true effect.

This kind of statement aligns well with the other examples of statistical power lab report examples in this guide.

Where can I learn more about power and sample size?

For deeper background and formulas, check the resources mentioned throughout this guide: the G*Power manual, the UCLA Statistical Consulting Group’s power tutorials, and Cohen’s (1988) *Statistical Power Analysis for the Behavioral Sciences*, which defines the small/medium/large conventions used above. Use those as references in your lab report when you justify your choices.


If you model your writing on these best examples of statistical power lab report examples, you’ll hit the expectations of most 2024–2025 statistics, psychology, biology, and data science courses: clear power logic, transparent assumptions, and honest interpretation of what your sample size can and cannot tell you.
