Real-world examples of effect size in inferential statistics

When people first meet effect size, they usually see a formula and a Greek letter, then quietly panic. The fastest way past that? Look at real examples of effect size in inferential statistics and see how they show *how big* a finding really is, not just whether p < 0.05. In practice, researchers, analysts, and policy makers care less about “Is there any difference?” and more about “Is the difference big enough to matter?” This guide walks through practical examples of effect size in inferential statistics across psychology, medicine, education, and business. You’ll see how Cohen’s d, odds ratios, correlation coefficients, and standardized mean differences show up in published studies, policy debates, and everyday analytics at work. Along the way, we’ll compare statistically significant but tiny effects with moderate or large ones that actually justify decisions, budgets, and behavior change.
Written by
Jamie

Starting with real examples of effect size in action

Instead of starting with formulas, let’s start with stories. Here are several real-world examples of effect size in inferential statistics that show why p-values alone are not enough.

In a large randomized trial, a new blood pressure drug might reduce systolic blood pressure by 2 mmHg on average compared to placebo. With tens of thousands of participants, that 2 mmHg difference can be statistically significant. But the effect size — a standardized mean difference such as Cohen’s d around 0.10 — is tiny. Clinically, that may not justify side effects, cost, or policy change.

Contrast that with an education study where a reading intervention improves test scores by 0.6 standard deviations compared to a control group. Here, the effect size is medium-to-large. Even with a modest sample, this standardized effect suggests a meaningful learning gain that might justify training teachers and scaling the program.

Those two cases are both examples of effect size in inferential statistics: same basic inferential machinery, very different practical meaning.


Health and medicine: examples of effect size that guide treatment

Medical research is full of effect size measures: risk ratios, odds ratios, hazard ratios, and standardized mean differences. A few concrete examples include:

1. Vaccine effectiveness as an effect size

Public health agencies routinely report vaccine effectiveness as a percentage reduction in risk. For instance, a vaccine that cuts the risk of symptomatic infection from 10% to 2% has an effect size that can be expressed as a risk ratio of 0.20 (2% / 10%) or a relative risk reduction of 80%.

From an inferential statistics point of view, a confidence interval around that risk ratio tells us about uncertainty, but the effect size itself tells us how much protection people actually get. During COVID-19, the Centers for Disease Control and Prevention (CDC) regularly reported these kinds of effect size estimates, showing how protection varied by variant, booster status, and age group (CDC vaccine effectiveness).

This is one of the clearest examples of effect size in inferential statistics because the number directly drives decisions: whether to recommend boosters, how to prioritize high‑risk groups, and how to communicate benefits to the public.
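The arithmetic behind those headline numbers is simple enough to check yourself. Here is a minimal Python sketch using the hypothetical 10% and 2% risks from the example above:

```python
# Hypothetical risks from the example above:
# 10% symptomatic infection among unvaccinated, 2% among vaccinated.
risk_control = 0.10
risk_vaccinated = 0.02

risk_ratio = risk_vaccinated / risk_control       # 0.20
relative_risk_reduction = 1 - risk_ratio          # 0.80, i.e. 80% effectiveness

print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Vaccine effectiveness: {relative_risk_reduction:.0%}")
```

The confidence interval around the risk ratio would capture the uncertainty; the ratio itself is the effect size.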

2. Drug trials: standardized mean differences

In clinical trials, researchers often compare a new drug to a placebo on a continuous outcome like depression scores or blood pressure. Suppose a trial finds:

  • Drug group mean depression score: 12 (SD = 6)
  • Placebo group mean depression score: 16 (SD = 6)

Cohen’s d = (16 − 12) / 6 = 0.67. That’s a moderate-to-large effect size.

The p-value might be tiny because the sample is large, but the effect size tells clinicians that patients on the drug improve by about two-thirds of a standard deviation. That helps organizations like the National Institutes of Health (NIH) and the Food and Drug Administration (FDA) judge whether the benefit justifies cost and side effects (NIH clinical trials basics).
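Because both groups here have SD = 6, the simple and pooled-SD versions of Cohen’s d coincide. A short Python sketch of the pooled-SD formula (the group sizes of 100 per arm are an assumption, not from the trial description):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical depression scores from the example above (placebo vs. drug).
d = cohens_d(16, 6, 100, 12, 6, 100)
print(f"Cohen's d = {d:.2f}")  # 0.67
```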

3. Survival analysis: hazard ratios as effect size

In oncology trials, the effect of a new cancer therapy is often summarized with a hazard ratio (HR). If a new treatment has HR = 0.70 for death compared with standard care, that means a 30% reduction in the instantaneous risk of death over the follow-up period.

Even if the p-value is borderline, an HR around 0.70 may still be clinically meaningful, especially for severe diseases. This is another example of effect size in inferential statistics where the magnitude of the hazard ratio is what matters for patients and policy makers, not just whether the null hypothesis is rejected.
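To see what an HR of 0.70 means in concrete terms: under the proportional hazards assumption, the treated survival curve equals the control curve raised to the power of the hazard ratio. A quick sketch, using an illustrative (assumed) 50% control survival at some time t:

```python
# Under proportional hazards: S_treatment(t) = S_control(t) ** HR
hr = 0.70
s_control = 0.50                 # assumed: 50% of control patients survive to time t
s_treatment = s_control ** hr    # about 62% in the treatment group

print(f"Treated survival at t: {s_treatment:.0%}")
```

So an HR of 0.70 translates a 50% survival chance into roughly a 62% chance, which is the kind of magnitude patients and regulators actually weigh.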


Psychology and social science: examples of effect size beyond p-values

Psychology is one of the fields that pushed hard for routine reporting of effect sizes, especially after debates about replicability.

4. Therapy vs. waitlist: Cohen’s d in practice

Imagine a study comparing cognitive behavioral therapy (CBT) to a waitlist control for anxiety. Suppose the mean anxiety score in the CBT group is 10 (SD = 5), while the control group averages 15 (SD = 5).

Cohen’s d = (15 − 10) / 5 = 1.0, a large effect size. Even if the sample is only 40 people, that d = 1.0 tells us the difference is not just statistically detectable but also practically big. Clients are likely to feel this change in their daily lives.

Psychology journals now typically require effect sizes to be reported along with confidence intervals, because together they show the strength of an intervention rather than just the existence of an effect.

5. Correlation effect sizes in social research

Correlations are another common effect size. Consider a study on screen time and sleep quality in teenagers. A correlation of r = −0.15 between daily hours of screen time and sleep duration might be statistically significant with thousands of participants, but it’s a small effect.

Compare that with an r = 0.50 between parental education and children’s vocabulary scores in early childhood. That’s a large effect in social science terms. Both correlations are examples of effect size in inferential statistics, but they tell very different stories about the strength of the relationships.

The American Psychological Association (APA) has long encouraged reporting correlations and standardized effect sizes as part of good statistical practice (APA style and reporting standards).


Education and policy: examples of effect size that move budgets

Education research often uses standardized test scores, making it a natural home for effect size measures like Cohen’s d and Hedges’ g.

6. Class size reduction: policy-relevant effect sizes

Suppose a state runs a large experiment on reducing class sizes from 28 to 18 students in early grades. After two years, students in smaller classes score 0.25 standard deviations higher in reading and math.

That 0.25 effect size is modest but meaningful at scale. When multiplied across hundreds of thousands of students, it can justify substantial investment. Policymakers care about this standardized effect because it allows them to compare class size reduction with other interventions, like tutoring or curriculum changes.

Meta-analyses in education often line up these standardized differences side by side, offering some of the clearest examples of effect size in inferential statistics for comparing very different programs.

7. Tutoring programs: comparing interventions

Imagine two after-school tutoring programs evaluated in different districts:

  • Program A: effect size d = 0.15 on math scores
  • Program B: effect size d = 0.45 on math scores

Even if both are statistically significant, Program B produces triple the standardized gain. For a school district with limited funds, that effect size comparison is more informative than p-values alone.

Researchers and policy analysts use effect size estimates like these to build cost-effectiveness comparisons: dollars per 0.10 increase in standardized test scores, for instance.


Business and tech: examples of effect size in A/B testing

In tech and business analytics, effect size shows up in A/B testing, marketing experiments, and product optimization.

8. Conversion rate differences: absolute and relative effect sizes

Suppose an e‑commerce site runs an A/B test:

  • Control page conversion: 5.0%
  • Variant page conversion: 5.4%

With millions of visitors, that 0.4 percentage point increase is statistically significant. But is the effect size meaningful?

One way to express the effect size is as a relative lift: 0.4 / 5.0 = 8% increase in conversions. Another is to transform the two proportions with an arcsine transformation, giving Cohen’s h, a standardized effect size for differences between proportions. Either way, the magnitude of the effect tells product managers whether it’s worth rolling out the new design.

Here, the effect size is small but may still be profitable, depending on margins. This is a good business-focused example of effect size in inferential statistics: a tiny p-value does not automatically mean a change is worth implementing.
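Both versions of the effect size take only a few lines to compute. A sketch using the conversion rates above (Cohen’s h applies an arcsine transformation to each proportion before differencing):

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: effect size for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

control, variant = 0.050, 0.054                 # conversion rates from the A/B test above
relative_lift = (variant - control) / control   # 8% relative lift
h = cohens_h(variant, control)                  # about 0.018: a tiny standardized effect

print(f"Relative lift: {relative_lift:.0%}")
print(f"Cohen's h = {h:.3f}")
```

Whether an h this small is worth shipping is a business question about margins and traffic, not a statistical one.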

9. Customer retention and odds ratios

A subscription service tests a new onboarding flow to reduce early churn. In the control group, 20% of users cancel in the first month; in the treatment group, only 15% cancel.

The odds of churn are 0.25 in the control group (20/80) and about 0.176 in the treatment group (15/85), giving an odds ratio of about 0.71. That odds ratio is the effect size: roughly a 30% reduction in the odds of early churn.

Even if the confidence interval is wide, that effect size is more informative for decision makers than a bare statement that “the difference is statistically significant at the 5% level.” It’s another example of effect size in inferential statistics where the magnitude of the odds ratio drives business strategy.
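The odds-ratio arithmetic from the churn example can be sketched in a few lines of Python:

```python
def odds(p):
    """Convert a probability to odds in favor of the event."""
    return p / (1 - p)

churn_control, churn_treatment = 0.20, 0.15   # first-month churn rates from above
odds_ratio = odds(churn_treatment) / odds(churn_control)

print(f"Odds ratio = {odds_ratio:.2f}")  # about 0.71
```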


How to read effect size examples in inferential statistics

Once you start seeing effect sizes everywhere, the next step is learning how to read them. A few practical habits help:

Look at both significance and size

A p-value tells you how surprising the data are under the null hypothesis, given the sample size. Effect size tells you how big the difference or association is in standardized or interpretable units.

You want both. A small p-value with a tiny effect size (like Cohen’s d = 0.10) is common in huge datasets, but the practical impact might be negligible. Conversely, a moderate effect size with a wide confidence interval in a small study might be promising but uncertain.

Compare across studies using standardized metrics

Standardized effect sizes (Cohen’s d, Hedges’ g, r, odds ratios) allow comparisons across:

  • Different measures (test scores vs. clinical scales)
  • Different populations (children vs. adults)
  • Different study designs (clinical trials vs. observational studies)

Meta-analyses exploit this by pooling standardized effects. When you read one, you’re looking at aggregated examples of effect size in inferential statistics across many individual studies.

Consider context, not just benchmarks

Textbooks often give rough benchmarks like “d = 0.2 small, 0.5 medium, 0.8 large.” These are helpful starting points but not universal laws. In public health, a small effect size on a risk factor can matter enormously when applied to millions of people. In a clinical setting, a moderate effect on survival time may be life-changing.

Always ask: in this field, for this outcome, is this effect size big enough to matter in real life?


FAQ: common questions about effect size examples

Q: Why do we need effect sizes if we already have p-values?
P-values answer “Is there evidence against the null hypothesis?” Effect sizes answer “How big is the effect?” You can have a tiny effect that is statistically significant in a large sample, or a practically meaningful effect that is not significant in a small study. Real examples of effect size in inferential statistics show that decisions about treatments, policies, or product changes should be based on both significance and effect size.

Q: What is an example of a large effect size in psychology?
A therapy study where anxiety scores drop by one full standard deviation (Cohen’s d ≈ 1.0) compared with a control group is an example of a large effect size. Participants in the treatment group would, on average, be better off than about 84% of people in the control group.
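That 84% figure is simply the standard normal CDF evaluated at d, under the assumption that both groups’ scores are roughly normal with equal spread. A one-line check in Python:

```python
from statistics import NormalDist

d = 1.0  # Cohen's d from the therapy example
# Proportion of the control distribution that the average treated person exceeds.
pct = NormalDist().cdf(d)
print(f"{pct:.0%}")  # 84%
```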

Q: Are odds ratios and risk ratios effect sizes?
Yes. In studies with binary outcomes (event vs. no event), odds ratios, risk ratios, and risk differences are all effect size measures. They quantify how much more or less likely an outcome is in one group compared with another.

Q: What are good examples of effect size in inferential statistics for beginners?
Beginner-friendly examples include comparing average test scores between two classes (Cohen’s d), reporting a correlation between hours studied and exam grades (r), or showing the relative risk reduction from wearing seat belts in car crashes. These examples connect the math to everyday decisions.

Q: Where can I learn more about using effect sizes properly?
Authoritative sources include statistics textbooks, APA reporting standards, and guidance from major research organizations. For health and clinical examples, the NIH and CDC provide accessible explanations and applied data; for social science and psychology, APA and university statistics courses often offer detailed reporting guidelines.


Effect sizes are the part of inferential statistics that answers the question everyone actually cares about: “So what?” Once you start looking for them, you’ll see examples of effect size in inferential statistics in journal articles, policy reports, business dashboards, and even news stories about medical breakthroughs. The trick is to read past the p-value and ask how big the effect really is, and whether that size justifies action.
