Best examples of z-test for proportions explained with examples
Starting with real examples of z-test for proportions
Before we talk formulas, let’s start where people actually use this test. Some of the best examples of z-test for proportions explained with examples come from everyday analytics problems:
- A hospital comparing the proportion of patients with side effects under two medications.
- A marketing team testing whether a new email subject line increases click-through rate.
- A public health team checking if a vaccination campaign raised the share of vaccinated adults.
- A product manager deciding whether Version B of a landing page converts more visitors than Version A.
All of these are proportion questions: success vs failure, yes vs no, converted vs not converted. When sample sizes are large enough, the z-test for proportions is the standard workhorse.
Single proportion z-test: examples explained step by step
Let’s start with the one-sample z-test for a proportion. You’re comparing a sample proportion to a claimed or historical proportion.
Example 1: Vaccine uptake vs public health target
Suppose a county health department has a goal: at least 70% of adults should receive the seasonal flu vaccine. According to CDC data, recent U.S. adult flu vaccination coverage has hovered below that level in many groups.
You survey 400 adults in your county and find 252 report getting the flu shot.
- Claimed/target proportion (null hypothesis): \( p_0 = 0.70 \)
- Sample size: \( n = 400 \)
- Sample proportion: \( \hat p = 252/400 = 0.63 \)
You want to test whether the actual vaccination rate is below the 70% target.
Step 1 – Hypotheses
- \(H_0: p = 0.70\) (the county meets the target)
- \(H_a: p < 0.70\) (the county is below target)
Step 2 – Check conditions
Use the normal approximation rule of thumb:
- \(n p_0 = 400 \times 0.70 = 280 > 10\)
- \(n (1 - p_0) = 400 \times 0.30 = 120 > 10\)
So a z-test for proportions is appropriate.
Step 3 – Compute the z-statistic
The standard error under \(H_0\) is:
\[
SE = \sqrt{\frac{p_0(1-p_0)}{n}} = \sqrt{\frac{0.70 \times 0.30}{400}} \approx 0.0229
\]
The z-statistic is:
\[
z = \frac{\hat p - p_0}{SE} = \frac{0.63 - 0.70}{0.0229} \approx -3.06
\]
Step 4 – Interpret
A z of about −3.06 gives a very small one-sided p-value (around 0.0011). At a 5% significance level, you would reject \(H_0\) and conclude the county’s vaccination rate is significantly below the 70% target.
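The whole calculation fits in a few lines of Python. This is a stdlib-only sketch (the function name `one_sample_prop_ztest` is mine, not from any library); it uses `math.erfc` for the normal tail probability rather than pulling in SciPy.

```python
from math import sqrt, erfc

def one_sample_prop_ztest(successes, n, p0, tail="lower"):
    """One-sample z-test for a proportion against a null value p0."""
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)       # SE uses p0, per the null hypothesis
    z = (p_hat - p0) / se
    upper = 0.5 * erfc(z / sqrt(2))    # P(Z > z) for a standard normal
    lower = 1 - upper                  # P(Z < z)
    if tail == "lower":
        p_value = lower
    elif tail == "upper":
        p_value = upper
    else:                              # two-sided
        p_value = 2 * min(lower, upper)
    return z, p_value

# Example 1: 252 vaccinated out of 400, target p0 = 0.70, Ha: p < 0.70
z, p = one_sample_prop_ztest(252, 400, 0.70, tail="lower")
print(round(z, 2), round(p, 4))   # → -3.06 0.0011
```

The same function handles any of the three alternatives, which is handy when you work through the later examples.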
This is one of the cleanest examples of z-test for proportions explained with examples: a public health target vs observed data, with a clear policy implication.
Example 2: Defect rate in a factory vs contract limit
A manufacturer promises that no more than 2% of shipped units are defective. A client inspects 2,500 units and finds 70 defective.
- \(p_0 = 0.02\)
- \(n = 2{,}500\)
- \(\hat p = 70/2{,}500 = 0.028\)
The client wants to know: Is the defect rate higher than 2%?
Hypotheses:
- \(H_0: p = 0.02\)
- \(H_a: p > 0.02\)
Standard error:
\[
SE = \sqrt{\frac{0.02 \times 0.98}{2{,}500}} \approx 0.0028
\]
z-statistic:
\[
z = \frac{0.028 - 0.02}{0.0028} \approx 2.86
\]
The p-value (one-sided) is about 0.002. The client has strong evidence the true defect rate exceeds 2%, and can push back on the supplier.
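The same arithmetic, verified in a short stdlib-only Python snippet (upper-tailed this time, since the alternative is \(p > 0.02\)):

```python
from math import sqrt, erfc

# Defect-rate check: H0: p = 0.02 vs Ha: p > 0.02 (upper-tailed)
n, defects, p0 = 2500, 70, 0.02
p_hat = defects / n                    # 0.028
se = sqrt(p0 * (1 - p0) / n)           # SE under H0, about 0.0028
z = (p_hat - p0) / se
p_value = 0.5 * erfc(z / sqrt(2))      # one-sided P(Z > z)
print(round(z, 2), round(p_value, 4))  # → 2.86 0.0021
```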
Again, this fits nicely among the best examples of z-test for proportions explained with examples: contractual limits, big sample, yes/no outcome.
Two-proportion z-test: real examples that mirror A/B testing
Most people meet this test in the wild through A/B experiments. Here, we compare two independent proportions.
Example 3: Email A/B test – did the new subject line win?
A marketing team wants to see if a new subject line improves click-through rate (CTR). They run an A/B test:
- Version A: 10,000 emails, 520 clicks → \(\hat p_A = 0.052\)
- Version B: 9,800 emails, 620 clicks → \(\hat p_B \approx 0.0633\)
They want to test if B has a higher CTR than A.
Step 1 – Hypotheses
- \(H_0: p_A = p_B\)
- \(H_a: p_B > p_A\)
Step 2 – Pooled proportion
Under \(H_0\), we pool successes and trials:
\[
\hat p_{pool} = \frac{520 + 620}{10{,}000 + 9{,}800} = \frac{1{,}140}{19{,}800} \approx 0.0576
\]
Step 3 – Standard error for difference in proportions
\[
SE = \sqrt{\hat p_{pool}(1-\hat p_{pool})\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}
\]
\[
SE \approx \sqrt{0.0576 \times 0.9424 \times (1/10{,}000 + 1/9{,}800)} \approx 0.0033
\]
Step 4 – z-statistic
\[
z = \frac{\hat p_B - \hat p_A}{SE} = \frac{0.0633 - 0.052}{0.0033} \approx 3.4
\]
The one-sided p-value is under 0.001. There’s strong evidence that Version B’s CTR is higher. In the world of growth and experimentation, this is one of the most realistic examples of z-test for proportions explained with examples.
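If you want to check the pooled calculation yourself, here is a stdlib-only Python sketch (the helper name `two_prop_ztest` is mine):

```python
from math import sqrt, erfc

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z for (p2 - p1) and
    the one-sided upper-tail p-value P(Z > z)."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)       # pooled proportion under H0
    se = sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 0.5 * erfc(z / sqrt(2))

# Version A: 520/10,000 clicks; Version B: 620/9,800 clicks
z, p = two_prop_ztest(520, 10_000, 620, 9_800)
print(round(z, 2), round(p, 4))   # → 3.4 0.0003
```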
Example 4: Website conversion rate after a redesign
A product team redesigns a checkout page and wants to know whether the purchase rate changed.
- Old page: 5,000 visitors, 575 purchases → \(\hat p_{old} = 0.115\)
- New page: 4,800 visitors, 552 purchases → \(\hat p_{new} = 0.115\)
The proportions are identical to three decimal places. But does that mean there’s no difference?
Hypotheses (two-sided this time):
- \(H_0: p_{old} = p_{new}\)
- \(H_a: p_{old} \ne p_{new}\)
Pooled proportion:
\[
\hat p_{pool} = \frac{575 + 552}{5{,}000 + 4{,}800} = \frac{1{,}127}{9{,}800} = 0.115
\]
In fact, the two sample proportions are identical here, so the difference is exactly zero, z = 0, and the two-sided p-value is 1. You’d fail to reject \(H_0\): no evidence of a change in conversion rate. This is a nice reminder that even when two proportions look the same, the z-test for two proportions quantifies whether any small difference is statistically meaningful.
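Running the numbers confirms it; a stdlib-only sketch of the pooled test for this example:

```python
from math import sqrt, erfc

# Redesign check: both sample proportions equal 0.115 exactly
x_old, n_old, x_new, n_new = 575, 5_000, 552, 4_800
pool = (x_old + x_new) / (n_old + n_new)
se = sqrt(pool * (1 - pool) * (1 / n_old + 1 / n_new))
z = (x_new / n_new - x_old / n_old) / se
p_two_sided = erfc(abs(z) / sqrt(2))   # 2 * P(Z > |z|)
print(z, p_two_sided)   # → 0.0 1.0
```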
Health and policy: more real examples of z-test for proportions
Health and public policy generate some of the best examples of z-test for proportions explained with examples, because decisions are literally life-or-death.
Example 5: Side effect rates for two medications
Imagine a clinical trial comparing two blood pressure medications. According to NIH-funded studies, side effect profiles are a major factor in treatment choice.
Suppose the trial reports:
- Drug A: 900 patients, 81 report a specific side effect → \(\hat p_A = 0.09\)
- Drug B: 850 patients, 119 report the side effect → \(\hat p_B \approx 0.14\)
Question: Is the side effect rate higher for Drug B?
Hypotheses:
- \(H_0: p_A = p_B\)
- \(H_a: p_B > p_A\)
Pooled proportion:
\[
\hat p_{pool} = \frac{81 + 119}{900 + 850} = \frac{200}{1{,}750} \approx 0.1143
\]
Standard error:
\[
SE = \sqrt{0.1143 \times 0.8857 \times (1/900 + 1/850)} \approx 0.0152
\]
Difference in sample proportions:
\[
\hat p_B - \hat p_A = 0.14 - 0.09 = 0.05
\]
z-statistic:
\[
z = \frac{0.05}{0.0152} \approx 3.29
\]
p-value is under 0.001 (one-sided). There’s strong evidence that Drug B has a higher side effect rate. Clinicians and regulators can use this kind of two-proportion z-test alongside effect sizes and clinical judgment.
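A hypothesis test alone doesn’t say how big the difference might plausibly be. A common companion is a 95% confidence interval for the risk difference, which uses the unpooled standard error (because you’re estimating the difference, not assuming it is zero). A stdlib-only Python sketch:

```python
from math import sqrt

# 95% CI for the risk difference p_B - p_A. The CI uses the unpooled SE,
# unlike the test statistic, because we no longer assume H0: p_A = p_B
# when estimating the size of the difference.
xA, nA, xB, nB = 81, 900, 119, 850
pA, pB = xA / nA, xB / nB
se = sqrt(pA * (1 - pA) / nA + pB * (1 - pB) / nB)
lo = (pB - pA) - 1.96 * se
hi = (pB - pA) + 1.96 * se
print(round(lo, 3), round(hi, 3))   # → 0.02 0.08
```

The interval (roughly 2 to 8 percentage points) excludes zero, which agrees with the significant test result and also conveys the plausible size of the excess risk.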
Example 6: Vaccination campaign before vs after
A public health department runs a campaign to increase HPV vaccination among teens. Using CDC vaccination coverage data as a benchmark, they track local change.
- Before campaign: 600 teens surveyed, 312 vaccinated → \(\hat p_{before} = 0.52\)
- After campaign: 650 teens surveyed, 377 vaccinated → \(\hat p_{after} \approx 0.58\)
They want to know if the campaign increased vaccination coverage.
Hypotheses:
- \(H_0: p_{before} = p_{after}\)
- \(H_a: p_{after} > p_{before}\)
Pooled proportion:
\[
\hat p_{pool} = \frac{312 + 377}{600 + 650} = \frac{689}{1{,}250} \approx 0.5512
\]
Standard error:
\[
SE = \sqrt{0.5512 \times 0.4488 \times (1/600 + 1/650)} \approx 0.0282
\]
Difference in sample proportions:
\[
\hat p_{after} - \hat p_{before} = 0.58 - 0.52 = 0.06
\]
z-statistic:
\[
z = \frac{0.06}{0.0282} \approx 2.13
\]
One-sided p-value is about 0.016. There’s statistically significant evidence that vaccination coverage increased after the campaign.
Again, this fits right into a set of real examples of z-test for proportions explained with examples that matter for policy and funding decisions.
Business and tech: fraud detection, churn, and click rates
Moving into 2024–2025, data-heavy industries keep leaning on z-tests for proportions for fast decisions. Here are more examples of z-test for proportions explained with examples that mirror what analysts actually do.
Example 7: Fraud detection rate comparison
A fintech company rolls out a new fraud detection model. They compare the proportion of fraudulent transactions correctly flagged before and after the change.
- Old model: 20,000 known frauds, 15,000 flagged → \(\hat p_{old} = 0.75\)
- New model: 18,000 known frauds, 14,760 flagged → \(\hat p_{new} = 0.82\)
Is the new model’s detection rate higher?
With these large samples, a two-proportion z-test will almost certainly show a highly significant increase. But the test quantifies that difference and backs up the case for rolling out the model globally.
Example 8: Customer churn after a pricing change
A subscription service changes pricing in early 2025 and wants to know whether the monthly churn rate increased.
- Before: 50,000 subscribers, 2,000 cancel in a month → \(\hat p_{before} = 0.04\)
- After: 48,000 subscribers, 2,640 cancel → \(\hat p_{after} = 0.055\)
Here again, a two-proportion z-test is appropriate. If the p-value is tiny (it will be), the company has statistical evidence that the new pricing is associated with higher churn and might need to rethink the strategy.
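Plugging the churn numbers into a pooled two-proportion z-test (a stdlib-only Python sketch) shows just how tiny that p-value is:

```python
from math import sqrt, erfc

# Churn before vs after the pricing change (pooled two-proportion z-test)
x1, n1 = 2_000, 50_000    # cancellations before
x2, n2 = 2_640, 48_000    # cancellations after
pool = (x1 + x2) / (n1 + n2)
se = sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
z = (x2 / n2 - x1 / n1) / se
p = 0.5 * erfc(z / sqrt(2))    # one-sided: did churn increase?
print(round(z, 1))   # → 11.1
```

A z-statistic above 11 corresponds to a p-value far below any conventional threshold, so the evidence of increased churn is overwhelming.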
These business cases are some of the best examples of z-test for proportions explained with examples because they directly support revenue-impacting decisions.
When is a z-test for proportions appropriate (and when is it not)?
All of these examples of z-test for proportions share a few features:
- Binary outcome: success/failure, yes/no, vaccinated/not, churned/stayed.
- Independent observations: one person’s outcome doesn’t affect another’s.
- Large enough samples: so that the sampling distribution of the proportion is approximately normal.
The usual rule of thumb for a one-sample test:
- \(n p_0 \ge 10\) and \(n(1 - p_0) \ge 10\).
For a two-proportion test with pooled \(\hat p_{pool}\):
- \(n_1 \hat p_{pool} \ge 10\), \(n_1(1-\hat p_{pool}) \ge 10\), \(n_2 \hat p_{pool} \ge 10\), \(n_2(1-\hat p_{pool}) \ge 10\).
If these conditions fail—say you have only 20 observations, or the expected number of successes is tiny—you should consider an exact test like Fisher’s exact test or an exact binomial test instead of a z-test.
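The rule of thumb is easy to encode as a quick pre-check before running the test; here is a minimal helper (the name `large_sample_ok` is mine):

```python
def large_sample_ok(n, p, threshold=10):
    """Rule-of-thumb check for the normal approximation: expected
    successes and expected failures should both reach the threshold."""
    return n * p >= threshold and n * (1 - p) >= threshold

print(large_sample_ok(400, 0.70))   # → True  (Example 1 passes)
print(large_sample_ok(20, 0.05))    # → False (only 1 expected success)
```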
For medical and clinical data, organizations like Mayo Clinic and Harvard frequently discuss studies where proportions and risk differences are central. Behind the scenes, a lot of those analyses start with the same logic you see in these examples of z-test for proportions.
Common mistakes people make with z-tests for proportions
Even with the best examples of z-test for proportions explained with examples, people still fall into the same traps:
- Using a z-test with tiny samples: when expected counts are under 10, the normal approximation gets shaky.
- Forgetting to pool proportions under \(H_0\) in two-sample tests: this changes the standard error and can change your conclusion.
- Confusing statistical significance with practical impact: a 0.5% increase in click-through might be statistically significant with millions of emails, but meaningless for revenue.
- Ignoring study design: non-randomized before/after comparisons (like some campaign evaluations) can be confounded by other changes over time.
If you’re using these methods in medical or public health contexts, it’s worth checking guidelines or methods sections from sources like CDC or NIH to see how similar data are analyzed.
FAQ: quick answers about z-test for proportions
What is a simple example of a z-test for a proportion?
A simple example of a z-test for a proportion is checking whether 63% vaccination coverage in a sample of 400 adults is statistically lower than a 70% public health target. You compare the sample proportion (0.63) to the target (0.70) using a one-sample z-test.
What are common real examples of z-test for proportions in business?
Common real examples of z-test for proportions in business include A/B tests on email click-through rates, comparing conversion rates between two versions of a landing page, measuring churn before and after a pricing change, and comparing fraud detection rates for two models.
How do I know if I should use a z-test or a chi-square test for proportions?
For comparing two proportions, a two-proportion z-test and a chi-square test with 1 degree of freedom are mathematically equivalent. If you have more than two categories or groups, you typically move to a chi-square test of independence. For a single proportion vs a known value, use the one-sample z-test for a proportion (with large samples) or an exact binomial test (with small samples).
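You can verify the equivalence numerically. The stdlib-only sketch below uses the email A/B test counts from Example 3 as a 2×2 table and shows that \(z^2\) from the pooled test matches the Pearson chi-square statistic:

```python
from math import sqrt

# Email A/B test counts (Example 3), as a 2x2 table
x1, n1, x2, n2 = 520, 10_000, 620, 9_800

# Pooled two-proportion z statistic
pool = (x1 + x2) / (n1 + n2)
se = sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
z = (x2 / n2 - x1 / n1) / se

# Pearson chi-square statistic (no continuity correction)
obs = [[x1, n1 - x1], [x2, n2 - x2]]
row = [n1, n2]
col = [x1 + x2, (n1 - x1) + (n2 - x2)]
total = n1 + n2
chi2 = sum(
    (obs[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2)
    for j in range(2)
)
print(round(z ** 2, 4), round(chi2, 4))  # identical up to rounding
```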
Can I use a z-test for proportions with small sample sizes?
You can, but you probably shouldn’t. If \(n p_0\) or \(n(1-p_0)\) is under 10, the normal approximation may be poor. In that situation, use an exact binomial test for a single proportion or Fisher’s exact test for a 2×2 table instead of a z-test.
Where can I see more real examples of z-test for proportions explained with examples?
You’ll see more real examples of z-test for proportions explained with examples in introductory biostatistics or epidemiology materials from universities, as well as in methods sections of public health reports from the CDC or NIH. Any context that tracks yes/no outcomes at scale—vaccination, screening, conversion, churn—will generate natural use cases.
Related Topics
Why ANOVA Hypothesis Tests Show Up Everywhere Once You Notice Them
Real-world examples of hypothesis test for variance in statistics
Best real-world examples of one-sample hypothesis test examples
Two-Sample Hypothesis Tests Without the Jargon Overload
Real‑world examples of chi-square test for independence examples