Examples of Stationarity in Time Series: 3 Practical Examples You’ll Actually Use
Before any definitions, let’s start with concrete data. The best examples of stationarity in time series are the ones you’ve probably seen but never labeled.
Think about these situations:
- The daily log-returns of a stock index over a year: wildly noisy, but the volatility looks roughly stable.
- The number of support tickets per hour at a mature SaaS product on a typical weekday: noisy around a stable average.
- The minute-by-minute temperature inside a climate-controlled lab: small fluctuations around a target value.
All three look messy, but their behavior doesn’t systematically drift up or down over time. Their average, variance, and autocorrelation pattern are reasonably stable. These are classic examples of stationarity in time series, even if the raw levels (like stock prices or cumulative tickets) are not.
That’s the key mental shift: a time series can be non-stationary in its level, but stationary after a simple transformation like differencing or taking log-returns.
Why analysts obsess over stationarity
Stationarity isn’t academic nitpicking. Most standard time series tools quietly assume that your data is at least weakly stationary:
- AR, MA, and ARMA models
- ARIMA (after differencing)
- Many forecasting workflows in R, Python, and commercial tools
If your series isn’t stationary, parameter estimates can drift, confidence intervals get misleading, and forecasts become unreliable. This is why every serious workflow includes a check for stationarity using visual inspection, autocorrelation plots, and formal tests like the Augmented Dickey–Fuller (ADF) or KPSS tests.
For a solid theoretical overview of stationarity and unit roots, the Federal Reserve Bank of St. Louis provides accessible macroeconomic time series material: https://research.stlouisfed.org.
3 practical examples of stationarity in time series
Let’s walk through three core case studies. Each one illustrates how stationarity shows up in real work, and how you can turn a non-stationary series into a stationary one.
1. Financial markets: stock prices vs. returns
If you chart the daily closing price of the S&P 500 over the last decade, you’ll see a clear upward drift plus big shocks (pandemic, inflation spikes, rate hikes). That price series is not stationary: its mean changes over time, and its volatility clusters.
But now transform prices into log-returns:
\[ r_t = \log(P_t) - \log(P_{t-1}) \]
Those log-returns bounce around zero and are often approximately stationary over medium horizons:
- The mean of log-returns is roughly stable (slightly positive over long periods).
- The variance is more stable than the raw price series, although volatility clustering still appears.
- The autocorrelation of returns is usually close to zero at most lags for liquid markets.
In practice, analysts treat daily log-returns as an example of stationarity in time series and then model volatility (which may be non-stationary) separately using GARCH-type models.
This distinction matters for risk management and forecasting. For example:
- Value-at-Risk (VaR) models often assume stationary returns.
- Many event studies in finance depend on the idea that “normal” returns follow a stable process.
If you pull daily S&P 500 data from 2014–2024 and run an ADF test in Python, you’ll usually reject the null of a unit root for log-returns, but not for prices. That’s exactly the kind of real example you want to keep in mind when you think about stationarity.
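Here’s a minimal, self-contained sketch of that workflow. It uses a simulated random walk with drift in place of real S&P 500 data (so it runs without a market-data download), but the `adfuller` call from statsmodels is the same one you’d run on actual prices:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulated log-prices: a random walk with drift, standing in for a real index.
rng = np.random.default_rng(42)
log_prices = np.cumsum(0.0003 + 0.01 * rng.standard_normal(2500))

# r_t = log(P_t) - log(P_{t-1})
log_returns = np.diff(log_prices)

for name, series in [("log-prices", log_prices), ("log-returns", log_returns)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.4f}")

# Typical outcome: a large p-value for log-prices (cannot reject a unit root)
# and a tiny p-value for log-returns (unit root rejected, consistent with stationarity).
```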
2. Web analytics: stable user behavior after a product matures
Consider a mid-size e‑commerce site in 2024. During its hyper-growth phase (say 2020–2022), daily active users (DAU) might show a strong upward trend plus weekly seasonality (weekends vs weekdays). That raw DAU series is clearly non-stationary.
Now imagine the product matures in 2023–2024. User growth slows, marketing stabilizes, and the site reaches a relatively steady audience size. If you look at hourly page views for a single weekday (for example, all Mondays in 2024), you might see:
- A consistent intraday shape (morning peak, lunchtime dip, evening peak).
- Similar overall level and variance from week to week.
If you subtract the average intraday pattern (a simple seasonal adjustment), the residuals often behave like a stationary time series:
- Mean close to zero.
- Stable variance across the year.
- Autocorrelation that decays reasonably fast.
This is one of the most practical examples of stationarity in time series for data scientists working on A/B tests or anomaly detection:
- You model the stationary residuals with ARIMA or similar.
- You flag anomalies when residuals exceed expected bounds.
In other words, the raw data is non-stationary, but the “behavior after accounting for known patterns” is a strong example of stationarity.
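Here’s a rough sketch of that seasonal-adjustment step on hypothetical data. The intraday shape, noise level, and 52-Monday setup are all invented for illustration; the pattern of “subtract the average hourly level, keep the residuals” is the point:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly page views for 52 Mondays: a fixed intraday shape
# (morning and evening peaks) plus stationary noise around it.
rng = np.random.default_rng(0)
hours = np.tile(np.arange(24), 52)                     # 52 Mondays x 24 hours
shape = 1000 + 400 * np.sin((hours - 8) * np.pi / 12)  # stylized daily curve
views = pd.Series(shape + rng.normal(0, 50, hours.size))

# Seasonal adjustment: subtract the average level for each hour of the day.
hourly_mean = views.groupby(hours).transform("mean")
residuals = views - hourly_mean

print(f"residual mean: {residuals.mean():.1f}")  # close to zero
print(f"residual std:  {residuals.std():.1f}")   # roughly the noise scale
# These residuals are the (approximately) stationary series you would
# model with ARIMA or monitor for anomalies.
```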
3. Environmental and sensor data: stability under control
Sensors are another great source of real examples of stationarity in time series. Think about a hospital ICU or a research lab:
- In a hospital, patient vital signs like heart rate or blood pressure fluctuate but can hover around a steady level over short windows when the patient’s condition is stable.
- In an environmental lab, a climate-controlled chamber might keep temperature at 72°F with small random fluctuations.
The raw ambient outdoor temperature is non-stationary: strong daily cycles, seasonal patterns, and long-term climate trends. But the difference between indoor and outdoor temperature in a well-designed building can be closer to stationary over short periods—especially when HVAC systems are tuned to hold a constant offset.
From 2024 building automation datasets, you’ll often see:
- Room temperature readings varying within a narrow band (e.g., 71–73°F).
- No long-term drift as long as the system is working correctly.
Engineers treat those readings as approximately stationary and monitor them for sudden shifts that could indicate equipment failure. The stationary assumption lets them set control limits and detect anomalies using classical statistical process control.
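A small sketch of that idea, with hypothetical chamber readings and classical three-sigma control limits (the specific setpoint and noise level are invented):

```python
import numpy as np

# Hypothetical room-temperature readings (°F) from a controlled chamber:
# approximately stationary around a 72°F setpoint.
rng = np.random.default_rng(7)
temps = 72 + 0.4 * rng.standard_normal(10_000)

# Classical control limits from a stationary baseline: mean +/- 3 sigma.
mu, sigma = temps.mean(), temps.std()
lower, upper = mu - 3 * sigma, mu + 3 * sigma
print(f"control limits: [{lower:.2f}, {upper:.2f}]")

# Flag readings outside the limits as possible equipment faults.
alarms = np.flatnonzero((temps < lower) | (temps > upper))
print(f"{alarms.size} out-of-limit readings")
```

The stationarity assumption is doing real work here: the limits only make sense because the mean and variance are stable over time.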
For foundational context on environmental and climate time series, the National Oceanic and Atmospheric Administration (NOAA) offers extensive data and documentation: https://www.noaa.gov.
Beyond 3: more real examples of stationarity in time series
The title promises “examples of stationarity in time series: 3 practical examples,” but in real work you’ll see many more. Here are additional contexts where stationarity either holds directly or appears after a simple transformation.
Stationary behavior in call centers and operations
Call centers, help desks, and logistics operations often provide some of the best examples of stationary time series once you condition on known patterns.
Consider a customer support center:
- Calls per 5-minute interval during a weekday, after removing predictable daily and weekly patterns, often fluctuate around a stable mean.
- The distribution of inter-arrival times between calls can be modeled as a stationary process over months, especially in mature operations.
Operations teams use this stationary structure to:
- Forecast staffing needs.
- Detect outages (a sudden drop to near-zero calls) or major incidents (a spike far above normal variance).
These are real examples of stationarity that directly connect to staffing decisions and service-level agreements.
Manufacturing quality metrics
In modern manufacturing, many quality metrics are explicitly engineered to be stationary when the process is in control:
- Thickness of a material measured every minute.
- Number of defects per batch.
- Vibration intensity of a machine at fixed intervals.
Under stable operating conditions, these metrics should look like stationary time series: stable mean, stable variance, predictable autocorrelation. When they stop being stationary, that’s often the first sign of a problem.
This idea underpins Statistical Process Control (SPC) and control charts, which are widely taught in engineering and operations programs. For deeper reading on SPC and stationary processes, see resources from the National Institute of Standards and Technology (NIST): https://www.nist.gov.
Health and epidemiology: rates vs. counts
Public health data provides another useful example of stationarity vs. non-stationarity.
Raw weekly disease case counts (for example, flu cases) are heavily seasonal and often trending, especially during unusual years like 2020–2021. That series is non-stationary.
But if you:
- Adjust for population size (turn counts into rates), and
- Remove seasonal patterns (for example, using seasonal differencing or regression with seasonal indicators),
the residual series can behave like a stationary time series. Epidemiologists then model those residuals to detect unexpected outbreaks or shifts.
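A compact sketch of those two steps on invented weekly data (the population growth rate and seasonal rate curve are arbitrary):

```python
import numpy as np
import pandas as pd

# Hypothetical weekly case counts over 10 years: a seasonal infection rate
# plus a slowly growing population pushing raw counts upward.
rng = np.random.default_rng(1)
weeks = np.arange(520)
population = 1_000_000 * (1 + 0.01 * weeks / 52)          # ~1% annual growth
rate_per_100k = 20 + 15 * np.sin(2 * np.pi * weeks / 52)  # seasonal rate
counts = rng.poisson(rate_per_100k * population / 100_000)

# Step 1: counts -> rates, removing the population trend.
rates = pd.Series(counts / population * 100_000)

# Step 2: seasonal differencing with period 52 (weekly data, annual cycle).
residuals = rates.diff(52).dropna()
# 'residuals' is now a plausible candidate for a stationary series to
# monitor for unexpected outbreaks.
```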
For authoritative disease surveillance examples, the Centers for Disease Control and Prevention (CDC) maintains detailed time series data: https://www.cdc.gov.
How to check if your series is (approximately) stationary
So far we’ve focused on examples, but when you’re actually working with data, you need a quick checklist.
In practice, analysts use a combination of:
1. Visual inspection
Plot the series and ask:
- Does the mean drift over time?
- Does the variance increase or decrease?
- Do you see clear trends or seasonality?
If yes, your series probably isn’t stationary in its raw form.
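A quick way to make that inspection concrete is to plot rolling statistics; here’s a sketch using a random walk as a stand-in for your series:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Stand-in series: a random walk (replace 'y' with your own data).
rng = np.random.default_rng(3)
y = pd.Series(np.cumsum(rng.standard_normal(1000)))

window = 100
pd.DataFrame({
    "series": y,
    "rolling mean": y.rolling(window).mean(),
    "rolling std": y.rolling(window).std(),
}).plot(subplots=True, figsize=(8, 6))
plt.show()
# A drifting rolling mean or a changing rolling std is a red flag
# for non-stationarity.
```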
2. Autocorrelation and partial autocorrelation
Look at the autocorrelation function (ACF) and partial autocorrelation function (PACF). For a stationary series, the ACF typically decays relatively quickly. Persistent, slowly decaying autocorrelation often signals non-stationarity.
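You can see that contrast directly with statsmodels’ `plot_acf`, comparing white noise against a random walk:

```python
import matplotlib.pyplot as plt
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(5)
white_noise = rng.standard_normal(500)  # stationary
random_walk = np.cumsum(white_noise)    # non-stationary

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(white_noise, ax=axes[0], title="ACF: white noise (dies out immediately)")
plot_acf(random_walk, ax=axes[1], title="ACF: random walk (decays very slowly)")
plt.tight_layout()
plt.show()
```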
3. Formal tests
Common tests include:
- Augmented Dickey–Fuller (ADF): the null hypothesis is a unit root (non-stationary). A small p-value lets you reject the unit root in favor of stationarity.
- KPSS test: the null hypothesis is stationarity. A small p-value suggests non-stationarity.
Using both gives you a more nuanced picture. For example, if ADF fails to reject non-stationarity and KPSS rejects stationarity, your series is very likely non-stationary.
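A small helper that runs both tests and applies that logic (the decision rule is the common heuristic described above, not an official statsmodels recipe):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_tests(series, alpha=0.05):
    """Run ADF and KPSS and summarize the combined verdict."""
    adf_p = adfuller(series)[1]                             # H0: unit root
    kpss_p = kpss(series, regression="c", nlags="auto")[1]  # H0: stationarity
    print(f"ADF p = {adf_p:.4f}, KPSS p = {kpss_p:.4f}")
    if adf_p < alpha and kpss_p >= alpha:
        print("Both tests point to stationarity.")
    elif adf_p >= alpha and kpss_p < alpha:
        print("Both tests point to non-stationarity.")
    else:
        print("Tests disagree: inspect trend, seasonality, and transformations.")

# Example: white noise should come out clearly stationary.
stationarity_tests(np.random.default_rng(9).standard_normal(400))
```

(KPSS p-values are read from a lookup table, so statsmodels may warn when the true p-value falls outside it; that’s expected.)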
Making non-stationary series more stationary
Many of the best examples of stationarity in time series are not raw data; they’re transformed data. Common transformations include (each sketched in code after the list):
- Differencing: Use first differences (\( y_t - y_{t-1} \)) to remove trends. This is the “I” in ARIMA.
- Seasonal differencing: Use \( y_t - y_{t-s} \) to remove seasonality with period \( s \) (for example, 7 for daily data with weekly seasonality).
- Log or Box–Cox transforms: Stabilize variance.
- Detrending via regression: Fit a trend and subtract it; work with residuals.
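Here’s a compact sketch of those transformations applied to one invented trending, variance-growing series; `boxcox` is from SciPy and requires strictly positive data:

```python
import numpy as np
import pandas as pd
from scipy.stats import boxcox

# Invented series with an exponential trend and growing variance.
rng = np.random.default_rng(11)
t = np.arange(300)
y = pd.Series(np.exp(0.01 * t + 0.1 * rng.standard_normal(300)))

diffed = y.diff().dropna()  # differencing: removes a (local) trend
log_y = np.log(y)           # log transform: stabilizes multiplicative variance
bc_y, lam = boxcox(y)       # Box-Cox: data-driven variance stabilization

# Detrending via regression: fit a linear trend to log(y), keep residuals.
slope, intercept = np.polyfit(t, log_y, 1)
detrended = log_y - (intercept + slope * t)
```

(Seasonal differencing is the same `diff` call with the period as the lag, e.g. `y.diff(7)` for daily data with weekly seasonality.)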
You saw this pattern in our earlier examples:
- Stock prices → log-returns (difference of logs) → more stationary.
- Web traffic with weekly pattern → subtract average weekly shape → residuals close to stationary.
- Disease counts → adjust for population and seasonality → stationary residuals.
The workflow in 2024–2025 hasn’t changed much: modern libraries just automate parts of this. Tools like pmdarima in Python or forecast in R still rely heavily on the concept of stationarity under the hood.
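For instance, pmdarima’s `auto_arima` runs unit-root tests internally to choose the differencing order `d` before fitting; a minimal sketch on a simulated random walk:

```python
import numpy as np
import pmdarima as pm

# A random walk should need exactly one difference to become stationary.
rng = np.random.default_rng(13)
y = np.cumsum(rng.standard_normal(300))

model = pm.auto_arima(y, seasonal=False)
print(model.order)  # (p, d, q); expect d = 1 here
```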
FAQ: common questions about stationarity and examples
Q1. Can you give a simple example of a stationary time series?
Yes. A classic textbook example of stationarity in time series is a white noise process: a sequence of independent, identically distributed random variables with mean zero and constant variance. In real life, high-frequency measurement noise from a stable sensor often approximates this.
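For example, a few lines of NumPy generate white noise, and the ADF test (discussed above) confirms it looks stationary:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# White noise: i.i.d. draws with mean zero and constant variance.
noise = np.random.default_rng(21).normal(0.0, 1.0, 500)
print(f"mean = {noise.mean():.3f}, std = {noise.std():.3f}")
print(f"ADF p-value: {adfuller(noise)[1]:.4f}")  # tiny: unit root rejected
```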
Q2. Are all financial time series stationary if I take returns?
No. Returns are often treated as stationary, but not always. Volatility can change over time, structural breaks can occur, and regime shifts (for example, before vs. after a major policy change) can violate stationarity. That’s why analysts also model volatility separately and test for breaks.
Q3. What are common real examples of non-stationary time series?
Examples include raw stock prices, GDP, inflation indices, population counts, web traffic during rapid growth, and climate variables like global average temperature. These typically show trends, structural breaks, or evolving seasonality.
Q4. Why do ARIMA models care so much about stationarity?
ARIMA models assume that the underlying process is stationary after differencing. Stationarity ensures that model parameters (like AR and MA coefficients) don’t change over time, which makes forecasting mathematically tractable and statistically reliable.
Q5. How many differences should I take to achieve stationarity?
Usually, one or two rounds of differencing are enough. Over-differencing can introduce unnecessary noise and distort autocorrelation. Analysts often combine visual inspection, ADF/KPSS tests, and information criteria (AIC, BIC) to choose the appropriate order.
Bringing it together
If you remember nothing else, remember this: the most useful examples of stationarity in time series are almost never the raw data. They’re carefully transformed versions that strip away trend, seasonality, and structural shifts until what’s left has a stable mean, variance, and autocorrelation structure.
The three core case studies—stock returns, stabilized web traffic residuals, and controlled sensor readings—are the kinds of real examples you’ll see again and again. Once you can recognize those patterns, you’ll know when your favorite models are on solid ground, and when your time series is quietly breaking all the assumptions under the hood.
Related Topics
Real-world examples of autocorrelation function (ACF) in time series
Examples of Cointegration in Time Series: 3 Practical Examples You’ll Actually Use
Real-world examples of moving averages in time series analysis
Real-world examples of partial autocorrelation function (PACF)
Real-world examples of R time series analysis (with code-style walkthroughs)