Most finance teams spend three months building an annual budget and the next nine explaining why actuals differ from it. The budget was not wrong when it was set. The business moved, and the budget had no mechanism to move with it.

The market moved. A key hire did not happen on schedule. A product launch slipped by six weeks. A large customer renewed at a different contract value than planned. None of these are failures — they are the normal conditions of a business operating in a real environment. But the annual budget, locked in December for a twelve-month horizon, has no mechanism for absorbing them. It just becomes progressively less relevant as the year unfolds.

By February, most finance teams are managing against a budget they know is wrong. By Q3, the variance commentary has become an exercise in explaining why actuals differ from a plan that no longer reflects how the business is being run. The annual budget has become a historical artefact rather than a planning tool.

The rolling forecast addresses this not by patching the annual budget but by replacing its fixed structure entirely.


What a Rolling Forecast Is — and What It Is Not

A rolling forecast maintains a fixed forward horizon — typically twelve months — that moves with time. When January closes, the forecast does not shrink to an eleven-month view. It extends to include the following January, keeping the planning horizon constant regardless of where you are in the calendar year.

This sounds simple. The implications are significant.

A rolling forecast is not a reforecast of the annual budget. Reforecasting updates the current year’s numbers when actuals deviate materially from the original plan. It is a patch on a fixed structure. A rolling forecast replaces the fixed structure entirely. There is no “budget year” that the forecast is anchored to. There is only a continuously updated view of the next twelve months, built on the best available assumptions at any given point in time.

A rolling forecast is also not a licence to change every number every month. The discipline of a good rolling process is knowing what to update and what to leave stable. Locking actuals as they close, updating the near-term quarters with current information, and maintaining a directional view of the outer months without false precision — that is the operational rhythm that makes a rolling forecast useful rather than chaotic.


The Mechanics: How a Rolling Forecast Works

The rolling forecast operates on a lock-and-extend logic each month.

When a period closes, actuals replace the forecast for that period. The forecast does not look back. It looks forward from the most recent closed period, with the horizon extending by one month to maintain the twelve-month window.

\[\text{Rolling Horizon} = \text{Last Closed Month} + \text{12 Months Forward}\]
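The lock-and-extend step can be sketched in a few lines of Python. The data shapes here are illustrative — a forecast held as a dict keyed by `(year, month)` — not a real planning system's API; the point is the mechanics: closed months take the actual value and are never revised, and the window is extended so twelve forward months always remain.

```python
def roll_forward(forecast, actuals, horizon=12):
    """Lock-and-extend: lock closed months to actuals, then extend the
    horizon so `horizon` forward months exist past the last close.
    `forecast` and `actuals` are dicts keyed by (year, month) tuples;
    the shapes and names are illustrative, not a real library API."""
    updated = dict(forecast)
    # Lock: closed periods take the actual value and are never revised.
    for period, value in actuals.items():
        updated[period] = value
    # Simplest possible carry-forward assumption for newly added months:
    # repeat the last existing forecast value (a placeholder, not a model).
    tail_value = updated[max(updated)]
    # Extend: ensure `horizon` months exist beyond the last closed month.
    year, month = max(actuals)
    for step in range(1, horizon + 1):
        m = month + step
        period = (year + (m - 1) // 12, (m - 1) % 12 + 1)
        updated.setdefault(period, tail_value)
    return updated
```

Closing January 2025 against a twelve-month forecast leaves a window that still ends twelve months out — January 2026 appears, and the closed month holds the actual, not the forecast.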

In practice, the update cycle has three distinct layers of precision.

Near term (months 1–3): Updated with current information — revised headcount plans, confirmed pipeline, known cost changes, closed deals. This layer should reflect what the business actually expects to happen, not what the original budget assumed.

Mid term (months 4–6): Updated for known structural changes — strategic decisions that have been made, market shifts that are confirmed, cost commitments that are locked. Beyond that, the original driver assumptions hold unless there is specific information to change them.

Outer term (months 7–12): Directional. The driver-based structure carries these months forward from the mid-term assumptions. Precision here is neither achievable nor useful. The outer months exist to maintain the horizon, not to commit to a specific number.

This layered precision is important because it prevents two opposite failure modes. Updating everything every month produces noise and erodes the credibility of the forecast as a planning tool. Never updating the outer months produces a forecast that is current near-term and stale at the horizon — which defeats the purpose of maintaining the rolling window.
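The three layers reduce to a simple decision rule for any forward month. A minimal sketch, using the article's own cut-offs of three and six months (the function and return strings are illustrative):

```python
def update_layer(months_out):
    """Which update discipline applies to a month `months_out` steps
    past the last closed period. Cut-offs (3 and 6) follow the three
    layers described above; this is an illustration, not a standard."""
    if months_out <= 3:
        return "near: refresh with current information"
    if months_out <= 6:
        return "mid: update only for confirmed structural changes"
    return "outer: carry driver assumptions forward, directional only"
```

Encoding the rule this explicitly is less about automation than about governance: it makes the "what do we update this month" conversation a settled policy rather than a monthly negotiation.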


Locking Actuals: The Discipline That Holds the Process Together

The integrity of a rolling forecast depends on a clean separation between what has happened and what is expected to happen.

When a month closes, the actuals for that month are locked. The forecast does not revise them. The variance between forecast and actuals for the closed period is captured in the variance analysis — it becomes the diagnostic tool I covered in Variance Analysis: Making the Monthly Actuals Review Actually Useful. But the forecast itself moves forward without looking back.

This discipline matters for two reasons. First, it preserves forecast accuracy tracking — if you revise closed periods in the forecast, you lose the ability to measure how well the forecasting process is working. Second, it forces the finance team to make a deliberate decision each month: does this variance change our forward view, or was it situational? That decision is where the forecasting judgement lives.

\[\text{Forecast Accuracy} = 1 - \frac{|\text{Forecast} - \text{Actual}|}{\text{Actual}}\]

Tracking forecast accuracy by period and by driver — not just in aggregate — is what allows a finance team to systematically improve the quality of their assumptions over time. A team that is consistently over-optimistic on enterprise pipeline conversion has a specific problem to solve. A team that looks at aggregate forecast accuracy misses this.
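Tracking accuracy per driver is a small amount of code once forecast-versus-actual pairs are captured. A sketch, assuming records arrive as `(driver, forecast, actual)` tuples (an illustrative shape, not a real system's export format):

```python
def forecast_accuracy(forecast, actual):
    """Accuracy = 1 - |forecast - actual| / actual, per the formula above."""
    return 1 - abs(forecast - actual) / actual

def accuracy_by_driver(records):
    """Mean accuracy per driver, so systematic bias on a specific
    assumption (e.g. pipeline conversion) is visible rather than
    washed out in an aggregate figure."""
    scores = {}
    for driver, f, a in records:
        scores.setdefault(driver, []).append(forecast_accuracy(f, a))
    return {d: sum(v) / len(v) for d, v in scores.items()}
```

A team that is consistently over-forecasting one driver will see it immediately in this breakdown — the aggregate number would hide it behind drivers that are forecast well.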


Rolling Forecasts and Driver-Based Models

A rolling forecast without a driver-based model is a spreadsheet update exercise. The forecast changes because someone manually revised the numbers. There is no structural reason for the change, and the next month requires the same manual intervention.

A rolling forecast built on a driver-based model is different. When the sales leader updates the pipeline conversion rate assumption, the revenue forecast for months 3 through 12 updates automatically. When the people team revises the hiring plan, the headcount costs, benefits, and associated overheads cascade through the model. The finance team’s role shifts from entering numbers to validating assumptions — which is a fundamentally more valuable activity.
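One branch of such a driver tree can be sketched as a pure function of its assumptions. The driver names here (pipeline volume, conversion rate, average deal value) are illustrative, not a prescribed revenue model:

```python
def revenue_forecast(pipeline_by_month, conversion_rate, avg_deal_value):
    """One branch of a driver tree: monthly revenue derived from
    pipeline volume x conversion rate x average deal value.
    Changing one assumption re-derives every forward month at once."""
    return {month: deals * conversion_rate * avg_deal_value
            for month, deals in pipeline_by_month.items()}
```

When the sales leader revises the conversion assumption, re-running the function with the new rate updates all forward months in one pass — no month-by-month manual re-entry, which is the structural difference from a spreadsheet update exercise.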

I covered the construction of driver trees in Driver-Based Budgeting: Moving Beyond Line-Item Extrapolation. The same driver tree that structures the annual budget is the engine of the rolling forecast. The two processes share an architecture — which means that improving one improves the other, and the conversations with business leaders about assumptions happen once rather than twice.


The Governance Question: Who Updates What and When

The most common failure in rolling forecast implementation is governance, not mechanics. The model works. The process does not.

A rolling forecast requires clear answers to three questions before the first cycle runs.

Who owns each driver assumption? Revenue assumptions are owned by the commercial team, not the finance team. Headcount assumptions are owned by the people function and validated by department heads. Cost assumptions split between finance (fixed infrastructure) and function heads (variable opex). If finance owns all assumptions, the forecast reflects the finance team’s view of the business rather than the business’s view of itself — and it will be ignored accordingly.

What triggers an assumption update outside the monthly cycle? A major deal win, a senior departure, a market shock — these should trigger an off-cycle update rather than waiting for the next monthly close. The rolling forecast needs a decision rule for when ad hoc updates are warranted and who authorises them.

How is forecast accuracy tracked and reviewed? Accuracy by driver, by time horizon, and by business unit. Reviewed quarterly with the CFO and the relevant business owners. Without this feedback loop, the forecast improves only by accident.


Rolling Forecast vs. Annual Budget: The Honest Trade-Off

The case for replacing the annual budget with a rolling forecast is strong in environments with meaningful uncertainty — growth-stage businesses, businesses in volatile markets, businesses where the operating model is changing. The rolling forecast keeps the planning tool connected to business reality rather than anchoring decisions to assumptions made twelve months ago.

The honest limitation is process cost. A well-run rolling forecast requires more frequent engagement from business leaders, a more sophisticated model infrastructure, and a finance team that can update assumptions intelligently rather than mechanically. In a business without those conditions, a rolling forecast produces neither the commitment of an annual budget nor the currency of a genuine forward view. It produces a number that changes every month for reasons nobody fully understands.

The transition to rolling forecasting also has a political dimension. Annual budgets create a shared commitment that organisations use to hold teams accountable. Rolling forecasts can feel, to business leaders accustomed to annual targets, like a moving goalpost. Managing that perception — making clear that the rolling forecast is a planning tool, not an excuse to avoid accountability — is part of the finance team’s job in the transition.

The question worth asking before committing to a rolling forecast is whether the business’s planning problems are caused by the static nature of the budget or by the quality of the assumptions in it. A rolling forecast solves the first problem. A driver-based model with better assumption governance solves the second. Most businesses that struggle with their planning process have both — which is why the two are worth solving together rather than sequentially.


The next article in this series covers zero-based budgeting — when the clean-sheet approach to cost planning makes sense, when it does not, and how it complements rather than replaces the driver-based and rolling forecast infrastructure.

I would love to hear where your planning process sits right now — whether you are still running on an annual budget, experimenting with rolling forecasts, or somewhere in between. Let’s connect.