Scenario Planning That Gets Used

Most scenario planning exercises end the same way. The finance team builds three cases (base, upside, downside), presents them in a board pack, and watches them gather dust for the rest of the year. The base case becomes the operating plan. The upside case gets a polite nod. The downside case makes everyone uncomfortable and is quietly set aside.

By Q2, nobody references any of the scenarios. By Q3, the business has moved into conditions that none of the three cases anticipated. The exercise produced work. It did not produce decisions.

I have built scenario frameworks that changed capital allocation decisions, and I have built ones that produced nothing but a prettier slide deck. The difference was never the model. It was always the design of the scenarios themselves: what they tested, how they connected to the decisions leadership was actually facing, and whether the output gave someone a reason to act differently.


Why Most Scenario Planning Produces Shelf-Ware

The standard approach to scenario planning starts with a financial model and asks: what if things go better or worse? Revenue gets adjusted up by 15% for the upside case and down by 15% for the downside case. Costs get scaled proportionally. Three columns appear in the P&L, and the finance team presents the range.

The problem is not the math. The problem is the design.

Uniform percentage adjustments to financial line items are not scenarios. They are sensitivity analysis dressed up as strategic thinking. A real scenario describes a set of business conditions: a market shift, a competitive move, a regulatory change, a demand pattern that differs meaningfully from the base assumption. Each of those conditions has specific consequences for specific drivers. Revenue might decline while costs remain fixed. Churn might accelerate while new bookings hold. The margin structure might change even if the topline does not.

The second problem is relevance. Most scenario exercises are designed around the question “what could go wrong?” when leadership is actually asking “what should we do if conditions change?” Those are different questions. The first produces a range of outcomes. The second produces a decision framework. Only the second one gets used.

The third problem is timing. Scenarios built during the annual planning cycle and never revisited become stale within weeks. The business environment that generated the assumptions has already shifted, and the scenarios no longer reflect the conditions leadership is thinking about. A scenario framework that is only useful in December is not a planning tool. It is a planning exercise.


Starting from Decisions, Not from the Model

The most effective scenario frameworks I have built started with a conversation, not a spreadsheet.

Before touching the model, I sit with the CFO and the leadership team and ask one question: what are the two or three decisions you expect to face in the next twelve months where the answer depends on how conditions unfold? The responses are always specific. Should we hire ahead of demand or wait for the pipeline to confirm? Should we invest in a new market now or defer until the existing market stabilises? Should we extend customer payment terms to protect volume or tighten them to protect cash?

Each of those decisions has a set of conditions under which the answer changes. Hiring ahead of demand makes sense if conversion rates hold and the pipeline grows at 10% quarter-over-quarter. It does not make sense if the pipeline flattens or conversion rates drop by 200 basis points. The scenario is not “revenue goes up” or “revenue goes down.” The scenario is: the pipeline grows but conversion weakens, and here is what that means for headcount timing, cash runway, and full-year EBITDA.

This is why a driver-based model is the prerequisite for scenario planning that works. When the budget is built on operational drivers (pipeline volume, conversion rate, churn, headcount ramp, cost per acquisition), you can stress-test those drivers individually and in combination. When the budget is built on line items with percentage growth rates, the only scenario you can run is “what if the percentage is different,” which tells leadership nothing they could not have guessed.
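To make the contrast concrete, here is a minimal sketch of a driver-based revenue model. The driver names and figures are illustrative assumptions, not a real business; the point is that a scenario is just a different set of driver values, not a scaled total.

```python
# Minimal driver-based model sketch: revenue flows from operational
# drivers, so a scenario overrides specific drivers rather than
# applying a flat percentage. All names and figures are illustrative.

def quarterly_revenue(pipeline, conversion_rate, avg_contract_value,
                      existing_arr, churn_rate):
    """New bookings plus retained revenue for one quarter."""
    new_bookings = pipeline * conversion_rate * avg_contract_value
    retained = existing_arr * (1 - churn_rate)
    return new_bookings + retained

base = dict(pipeline=400, conversion_rate=0.25,
            avg_contract_value=50_000, existing_arr=20_000_000,
            churn_rate=0.03)

# Scenario: the pipeline grows but conversion weakens -- two drivers
# move in different directions, which a uniform haircut cannot express.
scenario = {**base, "pipeline": 440, "conversion_rate": 0.23}

print(f"base:     {quarterly_revenue(**base):,.0f}")
print(f"scenario: {quarterly_revenue(**scenario):,.0f}")
```

Because the model sits on drivers, stress-testing a combination of them is a dictionary override, not a rebuild.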


Structuring the Three Cases

The base case, upside case, and downside case framework is fine as a starting point. The discipline is in how you define each one.

Base case: the operating plan. This is the set of assumptions the business is managing toward. Pipeline conversion holds at the trailing six-month average. Churn matches the renewal rate from the most recent cohort data. Headcount follows the approved hiring plan. Cost inflation runs at 4%. The base case is not optimistic or pessimistic. It is the forward view that the leadership team has agreed to operate against, built on the best available data.

Downside case: the stress test. This is not “everything goes wrong.” It is the specific set of conditions that would force a change in the operating plan. The question to ask is: what combination of driver movements would require us to make a decision we are not currently planning to make? That might be a hiring freeze, a deferral of the market expansion, a reduction in discretionary spend, or a renegotiation of payment terms. The downside case should trigger a specific action, not just produce a smaller number.

Upside case: the acceleration test. This is the set of conditions under which the business should invest faster than the base case assumes. The question is: if demand comes in stronger than planned, are we ready to capture it? The upside case should identify the capacity constraints (hiring speed, infrastructure limits, working capital requirements) that would prevent the business from capturing the opportunity, and quantify the investment required to remove them.

The discipline in structuring these cases is specificity. Each case should describe a narrative, not just a number. “Revenue declines by 15%” is a sensitivity test. “Enterprise pipeline slows by 20% because the market contracts, but SMB volume holds because our pricing is competitive in a downturn, and the mix shift compresses blended ACV by 8%” is a scenario. The first gives you a range. The second gives you a plan.
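That mix-shift arithmetic is easy to verify. Here is an illustrative computation of the second narrative, with hypothetical segment figures chosen only to show the mechanism:

```python
# Illustrative segment figures: enterprise pipeline slows 20%,
# SMB volume holds, and the resulting mix shift compresses blended ACV.

enterprise = {"deals": 100, "acv": 120_000}
smb = {"deals": 300, "acv": 20_000}

def blended_acv(segments):
    total_deals = sum(s["deals"] for s in segments)
    total_value = sum(s["deals"] * s["acv"] for s in segments)
    return total_value / total_deals

base_acv = blended_acv([enterprise, smb])

# Scenario: enterprise volume falls 20%, SMB unchanged.
stressed_enterprise = {**enterprise, "deals": 80}
scenario_acv = blended_acv([stressed_enterprise, smb])

compression = 1 - scenario_acv / base_acv
print(f"blended ACV: {base_acv:,.0f} -> {scenario_acv:,.0f} "
      f"({compression:.1%} compression)")
```

Notice that the ACV compression is an output of the mix shift, not an input someone guessed. That is what makes the scenario internally consistent.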


What to Stress-Test and What to Leave Alone

Not every driver deserves a scenario. The point is to test the drivers that have the most influence on the decisions leadership needs to make and the most uncertainty in their forward trajectory. Testing everything produces noise. Testing the wrong things produces irrelevance.

I use a simple two-by-two to prioritise: impact on EBITDA (or cash, depending on what the business is managing toward) on one axis, and uncertainty in the forward assumption on the other.

High impact, high uncertainty: These are the drivers that belong in the scenario framework. Pipeline conversion rate, customer churn, average contract value, key input costs. A 200 basis point shift in any of these changes the full-year outcome materially, and the business does not have enough data to predict the direction with confidence.

High impact, low uncertainty: These belong in the base case as fixed assumptions. Rent, committed contracts, regulatory costs. They move the P&L but they are known, so building scenarios around them adds complexity without insight.

Low impact, high uncertainty: These are distractions. Office supply costs might be volatile, but a 30% swing does not change any decision. Leave them at base case.

Low impact, low uncertainty: Ignore these entirely. They are stable and immaterial.

The practical output is a short list: typically three to five drivers that genuinely matter. For a SaaS business, that list usually includes net revenue retention, pipeline conversion rate, sales cycle length, and headcount ramp speed. For a manufacturing business, it might be raw material costs, capacity utilisation, and order book velocity. For a professional services firm, it is utilisation rate, average bill rate, and project pipeline.
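The triage itself can be written down as a rule, which helps when a team debates where a driver belongs. A sketch, with example classifications drawn from the quadrants above (the driver list and the boolean scores are illustrative judgment calls, not data):

```python
# Sketch of the two-by-two triage: each driver gets a judgment call on
# impact (would a plausible shock move EBITDA materially?) and on
# uncertainty (can we predict its direction with confidence?).

def quadrant(impact_high, uncertainty_high):
    if impact_high and uncertainty_high:
        return "scenario framework"
    if impact_high:
        return "base case, fixed assumption"
    if uncertainty_high:
        return "distraction: leave at base case"
    return "ignore"

drivers = {
    # driver: (high impact?, high uncertainty?)
    "pipeline_conversion": (True, True),
    "rent": (True, False),
    "office_supplies": (False, True),
    "bank_fees": (False, False),
}

for name, (impact, uncertainty) in drivers.items():
    print(f"{name:20s} -> {quadrant(impact, uncertainty)}")
```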


The Scenario Output That Leadership Actually Reads

The most common mistake in presenting scenario outputs is showing three full P&Ls side by side. Leadership does not want to compare sixty line items across three columns. They want to know three things: what changes, by how much, and what should we do about it.

The format I have found most effective is a single page with four elements.

First: the decision map. A plain-language summary of the decisions each scenario triggers. “If enterprise pipeline slows by 20%: defer Q3 hiring for the commercial team (saves 8 headcount, preserves 14 months of runway). If pipeline holds and conversion improves: accelerate the market expansion by one quarter (requires 6 additional hires and a working capital injection of X).” The decisions come first because they are the reason the scenarios exist.

Second: the key driver comparison. A table showing the three to five drivers that vary across scenarios, with the base case value, the downside value, and the upside value. No P&L. Just the inputs that are changing, with the rationale for each assumption in a single sentence.

Third: the financial impact summary. Revenue, EBITDA, and cash position under each scenario. Three numbers, three scenarios. Not a full P&L. If leadership needs the detail, it lives in the appendix.

Fourth: the trigger indicators. Observable, measurable signals that tell leadership which scenario is unfolding. “If pipeline coverage drops below 2.5x for two consecutive months, we are tracking toward the downside case.” “If net revenue retention exceeds 110% in the next quarter, the upside case is in play.” These are the indicators the finance team monitors monthly and reports against, so the scenarios stay alive beyond the planning cycle.

The trigger indicators are what separate a living scenario framework from a one-time exercise. Without them, the scenarios are a snapshot. With them, the scenarios are a monitoring tool that connects the planning process to the monthly operating rhythm.
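A monthly trigger review is simple enough to express as a checklist the forecast process runs mechanically. A sketch, with illustrative metric names and thresholds (the article's pipeline-coverage rule also requires two consecutive months, which a real implementation would track; this sketch checks a single month for brevity):

```python
# Sketch of a monthly trigger review: each trigger is an observable
# metric, a breach condition, and the scenario it points toward.
# Metric names and thresholds are illustrative assumptions.

TRIGGERS = [
    # (metric, breach predicate, scenario signalled)
    ("pipeline_coverage", lambda v: v < 2.5, "downside"),
    ("net_revenue_retention", lambda v: v > 1.10, "upside"),
]

def review_triggers(monthly_metrics):
    """Return the scenarios whose triggers the latest metrics breach."""
    signals = []
    for metric, breached, scenario in TRIGGERS:
        if breached(monthly_metrics[metric]):
            signals.append((metric, scenario))
    return signals

march = {"pipeline_coverage": 2.3, "net_revenue_retention": 1.04}
print(review_triggers(march))  # pipeline coverage signals the downside case
```

The output of this check belongs in the monthly forecast commentary, which is what keeps the scenarios alive between planning cycles.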


Keeping Scenarios Alive After the Planning Cycle

The annual planning cycle produces scenarios in December. By March, the assumptions are stale. The business has moved, new information has emerged, and the scenarios reflect a world that no longer exists.

The fix is to connect scenario monitoring to the monthly forecast process. When the rolling forecast updates each month, the scenario triggers should be reviewed alongside the forecast assumptions. Are we tracking toward the base case or drifting toward the downside? Has a trigger indicator been breached? Does the current trajectory require an escalation to the CFO?

This is not a full scenario rebuild every month. The structure stays the same. The driver assumptions in the base case update with the forecast. The gap between the updated base case and the downside or upside case narrows or widens. The finance team’s job is to report that gap, not to rebuild the entire framework.

In practice, I review the trigger indicators monthly as part of the forecast commentary and do a full scenario refresh quarterly. The quarterly refresh allows for structural changes: new drivers, revised scenario narratives, updated decision maps. The monthly review keeps the framework connected to the operating rhythm between refreshes.


Sensitivity Analysis vs. Scenario Planning: The Distinction That Matters

Sensitivity analysis and scenario planning are related but different tools, and using one when you need the other is a common source of wasted effort.

Sensitivity analysis tests the impact of a single variable changing while everything else holds constant. What happens to EBITDA if churn increases by 2 percentage points? What happens to cash runway if DSO extends by 10 days? These are useful for understanding how sensitive the financial model is to each driver. They answer the question: how exposed are we to this specific risk?

Scenario planning tests the impact of a coherent set of conditions changing together. A market downturn does not just increase churn. It also slows the pipeline, extends the sales cycle, puts pressure on pricing, and may increase input costs if suppliers are facing the same environment. A scenario captures these interdependencies. Sensitivity analysis does not.

The practical implication is that sensitivity analysis is a building block for scenario design. Run the sensitivities first to understand which drivers have the most financial impact. Then combine those drivers into coherent scenarios that describe a plausible business environment. The sensitivities tell you what matters. The scenarios tell you what to do about it.
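That sequencing can be sketched in a few lines: flex one driver at a time to rank the sensitivities, then combine the correlated movers into one coherent case. The toy EBITDA model, driver values, and shock sizes below are illustrative assumptions:

```python
# Sensitivity pass first, coherent scenario second.
# The model, drivers, and shock sizes are illustrative.

def ebitda(d):
    revenue = d["pipeline"] * d["conversion"] * d["acv"]
    costs = d["headcount"] * d["cost_per_head"] + d["fixed_costs"]
    return revenue - costs

base = {"pipeline": 500, "conversion": 0.2, "acv": 60_000,
        "headcount": 40, "cost_per_head": 90_000, "fixed_costs": 1_500_000}

# Step 1: one-variable sensitivities, everything else held constant.
shocks = {"pipeline": 0.9, "conversion": 0.9, "acv": 0.95, "cost_per_head": 1.05}
base_ebitda = ebitda(base)
impacts = {}
for driver, factor in shocks.items():
    flexed = {**base, driver: base[driver] * factor}
    impacts[driver] = ebitda(flexed) - base_ebitda

for driver, impact in sorted(impacts.items(), key=lambda kv: kv[1]):
    print(f"{driver:14s} {impact:+,.0f}")

# Step 2: combine the correlated movers into one downturn scenario.
downturn = {**base, "pipeline": base["pipeline"] * 0.9,
            "conversion": base["conversion"] * 0.9,
            "acv": base["acv"] * 0.95}
print(f"downturn EBITDA delta: {ebitda(downturn) - base_ebitda:+,.0f}")
```

Note that the combined scenario delta is not the sum of the individual sensitivities, because the drivers interact multiplicatively; that interaction is exactly what sensitivity analysis alone misses.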


The Honest Limitation

Scenario planning does not predict the future. It prepares the organisation to respond faster when the future reveals itself.

The value is not in the accuracy of the scenarios. It is in the fact that leadership has already thought through what they would do if conditions change, agreed on the triggers that would prompt action, and identified the investments or cuts that each scenario requires. When the trigger is breached, the response is a confirmation of a plan that already exists, not a scramble to build one under pressure.

The limitation is that scenarios are only as good as the imagination and discipline of the team that builds them. A finance team that designs scenarios around the risks leadership is comfortable discussing will miss the risks that actually materialise. The downside case has to include conditions that are genuinely uncomfortable, not just a smaller version of the base case. That requires a finance function with enough credibility and independence to push leadership toward the questions they would rather not answer.


The rolling forecast article covers how the monthly update process works, and the driver-based budgeting piece explains the model architecture that makes scenario planning possible in the first place. The three articles form a connected sequence: build the model on drivers, stress-test the drivers through scenarios, and keep both current through a rolling forecast process.

If you are building a scenario framework or trying to get more traction from the one you already have, I would love to hear what is working and where it stalls. Let’s connect.

