KPIs That Actually Matter

I have seen dashboards with 47 metrics, four tabs of conditional formatting, and a refresh cadence of “whenever someone remembers.” The CFO glances at it in the first week, stops opening it by the third, and by month two the team is back to building the monthly pack in a slide deck. The dashboard did not fail because the data was wrong. It failed because nobody designed it around a decision.

The finance dashboard is the most visible artifact the FP&A team produces, and the one most likely to be built backward: starting from “what data do we have?” rather than “what decisions does this audience need to make?” That inversion explains why most dashboards are accurate, complete, and ignored. When a CFO asked me to rebuild the finance dashboard from scratch last year, the first thing I did was delete 80% of the metrics. What remained actually got used.


Why Most Dashboards Fail

The failure pattern is remarkably consistent across organisations.

Too many metrics. A dashboard with 30 KPIs is not a dashboard. It is a data dump with formatting. When everything is highlighted, nothing is highlighted. The person reading it has to do the analytical work of figuring out which numbers matter, which is precisely the work the dashboard was supposed to do for them.

Wrong audience. A board member and a business unit head need fundamentally different views of the business. The board needs five to seven metrics that tell them whether the strategy is working. The BU head needs operational detail that helps them run their function. When one dashboard tries to serve both, it serves neither. The board finds it too detailed. The BU head finds it too high-level. Both stop using it.

No action triggers. A metric that does not prompt a question or a decision is decoration. Revenue is up 8% against budget. What do I do with that? Nothing, unless the dashboard tells me which segment drove it, whether the growth is sustainable, and whether the forecast has been updated. A number without context is noise.

Stale data. A dashboard that refreshes monthly is a report with extra steps. By the time it updates, the conversation has already happened and the decisions have already been made (or deferred). The dashboard arrives after the moment it was needed.

These four problems share a root cause: the dashboard was built as an output of the data infrastructure rather than as an input to the decision process.


The Audience Framework: Board vs CFO vs BU Heads

The single most useful thing I do before building any dashboard is write down, in plain language, the three to five decisions the intended audience makes on a recurring basis. The metrics follow from there.

Board-level dashboard

The board meets quarterly and makes strategic allocation decisions. They are not managing operations. They need to know: is the business growing in the right direction? Is it generating cash? Are the risks under control?

The board dashboard should have five to seven metrics, and every one of them should connect to a strategic question.

  • Revenue growth rate (trailing twelve months, not monthly, because monthly volatility obscures the trend for a quarterly audience)
  • EBITDA margin and its trajectory (with a note on Ind AS 116 distortion if applicable, a trap I covered in The Hidden Debt)
  • Cash conversion (operating cash flow as a percentage of EBITDA, because a business that generates profit but not cash has a working capital problem, not a performance story)
  • Customer concentration or revenue diversification (the board needs to know if the business is one contract away from a crisis)
  • One forward-looking metric: pipeline coverage, bookings trajectory, or forecast confidence range

That is it. Five metrics. No tabs. No drill-downs. If a board member needs more detail, it lives in the board pack appendix, not on the dashboard.
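
Two of these five are ratios that get computed three different ways in the same company unless someone pins them down. A minimal sketch in Python (function names and inputs are illustrative, not tied to any particular system):

    # Trailing-twelve-month revenue growth: the latest 12 months against
    # the 12 months before that, so monthly volatility cancels out.
    def ttm_growth(monthly_revenue):
        """At least 24 monthly revenue figures, oldest first."""
        latest = sum(monthly_revenue[-12:])
        prior = sum(monthly_revenue[-24:-12])
        return (latest - prior) / prior

    # Cash conversion: operating cash flow as a share of EBITDA. A ratio
    # persistently below 1.0 says profit is not turning into cash.
    def cash_conversion(operating_cash_flow, ebitda):
        return operating_cash_flow / ebitda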

CFO dashboard

The CFO operates at a different cadence and makes different decisions: where to allocate capital within the plan, which variances require intervention, whether the forecast needs revision.

The CFO dashboard is the bridge between strategic and operational. It should refresh weekly and include:

  • Budget versus actuals for the current period, with variance classification (situational versus structural, the framework from the variance analysis article)
  • Rolling forecast versus original budget, showing where the forward view has shifted and why
  • Working capital metrics: DSO, DPO, and cash conversion cycle trending over time (not a single snapshot, because the direction matters more than the number)
  • Headcount versus plan, because in most businesses, people cost is 60 to 70 percent of the cost base and the headcount plan is the single largest budget driver
  • A variance bridge (price-volume-mix) for the top revenue line, refreshed from the monthly PVM analysis

Every metric on the CFO dashboard should connect to the four variance commentary questions: situational or structural, full-year implication, who owns the response, and has the forecast been updated? If a metric cannot trigger one of those questions, it does not belong here.
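
The working capital bullet hides three standard formulas, and the definitions should be written down once so the trend line means the same thing every week. A sketch, assuming 365-day years and period-end balances (a common simplification; some teams prefer average balances):

    # Days sales outstanding: how long revenue sits in receivables.
    def dso(accounts_receivable, revenue, days=365):
        return accounts_receivable / revenue * days

    # Days payable outstanding: how long the business holds its payables.
    def dpo(accounts_payable, cogs, days=365):
        return accounts_payable / cogs * days

    # Days inventory outstanding, needed for the full cycle.
    def dio(inventory, cogs, days=365):
        return inventory / cogs * days

    # Cash conversion cycle: cash out to cash back in. Trend it;
    # do not read a single snapshot.
    def ccc(ar, ap, inventory, revenue, cogs):
        return dso(ar, revenue) + dio(inventory, cogs) - dpo(ap, cogs)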

Business unit head dashboard

The BU head needs operational KPIs that help them run their function on a weekly or daily basis. This is the most granular view and the one most likely to be overloaded.

The discipline here is constraint. Each BU dashboard should have a maximum of ten metrics, divided into two categories.

Performance metrics (four to five): the KPIs that measure whether the function is delivering. For a commercial team: pipeline conversion rate, average deal size, win rate by segment. For operations: SLA adherence, cost per unit, utilisation rate.

Health metrics (three to four): the leading indicators that signal problems before they show up in the financials. For a commercial team: pipeline coverage ratio and sales cycle length. For customer success: NPS trend and logo churn rate. These tell you something is about to go wrong, not that it already has.

The BU dashboard is the only one that should include drill-down capability. The board and CFO dashboards should not. Drill-downs on a strategic dashboard encourage the wrong behaviour: executives spending time in the weeds instead of asking the right questions.


The Metric Selection Framework

Not every number that can be measured should be on a dashboard. The selection framework I use has four filters, and a metric must pass all four to earn a place on the page.

Filter 1: Is it decision-linked? Can you name the specific decision this metric informs? “Revenue” is not decision-linked unless it is broken into components that point to an action. “Enterprise segment revenue versus forecast, by region” is decision-linked because a miss in a specific region prompts a specific conversation with a specific owner.

Filter 2: Is it owned? Every metric on the dashboard needs an owner: a person who can explain why it moved and what they are doing about it. Metrics without owners become spectator sports. The team looks at the number, nobody acts on it, and it trains the organisation to treat the dashboard as a passive report rather than an active management tool.

Filter 3: Does it have a threshold? A metric without a threshold is just a number. What constitutes “good”? What triggers a review? The thresholds do not need to be on the dashboard itself, but they need to exist in the operating rhythm. A DSO of 35 is fine if your terms are 45 days. It is a problem if your terms are 21 days.

Filter 4: Can it be gamed? This filter catches the metrics that optimise for the wrong behaviour. Revenue recognised versus revenue collected. Deals closed versus deals delivered. If the metric can be maximised without improving the underlying business outcome, it needs a companion metric that provides balance, or it should be replaced with one that measures the outcome directly.

A dashboard that passes all four filters for every metric will typically have eight to twelve KPIs. That is not a limitation. That is the design working correctly.
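
The filters are concrete enough to encode and keep next to the metric catalogue. A minimal sketch (the Metric fields are my illustration, not a standard):

    from dataclasses import dataclass

    @dataclass
    class Metric:
        name: str
        decision: str | None    # Filter 1: the decision this metric informs
        owner: str | None       # Filter 2: who explains movement and acts on it
        threshold: str | None   # Filter 3: what triggers a review
        gameable: bool = False  # Filter 4: can it be maximised without the outcome?
        companion: str | None = None  # balancing metric, required if gameable

    def earns_a_place(m: Metric) -> bool:
        """A metric must pass all four filters to reach the dashboard."""
        return (m.decision is not None
                and m.owner is not None
                and m.threshold is not None
                and (not m.gameable or m.companion is not None))

    candidate = Metric(
        name="Enterprise revenue vs forecast, by region",
        decision="Escalate to the regional owner on a miss",
        owner="VP Enterprise Sales",
        threshold="Review if variance exceeds 5% of forecast",
    )
    assert earns_a_place(candidate)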


Refresh Cadence: When “Real-Time” Is the Wrong Answer

The instinct is to make everything real-time. The instinct is usually wrong.

Real-time data introduces volatility that obscures the trend. Daily revenue fluctuates based on invoicing timing, payment processing, and accounting cut-offs. A CFO watching daily revenue will react to noise that disappears in a weekly view. And real-time pipelines are expensive to build, maintain, and validate for a marginal improvement in decision quality.

The right cadence depends on the decision cycle.

Daily refresh makes sense for operational metrics where the BU head needs to act within 24 hours: cash position, collections activity, open purchase orders above a threshold.

Weekly refresh works for most CFO-level metrics: budget versus actuals (month-to-date), rolling forecast updates, working capital trends, headcount versus plan.

Monthly refresh is appropriate for strategic and board-level metrics: EBITDA margin, revenue growth rate, customer concentration, forecast accuracy tracking.

Quarterly refresh applies to structural metrics that change slowly: LTV-to-CAC ratio, market share estimates, competitive positioning indicators.

The mistake I see most often is applying a weekly refresh to metrics that only make analytical sense monthly, and a monthly refresh to metrics where the decision window is weekly. Match the cadence to the decision cycle, not to the fastest pipeline available.
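
The cadence decision is worth recording as configuration rather than tribal knowledge, so nobody quietly upgrades a monthly metric to daily. A sketch of what that registry might look like (the metric keys are illustrative):

    # Cadence follows the decision cycle, not the fastest available pipeline.
    REFRESH_CADENCE = {
        "daily": [       # BU head must act within 24 hours
            "cash_position",
            "collections_activity",
            "open_pos_above_threshold",
        ],
        "weekly": [      # CFO-level intervention decisions
            "budget_vs_actuals_mtd",
            "rolling_forecast_delta",
            "working_capital_trend",
            "headcount_vs_plan",
        ],
        "monthly": [     # strategic and board-level
            "ebitda_margin",
            "revenue_growth_ttm",
            "customer_concentration",
            "forecast_accuracy",
        ],
        "quarterly": [   # structural, slow-moving
            "ltv_to_cac",
            "market_share_estimate",
        ],
    }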


The Dashboard Architecture: What Goes Where

Once you have your audience, metrics, and cadence, the layout itself matters more than most teams realise.

Lead with the answer, not the question. The top of the dashboard should show the five to seven metrics that tell the story of the period. Is the business on track? Where is it off track? What changed since last period? The reader should absorb the headline in under ten seconds. Everything below that is supporting detail.

Group by decision, not by data source. Most dashboards are organised by where the data comes from: revenue from the billing system, costs from the GL, headcount from the HRIS. That organisation makes sense to the finance team that built it and no sense to the CFO. Group metrics by the decision they support: commercial performance, cost management, cash generation, forward outlook.

Use comparators, not absolutes. Revenue of ₹4.2 Cr is not information. Revenue of ₹4.2 Cr against a budget of ₹4.5 Cr, a prior year of ₹3.8 Cr, and a rolling forecast of ₹4.3 Cr is information. Every metric should have at least two comparators: plan and prior period.
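
This rule can be enforced mechanically rather than by review. A sketch of a renderer that refuses a number without its comparators (names and formatting are illustrative; amounts in ₹ Cr from the example above):

    def metric_line(name, actual, budget, prior_year, forecast=None):
        """Render a metric only if both mandatory comparators are present."""
        if budget is None or prior_year is None:
            raise ValueError(f"{name}: every metric needs plan and prior period")
        parts = [
            f"{name}: {actual:.1f}",
            f"vs budget {budget:.1f} ({actual - budget:+.1f})",
            f"vs PY {prior_year:.1f} ({actual - prior_year:+.1f})",
        ]
        if forecast is not None:
            parts.append(f"vs forecast {forecast:.1f}")
        return " | ".join(parts)

    print(metric_line("Revenue (Cr)", 4.2, 4.5, 3.8, forecast=4.3))
    # Revenue (Cr): 4.2 | vs budget 4.5 (-0.3) | vs PY 3.8 (+0.4) | vs forecast 4.3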

Reserve space for the narrative. A dashboard without commentary is a chart pack. The most effective dashboards I have built include a three-to-five-line text block at the top: what moved, why, and what the finance team recommends. This is the same CFO commentary discipline from the variance analysis article, applied to the dashboard surface. The commentary forces the FP&A team to have a point of view about the numbers that follow.


Connecting the Dashboard to the Operating Rhythm

A dashboard that exists outside the operating rhythm is a reporting tool. A dashboard that is embedded in the rhythm becomes a management tool.

The connection points are specific.

Weekly leadership meeting. The CFO dashboard is the agenda. Not a slide deck built from the dashboard, not a reformatted version of the dashboard. The dashboard itself. If the weekly meeting requires a separate pack, the dashboard has failed its purpose.

Monthly actuals review. The variance bridge on the dashboard should be the same bridge discussed in the monthly review. When the dashboard and the monthly pack tell different stories (because they pull from different sources or use different definitions), the organisation loses trust in both.
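
One way to guarantee the dashboard and the monthly pack tell the same story is to compute the bridge once, from one definition. A sketch of a price-volume-mix decomposition under one common convention (price effect at current volumes, volume effect at the prior average price, mix as the residual); there are several conventions, and which one you pick matters less than using the same one everywhere:

    def pvm_bridge(prior, current):
        """prior/current: dicts mapping product -> (price, volume).
        Assumes the same products appear in both periods."""
        v0_total = sum(v for _, v in prior.values())
        v1_total = sum(v for _, v in current.values())
        avg_p0 = sum(p * v for p, v in prior.values()) / v0_total

        price = sum(current[k][1] * (current[k][0] - prior[k][0]) for k in current)
        volume = (v1_total - v0_total) * avg_p0
        mix = sum(prior[k][0] * (current[k][1] - prior[k][1]) for k in current) - volume

        total = (sum(p * v for p, v in current.values())
                 - sum(p * v for p, v in prior.values()))
        assert abs(price + volume + mix - total) < 1e-6  # the bridge must tie out
        return {"price": price, "volume": volume, "mix": mix, "total": total}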

Forecast update cycle. When the rolling forecast is updated, the dashboard should reflect the revision automatically. This is where the dashboard connects to the rolling forecast infrastructure and the driver-based model. The dashboard is the surface layer. The driver model and rolling forecast are the engine underneath.

Quarterly board preparation. The board dashboard should require zero additional preparation beyond the quarterly data refresh. If the finance team rebuilds the board view every quarter, the dashboard is not the single source of truth it was designed to be.


What to Exclude (and Why Exclusion Is the Hard Part)

The hardest conversation in dashboard design is not about what to include. It is about what to leave out.

Every function will advocate for its metrics. Sales wants pipeline data. Product wants usage metrics. People wants engagement scores. Each is genuinely useful in its context. None necessarily belongs on the CFO or board dashboard.

The exclusion criteria mirror the selection framework. If a metric is not decision-linked for this audience, it does not belong. If it does not have an owner at this level, it does not belong. If it cannot trigger an action within the dashboard’s cadence, it does not belong.

The practical move is to maintain three tiers. Tier 1 is the dashboard: the eight to twelve KPIs that earn a place on the primary view. Tier 2 is the drill-down or appendix: metrics that provide context when a Tier 1 metric signals a problem. Tier 3 is the deep-dive pack: the detailed analysis that lives in a separate document and gets pulled when a specific investigation is warranted.

The variance analysis article covers the distinction between what belongs in a management dashboard and what belongs in a deep-dive pack. The monthly actuals review is where Tier 1 metrics get discussed. The deep-dive pack is where Tier 3 analysis gets done when a Tier 1 metric triggers a question. Conflating the two is how a dashboard ends up with 47 metrics.


Building the Dashboard: A Practical Sequence

If I were building a finance dashboard from scratch for a mid-stage business, this is the sequence I would follow.

Step 1: Interview the audience. Not “what metrics do you want?” but “what decisions do you make every week, and what information do you need to make them well?” When I ran this exercise with a leadership team, the CEO told me he only looked at three numbers each week. The dashboard the team had built showed thirty. The answers are almost always simpler than the dashboard the team was planning to build.

Step 2: Map metrics to decisions. For each decision, identify the one or two metrics that directly inform it. Apply the four-filter framework. Cut anything that does not pass.

Step 3: Define comparators and thresholds. For every surviving metric, define what “good” looks like, what triggers a review, and the relevant comparators (budget, prior year, rolling forecast).
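
The output of this step is worth capturing as data rather than prose, so the thresholds survive team turnover and the dashboard renders from the same definitions the operating rhythm reads. A sketch of one entry (every field here is my assumption about what such a definition could hold):

    DSO_DEFINITION = {
        "metric": "DSO",
        "good": "within contracted payment terms",
        "review_trigger": "terms + 10 days for two consecutive weeks",
        "comparators": ["budget", "prior_year", "rolling_forecast"],
        "owner": "Head of Revenue Operations",
        "cadence": "weekly",
    }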

Step 4: Design the layout. Lead with the summary. Group by decision. Include comparators. Reserve space for the narrative block.

Step 5: Build the data pipeline. This is deliberately not Step 1. The data infrastructure serves the dashboard design. If you start with the data, you build a dashboard that shows what the data can display rather than what the audience needs to see.

Step 6: Run a pilot cycle. Use the dashboard for one full operating period. Watch which metrics the audience references in conversation and which they ignore. After one cycle, cut the ignored metrics.


The finance dashboard is not a reporting exercise. It is a design problem, and the design constraint is not the data or the tooling. It is the discipline to put fewer things on the page and make each of them connect to a decision that someone in the room needs to make.

If you are building or rebuilding a finance dashboard and want to think through the metric selection or the audience framework, I would genuinely enjoy that conversation. Let’s connect.
