Risk Mitigation in the Age of AI
The question I keep hearing from finance leaders is not “Should we use AI?” but “How do we use it without losing control?” That is the right question, and answering it is a finance leadership problem, not a technology one.
AI is already inside most finance functions, even where nobody has made a formal decision to adopt it. The forecasting tool uses machine learning to weight historical patterns. The accounts payable platform flags anomalous invoices using a classification model. The FP&A team’s variance commentary tool suggests explanations based on prior-period narratives. These are not pilot programmes. They are production systems making decisions (or informing decisions) that flow into the financial statements, the board pack, and the capital allocation process.
The CFO’s job is not to understand how the models work at a technical level. It is to understand what happens when they are wrong, who is accountable, and what governance structure ensures that the organisation knows the difference between a decision made by a human using AI and a decision made by AI with a human’s name on it.
Model Risk: The Finance Leader’s Blind Spot
Model risk in finance is not new. Every DCF, every credit scoring framework, every three-statement model carries model risk. Finance professionals have managed it for decades through review processes, assumption documentation, and the professional judgement of the person who built the model.
AI models introduce a different kind of model risk. The traditional financial model is transparent: you can trace every output back to an input and an assumption. Many AI models do not work this way. A machine learning model that predicts customer churn or flags potentially misstated transactions may produce accurate outputs without providing an interpretable explanation of why. The model’s “reasoning” is encoded in weights and parameters that do not map to business concepts a human can interrogate.
This matters for a specific reason: when the model is wrong (and all models are wrong eventually) the organisation needs to understand why, how to correct it, and how to prevent the same failure from recurring. With a traditional model, the post-mortem is straightforward. With a black-box AI model, the post-mortem may produce no actionable insight. The model was wrong, and nobody can explain why.
Model explainability is not a technical nice-to-have. It is a governance requirement. A CFO who takes accountability for financial reporting integrity cannot delegate that accountability to a model that cannot explain itself.
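To make “explain itself” concrete: for some model classes, feature-attribution tooling can recover a per-decision rationale. The sketch below is illustrative only; it assumes a tree-based classifier and the open-source shap library, and the invoice features, labels, and threshold are invented for the example.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented features for illustration: invoice amount (standardised),
# days past due, and vendor tenure in months.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
# Synthetic "anomalous invoice" label, driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each flagged invoice's score to individual features,
# turning "the model said so" into "amount and days past due drove the flag".
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])
print(attributions)
```

The tooling is beside the point; the governance principle is not. If no attribution of this kind is possible for a model whose outputs feed the financial statements, that absence is itself a finding.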
Audit Trail Integrity: The Non-Negotiable
Every number in the financial statements traces back through a chain of transactions, adjustments, and judgements to a source document. That chain is what the auditor tests, what the regulator examines, and what gives the board confidence that the numbers reflect economic reality.
AI introduces a specific threat to this chain: the automated adjustment. When an AI system reclassifies a transaction, adjusts a provision estimate, or recommends an accrual, it creates a link in the audit trail that may not have a human decision behind it. The adjustment exists, but the rationale may remain opaque.
The finance team needs to log every AI-driven adjustment with the same rigour as a manual journal entry: what changed, what triggered the change, what model produced the recommendation, and who approved it. “The system did it” is not an acceptable audit trail entry. “The system recommended it, and the Finance Controller reviewed and approved it based on the following criteria” gives you a defensible position.
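What does that look like in practice? Here is a minimal sketch of a log record carrying the four elements above, written in Python; every field name is illustrative rather than a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIAdjustmentRecord:
    """One AI-driven adjustment, logged with the rigour of a manual journal entry."""
    entry_id: str
    what_changed: str        # the adjustment itself
    trigger: str             # what prompted the recommendation
    model_id: str            # which model, and which version, produced it
    recommended_at: datetime
    approved_by: str         # the named human who reviewed it
    approval_criteria: str   # the basis on which it was approved
    approved_at: datetime

# Illustrative entry; every value here is invented, not a standard.
record = AIAdjustmentRecord(
    entry_id="JE-2025-00412",
    what_changed="Reclassified invoice 4471 from opex to capex",
    trigger="Classification model flagged asset-like line items",
    model_id="ap-classifier v2.3",
    recommended_at=datetime(2025, 3, 14, 9, 30, tzinfo=timezone.utc),
    approved_by="Finance Controller",
    approval_criteria="Invoice terms reviewed against capitalisation policy",
    approved_at=datetime(2025, 3, 14, 11, 5, tzinfo=timezone.utc),
)
```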
The SOX framework provides a useful mental model here. When an AI system influences financial reporting decisions, it becomes part of the control environment. The same questions apply: Is the control designed to prevent or detect material misstatement? Is it operating effectively? Is there evidence of review and approval? A CFO who builds AI governance on top of the existing internal controls framework (rather than treating it as a separate technology initiative) will find that the foundation is already there.
When to Automate and When to Keep Human Judgement
The framework I find most useful separates finance tasks along two dimensions: the cost of being wrong and the quality of the available data.
High data quality, low cost of error: Automate aggressively. Transaction matching, bank reconciliation, invoice processing. These are high-volume processes where the consequence of an individual error is small and human review shifts to exceptions only.
High data quality, high cost of error: Automate the preparation, keep human judgement on the decision. Revenue recognition under Ind AS 115 or ASC 606 is a good example. An AI model can identify contract terms and suggest the performance obligation allocation, but the final judgement about whether to recognise revenue over time or at a point in time should sit with a human who understands the commercial context and the standard’s intent.
Low data quality, high cost of error: Do not automate. Scenario planning, strategic capital allocation, restructuring decisions. AI can support the analysis (running sensitivity scenarios faster, surfacing historical precedents) but the decision itself should be unambiguously human.
Low data quality, low cost of error: Experiment freely. Draft commentary, preliminary data exploration, internal process improvements. This is where the finance team builds intuition about what AI does well and where it struggles.
This framework matches the level of human oversight to the level of risk, which is the same principle that underpins internal controls design.
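Stated as code, the framework reduces to a small lookup. The sketch below is my paraphrase of the four quadrants above; the labels and wording are assumptions, not a standard taxonomy.

```python
# The four quadrants, expressed as an explicit oversight rule.
# Keys: (data_quality, cost_of_error); values: the oversight posture.
OVERSIGHT = {
    ("high", "low"):  "Automate aggressively; humans review exceptions only",
    ("high", "high"): "Automate preparation; a human makes the final judgement",
    ("low",  "high"): "Do not automate; AI supports analysis, humans decide",
    ("low",  "low"):  "Experiment freely; build intuition at low stakes",
}

def oversight_for(data_quality: str, cost_of_error: str) -> str:
    """Return the oversight posture for a finance task's risk profile."""
    return OVERSIGHT[(data_quality, cost_of_error)]

# e.g. revenue recognition: good contract data, high cost of error.
print(oversight_for("high", "high"))
```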
Governance That Works
The AI governance frameworks that actually work share three characteristics.
Embedded, not parallel. AI risk is a subset of operational risk. It belongs in the enterprise risk register alongside cyber risk, process risk, and people risk. A separate AI governance structure will become an orphan within a year.
Accountable, not collective. Every AI model that influences a financial reporting outcome needs an owner. Not a committee. A named individual who reviews outputs on a defined cadence and has the authority to override or decommission the model. This mirrors how the finance function already manages key judgements: someone signs off, and that signature means something.
Simple and followed, not elaborate and ignored. The best AI governance framework I have seen was a single-page decision tree. Does the model influence financial reporting? If yes, it requires Finance Controller sign-off, quarterly performance review, and annual independent validation. If no, it requires function-head approval and annual review. That is the entire framework, and people actually use it.
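For illustration, that decision tree really does fit in a few lines. This is my rendering of the rule as described, with invented function and variable names:

```python
def governance_requirements(influences_financial_reporting: bool) -> list[str]:
    """The single-page decision tree described above, as a function."""
    if influences_financial_reporting:
        return [
            "Finance Controller sign-off",
            "Quarterly performance review",
            "Annual independent validation",
        ]
    return [
        "Function-head approval",
        "Annual review",
    ]

# A churn model feeding the revenue forecast influences reporting:
print(governance_requirements(influences_financial_reporting=True))
```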
The CFO’s Real Question
The technology will keep advancing, and the tools will keep changing. None of that alters the CFO’s fundamental obligation: to ensure that financial reporting is reliable, that governance structures are sound, and that when a number appears in the board pack, someone can explain where it came from and why it is right.
AI does not change that obligation. It changes the toolkit, and it changes the threat surface. The hard part is not the technology. The hard part is the discipline to apply the same standards of accountability, documentation, and human judgement to a new category of tools. That discipline is what separates a finance function that uses AI effectively from one that uses it recklessly.
I am working through these questions myself as AI tools become more embedded in FP&A and reporting workflows. If you are thinking about AI governance, model risk, or where to draw the line between automation and human judgement, I would genuinely like to hear how you are approaching it. Let’s connect.