Internal Controls Framework

I have designed internal controls that survived their first audit without a single finding, and I have inherited control environments so fragile that a walkthrough collapsed within the first fifteen minutes. The difference was never the documentation. It was whether someone had designed the controls with a clear understanding of what could actually go wrong.

That distinction (designing controls versus documenting them) is the thread running through everything I cover here. The COSO framework gives you the architecture. ICFR gives you the regulatory mandate. But the controls that hold up under audit scrutiny are the ones built by people who understand both the standard and the operational reality beneath it.


COSO: The Architecture Behind Every Control Environment

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) Internal Control Integrated Framework is the global standard for how organisations think about internal controls. If you have worked in audit, you have encountered it. If you have worked in finance leadership, you have operated within it whether or not you called it by name.

The framework has five components.

Control Environment. This is the foundation. It covers the tone at the top, the organisation’s commitment to integrity and ethical values, the board’s oversight role, management’s operating style, and the competence expectations for people involved in financial reporting. I think of the control environment as the “will it actually work” layer. You can design a perfect set of controls on paper, but if the organisation’s culture does not support accountability and follow-through, those controls will fail in practice.

Risk Assessment. How the organisation identifies and evaluates risks that could prevent it from achieving its financial reporting objectives. This includes assessing the likelihood and impact of material misstatement across each significant account and disclosure. The risk assessment drives everything else because the controls you design should map directly to the risks you have identified.

Control Activities. These are the specific policies and procedures that address the risks identified in the assessment. Approvals, authorisations, reconciliations, segregation of duties, physical controls, and IT general controls all live here. Most people, when they think about “internal controls,” are thinking about this component.

Information and Communication. How relevant, high-quality information flows through the organisation to support the control environment. This includes the financial reporting systems, the chart of accounts, the data that feeds the controls, and the channels through which control failures get reported upward. I have seen organisations with strong control activities fail because the information flowing into those controls was incomplete or unreliable.

Monitoring Activities. How the organisation evaluates whether its controls are operating effectively over time. This includes ongoing monitoring (management review, automated exception reporting) and separate evaluations (internal audit assessments, external audit testing). Without monitoring, you are trusting that what you designed on day one is still working on day three hundred.

Within these five components, COSO defines 17 principles. Each principle maps to a specific aspect of the control framework, and an effective internal control system requires all 17 principles to be present and functioning. When I assess a control environment, I use these principles as the diagnostic lens because a gap in any one of them creates a vulnerability that audit testing will eventually find.


From COSO to ICFR: Where the Regulatory Mandate Sits

COSO is a framework. Internal Control over Financial Reporting (ICFR, referenced as Internal Financial Controls over Financial Reporting, or IFCoFR, under Indian regulation) is a regulatory requirement that uses the COSO framework as its foundation. The distinction matters because COSO tells you how to think about controls while ICFR tells you what the law requires you to demonstrate.

Under SOX Section 404 in the United States, public companies must include a management assessment of ICFR effectiveness in their annual report, and the external auditor must attest to that assessment. The auditor is not just testing whether the controls exist. They are testing whether the controls operated effectively throughout the reporting period.

In India, Section 143(3)(i) of the Companies Act, 2013 requires the statutory auditor to report on whether the company has adequate internal financial controls with reference to financial statements and whether those controls are operating effectively. This applies well beyond listed entities: companies audited under the Companies Act generally fall within it, with only certain small private companies exempted by notification. The scope is broad.

The practical result is the same in both jurisdictions: the auditor is looking at your control environment through the COSO lens and testing whether your controls actually work. I covered the engagement letter implications of this reporting requirement in The Audit Engagement Letter: The Document That Defines Everything, where the IFCoFR reporting obligation under Section 143(3)(i) gets baked into the scope from day one.


The Controls That Get the Heaviest Audit Scrutiny

Not all controls receive equal attention. Auditors allocate their testing effort based on risk, and certain areas consistently attract the most scrutiny because they involve the highest risk of material misstatement.

Revenue recognition controls. Revenue is the most scrutinised line item in any audit, and for good reason. The risk of premature or fictitious revenue recognition ranks among the top fraud risks in every audit risk assessment I have ever seen. Auditors test the controls around when revenue gets recognised, how the five-step model under Ind AS 115 or ASC 606 gets applied to contracts, and how management’s judgements about performance obligations get validated. I wrote about the Ind AS 115 and IFRS 15 framework in The Free iPhone Illusion: Revenue Recognition under Ind AS 115 and IFRS 15, and the controls I describe here are what make that standard operationally enforceable.

When I design revenue recognition controls, I focus on three points. First, the contract review process that determines whether a contract meets the five criteria for revenue recognition. Second, the segregation between the team that negotiates deals and the team that determines the accounting treatment. Third, the management review control that validates the allocation of transaction price to performance obligations, especially when variable consideration or significant financing components are involved.

Journal entry controls. Journal entries are the most direct mechanism for manipulating financial statements, which is why auditing standards (SA 240, AS 2401) explicitly require auditors to test journal entries for fraud risk. The controls that matter here are the authorisation requirements (who can post journals, with what approval), the automated controls that flag unusual entries (entries posted outside business hours, entries posted by unusual users, entries to unusual account combinations), and the management review of significant manual journals.
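To make the automated flags concrete, here is a minimal sketch of the exception rules described above. The field names, authorised-preparer list, and "unusual" account pairs are illustrative assumptions, not fields from any specific ERP; a real implementation would pull these from the ledger system's configuration.

```python
from datetime import datetime, time

# Illustrative parameters: business hours, authorised posters, and account
# combinations that should never appear together without scrutiny.
BUSINESS_HOURS = (time(8, 0), time(19, 0))
AUTHORISED_PREPARERS = {"a.sharma", "r.iyer", "gl.batch"}
UNUSUAL_ACCOUNT_PAIRS = [("revenue", "cash"), ("revenue", "intercompany")]

def flag_journal(entry: dict) -> list:
    """Return the exception flags raised by one journal entry."""
    flags = []
    posted = datetime.fromisoformat(entry["posted_at"])
    # Rule 1: posting outside business hours
    if not (BUSINESS_HOURS[0] <= posted.time() <= BUSINESS_HOURS[1]):
        flags.append("posted outside business hours")
    # Rule 2: posting by an unusual user
    if entry["preparer"] not in AUTHORISED_PREPARERS:
        flags.append("unusual preparer")
    # Rule 3: unusual account combinations within the same entry
    accounts = set(entry["accounts"])
    for a, b in UNUSUAL_ACCOUNT_PAIRS:
        if {a, b} <= accounts:
            flags.append(f"unusual account combination: {a}/{b}")
    return flags
```

Each flagged entry would then route to the management review of significant manual journals rather than passing through the standard workflow.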

I recommend building a journal entry risk matrix that categorises journals by risk level. Standard recurring entries (depreciation, amortisation, accruals) carry lower risk and can be tested on a sample basis. Manual top-side entries, late entries posted after the trial balance is finalised, and entries involving estimates or subjective judgements carry higher risk and should have individual review and approval by someone senior enough to understand the business context.
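The risk matrix itself can be sketched as a simple tiering function. The attribute names here are assumptions for illustration; the point is that the tiering logic is explicit and testable rather than living in someone's head.

```python
def risk_tier(entry: dict) -> str:
    """Map a journal entry to the review intensity it warrants."""
    # Top-side entries, late entries after the trial balance closes, and
    # entries involving estimates or subjective judgement: highest risk.
    if (entry.get("is_topside") or entry.get("posted_after_tb_close")
            or entry.get("involves_estimate")):
        return "high"    # individual review and approval by a senior approver
    # Standard recurring entries (depreciation, amortisation, accruals)
    if entry.get("is_recurring"):
        return "low"     # sample-based testing is sufficient
    return "medium"      # standard approval workflow
```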

Management estimates. Estimates are where judgement lives in financial statements, and where the most significant audit debates happen. Allowance for doubtful debts, impairment assessments, fair value measurements, warranty provisions, and useful life determinations all sit here, along with any other area where management exercises significant judgement in determining a number that appears in the financial statements. Auditors test the controls around how estimates get developed, what data feeds into them, how the assumptions get validated, and whether there is a retrospective review process that compares prior estimates to actual outcomes.

The control I find most effective for estimates is a formal assumptions register. Every significant estimate in the financial statements has a documented set of assumptions, a stated rationale for each assumption, and a comparison of the prior period’s estimate against the actual outcome. When the auditor asks “how did you arrive at this number?” you can hand them a document that shows the logic, the inputs, and the track record. That is the difference between an estimate that gets challenged and one that gets accepted.
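One minimal way to structure a register entry, assuming the fields described above. The field names are illustrative; what matters is that every significant estimate carries its assumptions, the rationale for each, and its prior-versus-actual track record in one retrievable record.

```python
from dataclasses import dataclass

@dataclass
class EstimateRecord:
    estimate_name: str      # e.g. "allowance for doubtful debts"
    assumptions: dict       # assumption -> stated rationale
    prior_estimate: float   # last period's estimate
    actual_outcome: float   # what actually happened

    def track_record(self) -> str:
        """One line the reviewer (or auditor) can read at a glance."""
        variance = self.prior_estimate - self.actual_outcome
        return (f"{self.estimate_name}: prior {self.prior_estimate}, "
                f"actual {self.actual_outcome}, variance {variance:+.1f}")
```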


Designing Controls vs. Documenting Controls

This is where I see the most confusion, and it is where control environments most often fail.

Documentation is not design. I have reviewed control environments where the documentation was immaculate (process flows, narratives, control matrices, RACI charts) but the controls themselves were poorly designed. The documentation described what should happen. It did not reflect what actually happens. And it did not address the specific risks that the controls were supposed to mitigate.

When I design a control, I start with the assertion. What could go wrong in this account balance or transaction class? Could revenue be overstated (existence/occurrence)? Could expenses be recorded in the wrong period (cutoff)? Could an asset be impaired but not written down (valuation)? The assertion drives the control objective, and the control objective drives the control design.

A well-designed control has five characteristics. It addresses a specific risk. It operates at a frequency that matches the risk (a daily reconciliation for a high-volume account, a monthly review for a lower-risk balance). It has a clearly defined operator (a named role, not “the team”). It produces evidence of operation (a sign-off, an exception report, a documented review). And it has defined criteria for what constitutes an exception and how exceptions get investigated and resolved.
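The five characteristics can double as a design checklist. This is a sketch of that idea, with illustrative field names; a blank in any field means the control is not yet fully designed.

```python
from dataclasses import dataclass, fields

@dataclass
class ControlDesign:
    risk_addressed: str       # the specific risk, tied to an assertion
    frequency: str            # matched to the risk: "daily", "monthly", ...
    operator: str             # a named role, not "the team"
    evidence: str             # sign-off, exception report, documented review
    exception_handling: str   # what counts as an exception, and the follow-up

    def is_complete(self) -> bool:
        """True only when every characteristic is actually specified."""
        return all(getattr(self, f.name).strip() for f in fields(self))
```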

Documentation comes after design. It captures what was designed and why. The narrative should explain the risk the control addresses, the control objective, the control activity, the frequency, the operator, the evidence, and the exception handling process. When I build control documentation, I write it so that someone who has never seen the process can understand not just what the control does but why it exists. That matters because auditor turnover is real, and the person testing your control this year may not be the person who tested it last year.


Walkthroughs: Where Controls Meet Reality

A walkthrough is the audit procedure where the auditor traces a single transaction through the entire process from initiation to recording in the financial statements. It tests whether the process described in the documentation actually matches what happens in practice. I have sat on both sides of walkthroughs, and the experience of leading the walkthrough from the design side has fundamentally shaped how I build controls.

The walkthrough will expose every gap between documentation and reality. If the control narrative says “the finance manager reviews and approves all journal entries above 5 lakhs” but the finance manager approves them in batch without reviewing the supporting detail, the walkthrough will surface that. If the documentation says “the system performs a three-way match for all purchase orders” but the system allows exceptions that bypass the match without documented approval, the walkthrough will find it.

I prepare for walkthroughs by running my own before the auditors arrive. I pick a sample transaction and trace it through the entire process, looking for three things. First, does the actual process match the documented process? Second, does the evidence of control operation exist for each step? Third, can I explain why each control exists and what risk it mitigates? If I cannot answer all three, I know the auditor will find the same gaps, and I would rather fix them before the walkthrough than explain them during it.

The walkthrough is also a design tool, not just a testing tool. Every walkthrough I conduct surfaces small process changes that have drifted from the original design. Staff turnover, system upgrades, and changes in transaction volume all create drift between the control as designed and the control as operated. Running walkthroughs periodically (not just when the auditors are coming) keeps the control environment honest.


Testing Approaches: What Auditors Look For

Auditors use three main approaches to test controls, and understanding them helps you design controls that hold up under each.

Inquiry and observation. The auditor asks you to describe how the control works and then observes it in operation. This is the weakest form of evidence on its own, which is why auditors rarely rely on it alone. But it is where the audit starts, and a control operator who cannot clearly explain what they do, why they do it, and what they look for when reviewing is a red flag before testing even begins.

Inspection of evidence. The auditor examines the documented evidence that the control operated. Sign-offs on reconciliations, approval workflows on journal entries, exception reports with documented resolution, and management review notes on significant estimates. This is where the “evidence of operation” element of control design pays off. If the control produces clear, retrievable evidence every time it operates, the auditor can test it efficiently. If the evidence is inconsistent, incomplete, or requires explanation, the testing takes longer and the risk of a finding increases.

Reperformance. The auditor independently performs the control procedure to determine whether they reach the same conclusion as the control operator. If the reconciliation control says “all reconciling items above 1 lakh are investigated,” the auditor will pick a sample of reconciling items above 1 lakh and verify that they were investigated, that the investigation was appropriate, and that the resolution was valid. This is the most rigorous test, and it is applied to the highest-risk controls.
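The reperformance test on that reconciliation control amounts to a check you can run yourself before the auditor does. This sketch assumes each reconciling item carries illustrative `investigated` and `resolution` fields; anything above the threshold missing either one would fail the auditor's reperformance.

```python
THRESHOLD = 100_000  # 1 lakh, per the control's stated criterion

def reperformance_failures(items: list) -> list:
    """Items above the threshold lacking an investigation or a resolution."""
    return [i for i in items
            if abs(i["amount"]) > THRESHOLD
            and not (i.get("investigated") and i.get("resolution"))]
```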

The practical takeaway: design your controls to produce evidence that survives reperformance. If the auditor can independently verify that the control reached the correct conclusion based on the evidence available, the control passes. If the auditor cannot reproduce the logic because the evidence is incomplete or the criteria are vague, the control is at risk.


Common Control Deficiencies and How I Address Them

After designing and reviewing control environments across multiple engagements, I see the same deficiencies repeatedly. Here are the ones that generate the most audit findings.

Inadequate segregation of duties. The same person who initiates a transaction also approves it, or the person who maintains the master data also processes payments. In smaller organisations where headcount does not allow full segregation, I design compensating controls: independent management reviews at a frequency that matches the risk, system-generated exception reports reviewed by someone outside the process, and periodic rotation of responsibilities.
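A segregation-of-duties check is mechanical once the conflicting role pairs are written down. The pairs and role names below are illustrative assumptions; in practice they would come from the organisation's own access model.

```python
# Role pairs that should never sit with one person (illustrative).
CONFLICTING_PAIRS = [
    {"initiate_payment", "approve_payment"},
    {"maintain_vendor_master", "process_payment"},
]

def sod_violations(user_roles: dict) -> dict:
    """Map each user to the conflicting role pairs they currently hold."""
    return {user: [sorted(pair) for pair in CONFLICTING_PAIRS if pair <= roles]
            for user, roles in user_roles.items()
            if any(pair <= roles for pair in CONFLICTING_PAIRS)}
```

Running this against the current access listing each period is itself a monitoring control, and its output is exactly the evidence an auditor will ask for.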

Imprecise management review controls. “The CFO reviews the monthly financials” is not a control. It is a description of something that happens. A control specifies what the reviewer looks for (variances above a defined threshold, trend breaks, account balance movements beyond a defined range), what evidence the review produces (documented comments, sign-off with specific observations), and what happens when the review identifies an exception. I have seen management review controls fail testing not because the review did not happen but because the evidence did not demonstrate that the reviewer actually evaluated the information rather than simply approving it.

Incomplete IT general controls. Application controls depend on the IT environment they operate in. If the system that enforces the three-way match on purchase orders does not have adequate access controls, change management procedures, and data integrity safeguards, the application control is only as strong as the weakest link in the IT infrastructure. I work with IT teams to ensure that the ITGC environment (access management, change management, computer operations, program development) supports the application controls that the business relies on.

Lack of retrospective review for estimates. Estimates are inherently uncertain, but a control environment that never compares prior estimates to actual outcomes is missing a fundamental feedback loop. I build retrospective reviews into every estimate control. At each reporting period, we compare the prior period’s estimate to the actual result, document the variance, and assess whether the estimation methodology needs to change. This creates accountability, improves accuracy over time, and gives the auditor confidence that management’s estimates are grounded in observable data rather than optimism.
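The feedback loop itself is a few lines of logic. The 10% tolerance here is an assumption chosen for the sketch; the right threshold depends on the estimate and its materiality.

```python
TOLERANCE_PCT = 10.0  # illustrative variance tolerance

def retrospective_review(prior_estimate: float, actual: float) -> dict:
    """Document the variance and flag whether the methodology needs a rethink."""
    variance_pct = abs(prior_estimate - actual) / abs(actual) * 100
    return {"variance_pct": round(variance_pct, 1),
            "reassess_methodology": variance_pct > TOLERANCE_PCT}
```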


Designing Controls That Scale

The best control environments I have built share a common trait: they were designed to accommodate the business the organisation was becoming, not just the business it was at the time. A control that works at a hundred transactions per month will break at ten thousand. A reconciliation process that one person can manage becomes a bottleneck when the volume doubles. A manual review that catches exceptions at low volume becomes a rubber stamp at high volume because the reviewer cannot give adequate attention to every item.

When I design controls, I think about the failure point. At what volume, what complexity, or what pace of change does this control stop working? I build in triggers (volume thresholds, exception rates, cycle time metrics) that signal when a control needs to be redesigned. I wrote about how the same scaling principle applies to finance processes more broadly in Audit as a Leadership Tool, where I discussed how audit findings can signal when a business has outgrown its processes.
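Those triggers can be stated explicitly rather than left to intuition. The thresholds below are assumptions for the sketch; the point is that each control carries named, measurable redesign signals.

```python
# Illustrative redesign triggers: breach any of these and the control
# goes back to the drawing board.
TRIGGERS = {
    "monthly_volume": 10_000,   # transactions per month
    "exception_rate": 0.05,     # share of items flagged as exceptions
    "cycle_time_days": 5,       # days to complete the control each period
}

def redesign_signals(metrics: dict) -> list:
    """Return the trigger names whose thresholds current metrics breach."""
    return [name for name, limit in TRIGGERS.items()
            if metrics.get(name, 0) > limit]
```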

The organisations that maintain clean ICFR reports year after year are not the ones with the most documentation. They are the ones where someone understood the operational risks, designed controls that address those risks precisely, and built the monitoring mechanisms to detect when those controls need to evolve. That is design work. It requires understanding the business at the transaction level and the control framework at the architectural level, and building the bridge between them.


If you are designing or redesigning your internal control environment, or if you are preparing for your first ICFR assessment under SOX or the Companies Act, I would be glad to compare approaches. The frameworks are well documented but the implementation decisions are where the real work lives. Let’s connect.

Series Insight

Part of my series on Audit & Governance

Financial statement audit, internal controls, and governance written from the inside. The audit foundation that sharpens FP&A assumptions, controls design, and judgement under uncertainty.

