The routing decision hiding inside every KYC process
Every financial institution that onboards customers makes the same set of decisions hundreds or thousands of times a day. Is this customer low-risk, medium-risk, or high-risk? Which verification level applies? What documents are required? Does this case need enhanced due diligence, or can it go through simplified checks? Should it be routed to an analyst, or can it be approved automatically?
These are routing decisions. And they are governed by rules: FATF's risk-based approach, national AML/CFT regulations, internal risk appetite frameworks, and jurisdiction-specific requirements that can vary from country to country and sometimes from product to product.
FATF's Recommendations (adopted February 2012, updated regularly) require assessing ML/TF risk and applying a risk-based approach (Recommendation 1). In February 2025, FATF updated Recommendation 1 and its Interpretive Note, emphasising proportionality and simplified measures under the risk-based approach. The principle is straightforward: low-risk customers go through simplified due diligence, high-risk customers go through enhanced due diligence, and the institution is responsible for defining and justifying the tiers in between.
In practice, this means every KYC process is a decision tree. The inputs are customer data, jurisdiction, product type, transaction patterns, and screening results. The outputs are routing decisions: which verification path, which level of scrutiny, which documents, which approval flow. The logic connecting inputs to outputs is a set of rules.
Where the rules actually live
In most financial institutions, these rules don't live in one place. They're spread across multiple systems, codebases, and sometimes spreadsheets.
The risk scoring model might be in one system. The document requirements might be configured in another. The jurisdiction-specific overrides might be hardcoded into the onboarding platform. The escalation thresholds might exist in a policy document that a compliance officer references manually. The PEP (Politically Exposed Person) screening rules might follow one logic in the initial onboarding and a different logic during periodic reviews.
Fenergo's Financial Crime Industry Trends 2025 research (survey conducted in August 2025) reports average annual spend on AML/KYC operations of $72.9 million per firm, with country-level breakdowns for the UK ($78.4M), US ($72.2M), and Singapore ($68.2M). Self-reported use of advanced AI tools in KYC/AML surged from 42% in 2024 to 82% in 2025, while automation of periodic KYC reviews averaged only about a third across respondents. The gap between AI adoption and actual process automation suggests that many institutions are adding AI capabilities on top of fragmented rule systems rather than addressing the rule layer itself.
Fenergo's KYC in 2022 survey (1,055 C-suite respondents) reports that two-thirds of respondents said a single KYC review costs between $1,501 and $3,500. In Fenergo's Financial Crime Industry Trends 2025 research, UK corporate banks report onboarding times averaging more than six weeks. Signicat/11:FS research reports that 63% of consumers in Europe have abandoned a financial application in the past year, citing lengthy processes and too much information required as key reasons.
These numbers aren't a technology problem. They're a rules problem. The rules are scattered, undocumented, inconsistent across channels, and expensive to change.
Why adding AI to broken rules doesn't help
The current industry conversation focuses heavily on applying AI to KYC: AI-powered document verification, AI-driven risk scoring, AI-based transaction monitoring. And AI is genuinely useful for specific tasks within KYC. Extracting data from identity documents. Matching faces to photos. Detecting anomalies in transaction patterns. Screening names against sanctions lists with fuzzy matching.
But these are extraction and classification tasks. They take unstructured or semi-structured input and produce structured output: a verified name, a risk score, a match/no-match result, a list of extracted fields from a passport.
The routing decision that follows is different. Given this risk score, this jurisdiction, this product type, and these screening results, which verification path does this customer take? That's not a question for a language model or a classification algorithm. That's a question for a decision table: a set of explicit, versioned, testable rules that map inputs to routing outcomes.
When the routing logic lives inside an AI model, or is scattered across multiple systems with no central definition, several things break.
Explainability breaks down. Auditors ask "why was this customer routed to simplified due diligence?" and the answer requires a developer to trace code paths across multiple services. Regulators expect the institution to demonstrate that its risk-based approach is consistently applied. In January 2026, Fenergo reported that penalties for AML/KYC, sanctions, and CDD failures totalled $3.8 billion globally in 2025, with enforcement activity shifting toward EMEA and APAC. An institution that cannot explain a routing decision is exposed to exactly this class of enforcement.
Rule changes are slow. When a jurisdiction updates its requirements, or when the institution adjusts its risk appetite, the change needs to propagate through every system that implements part of the routing logic. If the routing rules are distributed across code, configuration, and documentation, a single policy change can take weeks to implement and validate.
Testing is incomplete. Most institutions test their KYC technology (does the document scanner work? does the name screening return results?) but don't systematically test their routing logic (given this specific combination of risk factors, does the system route to the correct verification path?). The technology works; the rules are untested.
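What systematic routing tests look like can be sketched with a toy example: a single routing function and a table of factor combinations, each pinned to the verification path it must produce. The function, field names, and tiering logic here are all hypothetical, not any institution's actual rules.

```python
def route_tier(jurisdiction_risk: str, pep: bool, sanctions_hit: bool) -> str:
    """Toy routing rule: screening hits, PEPs, and high-risk
    jurisdictions always escalate to enhanced due diligence."""
    if sanctions_hit or pep or jurisdiction_risk == "high":
        return "enhanced"
    return "simplified" if jurisdiction_risk == "low" else "standard"

# Each case pins one combination of risk factors to its expected path.
CASES = [
    (("low",    False, False), "simplified"),
    (("low",    True,  False), "enhanced"),   # PEP overrides low jurisdiction risk
    (("medium", False, False), "standard"),
    (("high",   False, False), "enhanced"),
    (("medium", False, True),  "enhanced"),
]

for args, expected in CASES:
    assert route_tier(*args) == expected, (args, expected)
print("all routing cases pass")
```

The point is the shape of the test, not the logic: the assertions exercise the routing rules directly, independent of whether the document scanner or name screener works.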
What KYC routing as a managed rule set looks like
The alternative is treating KYC routing decisions as what they are: a set of explicit business rules that can be authored, versioned, tested, and audited independently of the technology that feeds them data.
A decision table for KYC routing might look like this in practice:
Inputs: customer type (individual/corporate), jurisdiction risk rating, product risk rating, PEP status, sanctions screening result, source of funds clarity.
Outputs: verification tier (simplified/standard/enhanced), required documents, approval path (auto/analyst/senior analyst), periodic review frequency.
Each row in the table is a rule. Each rule has a version, an author, a timestamp, and a test case. When the rule changes, the change is tracked. When a customer is routed, the system records which rule version was applied.
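A minimal sketch of such a table, with hypothetical rule IDs, field names, and outcomes: each rule carries its own version and author, matching is first-match-wins, and the routing result records which rule produced it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    rule_id: str
    version: str
    author: str
    conditions: dict   # field name -> required value
    outcome: dict      # verification tier, approval path, review frequency

# Illustrative rules only; a real table would be larger and externally managed.
RULES = [
    Rule("R-001", "1.0", "compliance",
         {"sanctions_hit": True},
         {"tier": "enhanced", "approval": "senior_analyst", "review_months": 6}),
    Rule("R-002", "2.1", "compliance",
         {"pep_status": True},
         {"tier": "enhanced", "approval": "analyst", "review_months": 12}),
    Rule("R-003", "1.3", "compliance",
         {"jurisdiction_risk": "low", "product_risk": "low",
          "pep_status": False, "sanctions_hit": False},
         {"tier": "simplified", "approval": "auto", "review_months": 36}),
]

FALLBACK = {"tier": "standard", "approval": "analyst", "review_months": 12}

def route(customer: dict) -> dict:
    """Return the outcome of the first matching rule, stamped with the
    rule ID and version so every routing decision is reconstructible."""
    for rule in RULES:
        if all(customer.get(k) == v for k, v in rule.conditions.items()):
            return {**rule.outcome, "matched_rule": rule.rule_id,
                    "rule_version": rule.version}
    return {**FALLBACK, "matched_rule": None, "rule_version": None}
```

For example, `route({"jurisdiction_risk": "low", "product_risk": "low", "pep_status": False, "sanctions_hit": False})` matches R-003 and routes to simplified due diligence with automatic approval, while any sanctions hit short-circuits to enhanced due diligence via R-001.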
This separation has practical consequences. When a regulator asks why a particular customer was routed to simplified due diligence, the answer is specific: "Rule 47, version 3.2, authored by compliance officer on date, matched conditions A, B, C, and produced routing outcome X. Here is the test suite that validates this rule. Here is the diff from the previous version."
When a jurisdiction changes its requirements, the compliance team updates the relevant rules in the decision table. The change is tested against existing cases. It deploys. The extraction layer (document verification, name screening, risk scoring) doesn't change because it doesn't contain routing logic.
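Testing a rule change against existing cases can be as simple as replaying historical inputs through both rule versions and diffing the outcomes. A sketch, with illustrative routing functions and a tiny hypothetical case history:

```python
def route_v1(c: dict) -> str:
    """Current rule: only PEP status drives escalation."""
    return "enhanced" if c["pep"] else "standard"

def route_v2(c: dict) -> str:
    """Proposed change: low-risk non-PEPs drop to simplified due diligence."""
    if c["pep"]:
        return "enhanced"
    return "simplified" if c["jurisdiction_risk"] == "low" else "standard"

# Recorded inputs from previously routed customers (illustrative).
HISTORY = [
    {"pep": True,  "jurisdiction_risk": "low"},
    {"pep": False, "jurisdiction_risk": "low"},
    {"pep": False, "jurisdiction_risk": "high"},
]

# Impact report: which historical customers would now route differently.
changed = [(c, route_v1(c), route_v2(c))
           for c in HISTORY if route_v1(c) != route_v2(c)]
for customer, old, new in changed:
    print(customer, old, "->", new)
```

The compliance team reviews the diff before deployment: only the low-risk non-PEP case moves, from standard to simplified, and everything else is confirmed unchanged.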
When the institution changes its document verification vendor, or upgrades its AI-powered screening tool, the routing rules don't change because they operate on structured outputs (risk scores, match results, extracted fields), not on the technology that produced those outputs.
The extraction-routing boundary
This maps directly to the Extraction Pattern described in the context of AI agent architectures: the AI handles unstructured input (reading documents, matching faces, screening names) and produces structured output. The rules handle routing decisions based on that structured output.
The boundary between them is a typed contract. The AI components produce fields with defined types and valid values (risk_score: integer 1-100, pep_status: boolean, jurisdiction_risk: low/medium/high). The routing rules consume those fields and produce routing outcomes. Each side can be tested, updated, and replaced independently.
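The typed contract can be made concrete as a validated data structure at the boundary. This sketch uses the field names from the text; the validation rules are illustrative, and a real system would likely use a schema library rather than hand-rolled checks.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class ScreeningResult:
    """Structured output of the extraction layer; the only thing
    the routing rules are allowed to consume."""
    risk_score: int                                      # integer 1-100
    pep_status: bool
    jurisdiction_risk: Literal["low", "medium", "high"]

    def __post_init__(self):
        # Enforce the contract at the boundary, not inside the rules.
        if not 1 <= self.risk_score <= 100:
            raise ValueError(f"risk_score out of range: {self.risk_score}")
        if self.jurisdiction_risk not in ("low", "medium", "high"):
            raise ValueError(f"invalid jurisdiction_risk: {self.jurisdiction_risk}")
```

Because invalid output from the extraction layer fails loudly here, a vendor swap or model upgrade that breaks the contract is caught at the boundary rather than surfacing as a silent mis-routing.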
FATF's risk-based approach is consistent with this separation, even if it doesn't describe it in these terms. The expectation is that institutions can demonstrate their risk assessment methodology, show how it maps to due diligence measures, and prove that it is consistently applied. That's a description of a testable, auditable rule set, not an AI model.
The question to ask your compliance team
How many distinct routing rules does your KYC process actually contain? Not approximately, not "it depends." How many explicit conditions map to how many distinct verification paths? Can you list them? Can you version them? Can you test them?
If the answer is "we'd need to check with engineering," then your KYC routing logic is technical debt that happens to live in a regulated environment. And unlike most technical debt, this one sits in a domain where aggregate enforcement actions are routinely measured in the billions.