The compliance constraint: how regulated markets are approaching AI-powered debt recovery.
AI in collections is not inherently higher-risk than human-led collections. In several important respects, it is the opposite.
19 February 2026
The two words that stop most conversations about AI in collections before they start: regulatory compliance. Fair enough. The history of debt collection is littered with enforcement actions, reputational damage, and companies that moved fast and then faced the consequences. Regulators are watching this space carefully, and digital lenders operate in environments where a single compliance failure can be existential.
But there is a version of this conversation that gets the direction of causality exactly backwards. The assumption that AI-powered collections is inherently higher-risk than human-led collections is not supported by the evidence. In several important respects, it is the opposite.
What regulators actually care about
Across markets — FCA in the UK, CFPB in the United States, CBN in Nigeria, RBI in India, OJK in Indonesia — the regulatory frameworks for collections share a common logic, even where the specific rules differ.
Fairness. Was the borrower treated with dignity? Were they given accurate information about their debt? Were customers in financial difficulty treated with appropriate consideration?
Disclosure. Was the identity of the collecting party clearly stated? Was the nature of the debt accurately described?
Timing and frequency. Were contact attempts made within permitted hours? Was the borrower contacted an unreasonable number of times?
Audit and accountability. Can you demonstrate, to a regulator, exactly what happened in every interaction? Can you produce records? Can you prove that your process was compliant?
These are the questions regulators ask. They are also, it turns out, the questions that AI-powered collections handles better than human-led operations.
The audit problem with human collections
Human agents are the weakest link in any collections compliance programme, not because they are dishonest, but because they are human. Training is inconsistent. Fatigue affects performance. Agents deviate from scripts. Recording is incomplete. Memory is unreliable.
When a regulator asks what was said in a specific call on a specific date, the answer — in most human-led operations — is: somewhere in a call recording, which you will need to find, assuming it was recorded, assuming the recording is complete, assuming the agent followed protocol that day.
This is not hypothetical. Enforcement actions across every major regulated market have cited inadequate record-keeping, inconsistent agent behaviour, and the inability to demonstrate compliance at the individual interaction level.
An AI agent cannot have a bad day. It cannot forget to identify itself. It cannot call at 11pm. And because every interaction is logged as it happens, any deviation from permitted behaviour is flagged the moment it occurs.
How different markets are approaching it
The FCA's Consumer Credit sourcebook (CONC) applies to AI agents exactly as it applies to human agents. The medium is not the issue — the substance is. Fair treatment, accurate disclosure, appropriate handling of customers in financial difficulty. These are achievable, and in some respects more reliably achievable, through AI.
The CFPB's approach under the Fair Debt Collection Practices Act focuses on timing, frequency, and content of communications. AI systems can enforce these constraints at the infrastructure level — no calls before 8am, no calls after 9pm, no contact after a cease-and-desist is registered. Rules that require training and enforcement with human agents become configuration parameters.
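To make "configuration parameters" concrete, here is a minimal sketch of what a contact-window rule might look like when enforced in code rather than in agent training. The rule names, thresholds, and function are illustrative assumptions, not a reference to any specific system; the 8am–9pm window mirrors the FDCPA convention mentioned above.

```python
from datetime import datetime, time

# Illustrative only: contact constraints expressed as configuration,
# checked before any outreach is attempted. Names are assumptions.
CONTACT_RULES = {
    "earliest_local_time": time(8, 0),   # no calls before 8am
    "latest_local_time": time(21, 0),    # no calls after 9pm
    "max_attempts_per_week": 7,
}

def contact_permitted(now: datetime, attempts_this_week: int,
                      cease_and_desist: bool) -> bool:
    """Return True only if every configured constraint is satisfied."""
    if cease_and_desist:
        return False  # registered cease-and-desist blocks all contact
    if attempts_this_week >= CONTACT_RULES["max_attempts_per_week"]:
        return False  # frequency cap reached
    local = now.time()
    return (CONTACT_RULES["earliest_local_time"]
            <= local
            <= CONTACT_RULES["latest_local_time"])
```

Because the check runs before every attempt, an out-of-hours call is not a training failure to be caught later; it simply cannot be placed.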
Markets like Nigeria, India, and Indonesia are developing their frameworks in real time. The lenders operating there who are building clean audit trails now — structured interaction logs, timestamped records, searchable documentation — are building the compliance infrastructure that those markets will eventually require. They are ahead of the regulatory curve, not behind it.
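A "structured interaction log" can be as simple as one append-only, timestamped record per interaction. The sketch below shows one possible shape; the field names and serialisation choice are assumptions for illustration, not a prescribed schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative sketch: one shape a structured, searchable interaction
# record might take. Field names here are assumptions.
@dataclass
class InteractionRecord:
    borrower_id: str
    channel: str              # e.g. "voice", "sms", "email"
    started_at: str           # ISO-8601 UTC timestamp
    disclosure_given: bool    # did the agent identify the collecting party?
    transcript: str

def log_interaction(record: InteractionRecord) -> str:
    """Serialise a record as one JSON line, ready to append and index."""
    return json.dumps(asdict(record), sort_keys=True)

entry = log_interaction(InteractionRecord(
    borrower_id="b-1042",
    channel="voice",
    started_at=datetime.now(timezone.utc).isoformat(),
    disclosure_given=True,
    transcript="...",
))
```

Stored this way, answering a regulator's question about a specific interaction is a query, not an archaeology project.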
The counter-intuitive truth
When every interaction is structured, every variable is tracked, and every response is logged, the question “what happened in that conversation on the 14th?” has a definitive answer. Not a recording to search for, not an agent's recollection. A complete, timestamped, structured record.
This is what genuine compliance infrastructure looks like. Not a policy document. Not a training programme. A system where compliance is architectural — built into how the thing works, not bolted on as an afterthought.
Compliance as competitive advantage
The lenders who are treating compliance as a constraint — a cost of doing business, a burden to manage — are building fragile operations. One enforcement action can redefine how their business is perceived and regulated.
The lenders who are treating compliance as infrastructure are building something different. They can operate in any regulated market with confidence. They can demonstrate, at any point, exactly how they operate. They can move into new markets without rebuilding their compliance stack from scratch.
Compliance is not the constraint on AI-powered collections. For the lenders who build it correctly, it is the moat.
If this is the problem you are carrying, we should talk.