Artificial Intelligence is no longer confined to the back office. From chatbots handling complex queries to robo-advisers producing portfolio summaries, AI increasingly functions as a primary channel for communicating financial information. While this shift delivers efficiency and scale, it dilutes the “human touch” in explaining risk: a human adviser can gauge a client’s hesitation and offer nuanced guidance that an AI interface, optimised for speed and clarity, often omits.
In the South African context, this creates a paradox. AI-driven disclosures promise to democratise access to financial information, yet they introduce opaque risks that threaten fairness, consumer protection and systemic stability. As we integrate these technologies, innovation must not come at the expense of customer protection.
The regulatory baseline: POPIA, TCF and governance
South Africa’s regulatory framework provides a robust foundation for managing AI risks, even though it was not designed with machine learning in mind.
The Protection of Personal Information Act (POPIA) applies directly. Financial AI models rely on vast datasets (credit histories, demographic and behavioural data), and processing must remain lawful, transparent and consistent with the original purpose of collection. Critically, Section 71 grants customers the right to challenge decisions made solely through automated processes where those decisions carry legal consequences. As automated credit scoring and underwriting become standard, institutions must ensure a clear pathway for customers to request human review.
AI can enhance Treating Customers Fairly (TCF) outcomes by ensuring consistent application of affordability checks. However, if a model is trained on historically biased data, it may produce discriminatory outcomes, violating TCF’s fair treatment principle. The “black box” nature of deep learning further complicates Outcome 3 (clear information) and Outcome 4 (suitable advice): if institutions cannot explain how an outcome was reached, meaningful disclosure becomes difficult.
The King V Report on Corporate Governance (October 2025) reinforces these obligations: Principle 10 makes clear that boards must engage with the ethical, legal and strategic consequences of automated decision-making. AI is not merely an IT issue.
Fairness, transparency and protection
AI models trained on historical South African data risk reproducing entrenched socio-economic inequalities. Even where protected characteristics such as race are excluded, proxy variables (postal codes, education levels, employment patterns) may yield functionally similar discriminatory results, restricting access to credit or insurance based on systemic factors rather than individual merit.
Transparency must be meaningfully calibrated. Disclosure must go beyond a simple disclaimer: consumers deserve clear explanations of how AI influences outcomes that affect them, alongside information on their right to redress. For regulators, the focus shifts to governance and interpretability, evidence that an entity understands the model’s logic and the safeguards in place.
Generative AI introduces the additional risk of “hallucinations”: plausible but factually incorrect outputs. An AI system optimised for lead conversion may inadvertently nudge customers toward high-risk products by downplaying risk warnings. Output filters must prohibit AI from truncating mandatory risk disclosures.
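One way such an output filter could work is as a simple guard that refuses to release any AI-generated customer message unless every mandatory disclosure phrase survives in the final text. The sketch below is purely illustrative: the phrases, function names and fallback behaviour are assumptions for the example, not FSCA requirements.

```python
# Illustrative output filter: a generated message is only released
# automatically if it still contains every mandatory risk disclosure;
# anything else is escalated to a human. Phrases are example assumptions.

MANDATORY_PHRASES = [
    "your capital is at risk",
    "past performance is not an indicator of future results",
]

def passes_disclosure_filter(message: str) -> bool:
    """True only if every mandatory phrase survives in the output."""
    text = message.lower()
    return all(phrase in text for phrase in MANDATORY_PHRASES)

def release_or_escalate(message: str) -> str:
    # A message that drops or truncates a required disclosure is never
    # sent automatically; it is routed to human review instead.
    if passes_disclosure_filter(message):
        return "RELEASE"
    return "ESCALATE_TO_HUMAN"
```

The point of the design is that the filter sits outside the model: however the generator is optimised, a truncated disclosure cannot reach the customer without a human seeing it first.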
Keeping the financial system stable
Beyond individual interactions, AI impacts broader systemic stability. It helps regulators scan vast datasets instantly to detect fraud or insolvency, acting as a faster early-warning system than human analysis alone. It can also translate complex financial jargon into accessible language, reducing default rates by improving consumer understanding.
However, over-reliance on a small number of large language models (LLMs) creates concentration risk: multiple institutions may interpret market signals identically and respond simultaneously, exacerbating volatility or triggering flash crashes. An AI-generated error in a major public disclosure can propagate instantly, triggering automated trading responses before humans can correct the record. A single bug in a widely used credit assessment model could simultaneously affect millions of customers across different banks.
Considerations for South African financial institutions
As institutions move from AI experimentation to full-scale deployment, governance frameworks must evolve. Human-in-the-Loop (HITL) protocols should include:
- Mandatory review: Automated disclosures related to binding contracts or high-impact decisions should trigger a mandatory human review.
- Kill switch: Operational teams should be empowered to suspend AI tools immediately when a pattern of hallucinations is detected.
- Fairness audits: Regular testing using synthetic personas reflecting South Africa’s diversity (languages, age, education and income levels) to ensure consistent clarity and tone.
- Outcome monitoring: Track performance metrics across all demographics, not only complaint rates, given digital literacy gaps that may prevent certain groups from reporting issues.
- Accountability: Under the “licence holder” principle, outsourcing technology does not outsource liability.
- Service level agreements: Vendor contracts should include explicit clauses on model explainability and liability for hallucinations causing financial harm.
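The first two controls above (mandatory review and the kill switch) can be pictured as a single gate in front of every automated disclosure. The sketch below is a hypothetical illustration, not a prescribed control: the threshold, class name and status labels are assumptions chosen for the example.

```python
# Hypothetical sketch of a HITL gate combining mandatory human review
# for high-impact disclosures with an operational kill switch that
# engages after a pattern of flagged hallucinations. Values are assumed.

from dataclasses import dataclass

HALLUCINATION_THRESHOLD = 3  # consecutive flagged outputs before suspension


@dataclass
class AIDisclosureGate:
    suspended: bool = False
    flagged_streak: int = 0

    def record_output(self, high_impact: bool, flagged: bool) -> str:
        if self.suspended:
            return "HUMAN_ONLY"  # kill switch engaged: no automated output
        if flagged:
            self.flagged_streak += 1
            if self.flagged_streak >= HALLUCINATION_THRESHOLD:
                self.suspended = True  # pattern detected: engage kill switch
                return "HUMAN_ONLY"
            return "HUMAN_REVIEW"  # a flagged output is never auto-released
        self.flagged_streak = 0
        # binding contracts and high-impact decisions always get human review
        return "HUMAN_REVIEW" if high_impact else "AUTO_RELEASE"
```

The design choice worth noting is that suspension is sticky: once the gate trips, every subsequent output routes to a human until the operational team deliberately resets it.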
AI is a tool, not a human. In South Africa, where financial inclusion and customer protection are paramount, AI must clarify the financial landscape, not cloud it. By grounding deployment in POPIA, TCF and King V principles, and embedding strong governance and human oversight, financial institutions can harness AI’s potential without compromising fairness or stability. Properly used, AI does not replace the human role; it elevates it, allowing professionals to focus on the judgment, context and accountability that machines cannot replicate.
A governance framework for responsible financial AI deployment
- Nolwazi Hlophe | Senior Specialist: FinTech | FSCA | Dr Johann van der Lith | Senior Specialist: Regulatory Frameworks | FSCA
* The Financial Sector Conduct Authority (FSCA) regulates and supervises the market conduct of financial institutions in South Africa. Visit www.fsca.co.za.
