Picture a rainy Wednesday morning in Sandton, sometime in the near future. A claims handler opens their laptop, and right away, things move much faster than they used to. For every new email, the company’s artificial intelligence (AI) system has already drafted a suggested reply. The inbox is lighter too, because a public chatbot, trained on policy wordings and constantly improving, now handles most client and broker queries. Need a meeting? An AI assistant automatically schedules it, sets reminders, and even takes the minutes.
The claims themselves look a bit different these days. One email involves a policyholder who collided with a driverless taxi. The chatbot has already gathered the details, and an AI screener has cleared it of fraud, leaving the handler just waiting on the vehicle’s computer logs. Another concerns a medical aid that approved cover based purely on an AI diagnosis, with a pharmaceutical chatbot standing by to answer any medication queries instantly.
Then there’s the professional indemnity notification: a reputable financial advisor is reporting that an unknown party used a deepfake video of his likeness to give bogus investment advice. Followers acted on it and lost money, and he is notifying his insurer out of caution while his attorneys assess the fallout.
None of this is science fiction; these processes are active and becoming common in South Africa. But as we all know, progress brings risk. The rapid adoption of AI has led to a growing list of incidents across industries. For example, Stanford University’s 2025 AI Index found that the number of AI-related “incidents” recorded worldwide in 2024 jumped by 56.4% from the previous year.
It’s safe to say that any organisation using AI faces potential exposure. In March 2026, a California jury found media platforms Meta and YouTube liable for $3 million in a damages claim relating to their algorithms. We’ve also seen Tesla held liable for a fatal vehicle accident involving its autopilot system and Air Canada forced by a tribunal to honour a discount mistakenly promised by its chatbot. In the United Kingdom, an AI facial recognition system misidentified a woman as a shoplifter, leading to a baseless search and emotional trauma. Closer to home, the Financial Sector Conduct Authority (FSCA) has raised alarms about deepfake videos of prominent figures endorsing fraudulent schemes, a trend that has already been linked to the final liquidation of at least one financial service provider. Meanwhile, generative AI developers are facing massive intellectual property lawsuits over their training datasets.
Globally, lawmakers are scrambling to catch up. The European Union has adopted its comprehensive AI Act, and Denmark is looking at copyright protection for individual likenesses against deepfakes. Here in South Africa, a draft National AI Policy Framework was published in 2024 and is expected to be gazetted for a formal 60-day public consultation process soon, but its finalisation is only anticipated during the 2026/2027 financial year.
The regulatory shift in financial services

In the absence of clear legislative or policy guidelines, South African regulators and industry bodies are stepping up to set the rules of the game, particularly in the insurance space. In November 2025, the FSCA and the Prudential Authority (PA) jointly published a landmark, first-of-its-kind report titled “Artificial Intelligence in the South African Financial Sector”.
This joint report provides a clear picture of where the industry stands: while banks are leading AI adoption at 52%, the insurance sector has taken a markedly more cautious stance, with adoption at just 8%. However, insurers plan to expand their use of AI heavily into underwriting and claims management. To manage this effectively, the FSCA and PA are urging financial institutions to adopt robust governance frameworks, ensure board-level oversight, and use recognised “explainability methods” so that AI-driven decisions are transparent and auditable. They also specifically mandate that institutions must clearly disclose when AI influences consumer-impacting decisions, such as credit assessments or insurance pricing.
Furthermore, AI is radically changing the fraud landscape. Criminals are now using AI to create “synthetic identities” – combining stolen real IDs with fake names and AI-generated images to bypass an insurer’s onboarding verification. In response to these sophisticated threats, major industry bodies like the Association for Savings and Investment South Africa (ASISA) and the South African Insurance Association (SAIA) are taking a collaborative approach. ASISA and SAIA have jointly established a Computer Security Incident Response Team to monitor emerging cyber threats, report on attack methods, and share intelligence across the sector.
Outside of the direct insurance space, we are also seeing bodies like the Independent Regulatory Board for Auditors, the South African Institute of Chartered Accountants, and the Association of Arbitrators issuing crucial guidance on using AI responsibly. These guidelines will likely inform how courts apply the classic Kruger v Coetzee test for negligence, i.e. asking whether a reasonable professional would have foreseen the harm and taken steps to prevent it. Ultimately, any data processing or privacy issues tied to AI systems will also require strict alignment with the Information Regulator under the Protection of Personal Information Act (POPIA). In this context, the Information Regulator has raised concerns about a dramatic rise in data breaches in South Africa, with security compromise incidents up 40% in 2025 compared with the prior year.
The insurance response: Silent vs. affirmative cover

Considering these profound changes in the way businesses are operating, the risk landscape has fundamentally shifted, making adequate insurance coverage essential. Right now, most policies cover AI risks through “silent cover,” meaning AI isn’t explicitly mentioned, but the risks fall under the general policy wording. By contrast, “affirmative cover” would expressly target AI risks. As AI claims inevitably rise and coverage disputes develop, we can expect the insurance industry to move steadily toward clear, affirmative AI policies.
For businesses adopting AI, it is vital to prioritise comprehensive AI coverage, as well as to ensure robust governance frameworks, POPIA compliance, and ongoing monitoring of regulatory developments.
- Kim Rew, Partner & Jered Shorkend, Associate at Webber Wentzel
