Understanding AI Outputs: Study Shows Pro-Western Cultural Bias In The Way AI Decisions Are Explained

By Mary Carman, 2024-04-22

Humans are increasingly using artificial intelligence (AI) to inform decisions about our lives. AI is, for instance, helping to make hiring choices and offer medical diagnoses.

If you were affected, you might want an explanation of why an AI system produced the decision it did. Yet AI systems are often so computationally complex that not even their designers fully know how the decisions were produced. That’s why the development of “explainable AI” (or XAI) is booming. Explainable AI includes systems that are either themselves simple enough to be fully understood by people, or that produce easily understandable explanations of other, more complex AI models’ outputs.
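
As a toy illustration (not from the article), the second kind of system might look like the following sketch: a post-hoc explainer that reports how much each input feature contributed to a simple linear model's decision, so a person can see why the model decided as it did. All names, weights, and features here are invented for illustration.

```python
# Hypothetical sketch: a post-hoc explanation for a simple linear classifier.
# It reports each feature's contribution to the score, ranked by magnitude,
# so the decision can be inspected by a person. Weights/features are invented.

def explain_linear_decision(weights, feature_values, feature_names):
    """Return the decision and per-feature contributions to the score."""
    contributions = {
        name: w * x
        for name, w, x in zip(feature_names, weights, feature_values)
    }
    score = sum(contributions.values())
    decision = "flu" if score > 0 else "not flu"
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, ranked = explain_linear_decision(
    weights=[1.5, 1.0, 0.8],
    feature_values=[1, 1, 0],  # fever=yes, sore throat=yes, headache=no
    feature_names=["fever", "sore throat", "headache"],
)
print(f"Prediction: {decision}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.1f}")
```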

Explainable AI systems help AI engineers to monitor and correct their models’ processing. They also help users to make informed decisions about whether to trust or how best to use AI outputs.

Not all AI systems need to be explainable. But in high-stakes domains, we can expect XAI to become widespread. For instance, the recently adopted European AI Act, a forerunner for similar laws worldwide, protects a “right to explanation”. Citizens have a right to receive an explanation about an AI decision that affects their other rights.

But what if something like your cultural background affects what explanations you expect from an AI?

In a recent systematic review we analysed over 200 studies from the last ten years (2012–2022) in which the explanations given by XAI systems were tested on people. We wanted to see to what extent researchers indicated awareness of cultural variations that were potentially relevant for designing satisfactory explainable AI.

Our findings suggest that many existing systems may produce explanations that are primarily tailored to individualist, typically western, populations (for instance, people in the US or UK). Also, most XAI user studies only sampled western populations, but unwarranted generalisations of results to non-western populations were pervasive.

Cultural differences in explanations

There are two common ways to explain someone’s actions. One involves invoking the person’s beliefs and desires. This explanation is internalist, focused on what’s going on inside someone’s head. The other is externalist, citing factors outside the person, such as social norms or rules.

To see the difference, think about how we might explain a driver’s stopping at a red traffic light. We could say, “They believe that the light is red and don’t want to violate any traffic rules, so they decided to stop.” This is an internalist explanation. But we could also say, “The lights are red and the traffic rules require that drivers stop at red lights, so the driver stopped.” This is an externalist explanation.

Many psychological studies suggest internalist explanations are preferred in “individualistic” countries where people often view themselves as more independent from others. These countries tend to be in the west, educated, industrialised, rich, and democratic.

However, such explanations are not obviously preferred over externalist explanations in “collectivist” societies, such as those commonly found across Africa or south Asia, where people often view themselves as interdependent.

Preferences in explaining behaviour are relevant for what a successful XAI output could be. An AI that offers a medical diagnosis might be accompanied by an explanation such as: “Since your symptoms are fever, sore throat and headache, the classifier thinks you have flu.” This is internalist because the explanation invokes an “internal” state of the AI – what it “thinks” – albeit metaphorically. Alternatively, the diagnosis could be accompanied by an explanation that does not mention an internal state, such as: “Since your symptoms are fever, sore throat and headache, based on its training on diagnostic inclusion criteria, the classifier produces the output that you have flu.” This is externalist. The explanation draws on “external” factors like inclusion criteria, similar to how we might explain stopping at a traffic light by appealing to the rules of the road.
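
To make the contrast concrete in code, here is a hypothetical sketch that renders the same classifier output in each style, using the wording of the two explanations above. The function names and templates are invented; a real system could choose between such templates based on a user's stated preference or locale.

```python
# Hypothetical sketch: the same diagnosis rendered as an internalist
# explanation (citing what the model "thinks") and an externalist one
# (citing external factors such as diagnostic inclusion criteria).

def internalist_explanation(symptoms, diagnosis):
    return (f"Since your symptoms are {', '.join(symptoms)}, "
            f"the classifier thinks you have {diagnosis}.")

def externalist_explanation(symptoms, diagnosis):
    return (f"Since your symptoms are {', '.join(symptoms)}, "
            f"based on its training on diagnostic inclusion criteria, "
            f"the classifier produces the output that you have {diagnosis}.")

symptoms = ["fever", "sore throat", "headache"]
print(internalist_explanation(symptoms, "flu"))
print(externalist_explanation(symptoms, "flu"))
```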

If people from different cultures prefer different kinds of explanations, this matters for designing inclusive systems of explainable AI.

Our research, however, suggests that XAI developers are not sensitive to potential cultural differences in explanation preferences.

Overlooking cultural differences

A striking 93.7% of the studies we reviewed did not indicate awareness of cultural variations potentially relevant to designing explainable AI. Moreover, when we checked the cultural background of the people tested in the studies, we found 48.1% of the studies did not report on cultural background at all. This suggests that researchers did not consider cultural background to be a factor that could influence the generalisability of results.

Of those that did report on cultural background, 81.3% only sampled western, industrialised, educated, rich and democratic populations. A mere 8.4% sampled non-western populations and 10.3% sampled mixed populations.

Sampling only one kind of population need not be a problem if conclusions are limited to that population, or researchers give reasons to think other populations are similar. Yet, out of the studies that reported on cultural background, 70.1% extended their conclusions beyond the study population – to users, people, humans in general – and most studies did not contain evidence of reflection on cultural similarity.

To see how deep the oversight of culture runs in explainable AI research, we added a systematic “meta” review of 34 existing literature reviews of the field. Surprisingly, only two reviews commented on western-skewed sampling in user research, and only one review mentioned overgeneralisations of XAI study findings.

This is problematic.

Why the results matter

If findings about explainable AI systems only hold for one kind of population, these systems may not meet the explanatory requirements of other people affected by or using them. This can diminish trust in AI. When AI systems make high-stakes decisions but don’t give you a satisfactory explanation, you’ll likely distrust them even if their decisions (such as medical diagnoses) are accurate and important for you.

To address this cultural bias in XAI, developers and psychologists should collaborate to test for relevant cultural differences. We also recommend that cultural backgrounds of samples be reported with XAI user study findings.

Researchers should state whether their study sample represents a wider population. They may also use qualifiers like “US users” or “western participants” in reporting their findings.

As AI is being used worldwide to make important decisions, systems must provide explanations that people from different cultures find acceptable. As it stands, large populations who could benefit from the potential of explainable AI risk being overlooked in XAI research.

Mary Carman, Senior Lecturer in Philosophy, University of the Witwatersrand and Uwe Peters, Assistant Professor of Philosophy, Utrecht University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
