HERMANUS, Western Cape – In a provocative and data-driven keynote that served as a critical reality check for the global tech community, Idit Duvdevany Aronsohn, Head of ESG & People Relations at global software giant Amdocs, delivered a key message at SATNAC 2025: the future of artificial intelligence is being authored today, and Africa must be a primary writer, not a footnote.
Aronsohn built her argument around a single, powerful question: “Can AI be inclusive without Africa or without African data?”
To a room of the continent’s leading technologists, innovators and policymakers, her answer was unequivocal. “Not like my AI friend that said absolutely yes, I want to say here absolutely no.”
Moving beyond theoretical ethics, Aronsohn grounded the discussion in immediate, high-stakes realities.
“AI has already decided what’s going to be the future of this room, of our industry, in not so long from now, and actually in some places it already happens, that AI decides who gets hired, who gets promoted, and also who will stay behind,” she stated, acknowledging the provocative nature of the claim but urging the audience to engage with its fundamental truth.
She dismantled the notion of a neutral machine, explaining, “AI by itself doesn’t make mistakes. It also doesn’t drive an agenda. For now. AI simply follows rules that were made by people… and it follows data.”
This, she argued, is the core of the crisis and the opportunity. If AI is trained on historical data that encodes centuries of bias and exclusion, it will systematically perpetuate and amplify those inequalities on a global scale.
“If the data that AI is trained on is data from the past, and the past was unfair, the past was biased, the past knows that women don’t make it to the top… then when AI perpetuates the past, the question is, how can we build a future without us?” Aronsohn asked, linking the technical to the profoundly human.
To illustrate the mechanics of exclusion, she introduced a framework of three key personas in the AI ecosystem: the Creators (the coders and engineers), the Influencers (leaders and promoters), and the Consumers (the end-users whose interactions train the models). She presented damning data showing a severe underrepresentation of women, African voices, and African languages across all three groups.
“Houston, we have a problem,” Aronsohn declared. “The problem is that not enough women are adopting, not enough African languages are incorporated or embedded. We don’t have enough Africa-based talent influencing and creating AI solutions.”
She emphasised that this isn’t just a “diversity” issue in the traditional sense, but a catastrophic flaw in system design. Using the now-famous example of early smartphone cameras failing to account for left-handed users, she made the technical point deeply personal.
“If we didn’t have right-handed people and left-handed people creating the first phone camera, this is what happens. You get some of the pictures being upside down… This is what happens to us.”
The consequence of this exclusion is a self-reinforcing cycle.
Without diverse creators, biased systems are built. These systems then fail to serve excluded communities, who in turn adopt the technology more slowly, further skewing the data and cementing their absence from the digital future.
Aronsohn highlighted the specific gender adoption gap, noting that women globally use AI significantly less than men, a disparity that risks leaving half the population behind in the economic transformation AI is driving.
However, Aronsohn’s message was not one of despair, but an urgent blueprint for action. She redefined the challenge: “Instead of diversity being the AI-created problem, as it could be… maybe it could be actually the path to solution.”
She called for a conscious, industry-wide effort to move the entire ecosystem from a “roller coaster” of negative emotions around AI – anxiety, fear, denial – toward hope and proactive adoption. This requires intentional initiatives at every level:
- For Creators & Influencers: Building “ethical machines and responsible machines” from the ground up in Africa, with local context, languages, and challenges as the primary design input. She urged the room to eliminate biases in recruiting algorithms and job descriptions.
- For Consumers & Adoption: Launching massive, collaborative re-skilling and community programs to ensure all Africans can engage with AI not as passive users, but as informed shapers. “The telecoms… this ecosystem, is not just carrier of data, right? They’re gatekeepers for digital equality.”
- For the Next Generation: Aronsohn passionately advocated for the “Yes We Gen” toolkit, an Amdocs-led initiative to attract middle and high school girls into AI and coding, offered freely in 12 languages. She called on every leader in the room to deploy it. “This is one of the ways to do it… making sure that we’re driving the young forces into adopting AI, changing AI.”
Her conclusion returned to the core theme of Africa’s unique position.
“AI is a mirror. It’s a reflection of who we are. We can’t blame AI for anything. We just need to make sure that we’re doing it right in the first place.”
She turned the narrative of “catching up” on its head with a powerful vision of leapfrogging. “Africa doesn’t need to catch up. It can leap ahead. If inclusion is designed from day one.”
Quoting Telkom CEO Serame Taukobong’s earlier address at the conference, she left the audience with a charge that fused inspiration with responsibility: “The future is being built where people refuse to stay behind… The future of diversity is not written yet. But one thing is certain, AI is not the author. We are.”
The call was clear: The time for passive observation is over.
The architecture of the AI future is under construction, and Africa’s data, talent, and ethical perspective are not optional components; they are the essential infrastructure.
The continent’s ascent depends on its ability to author this chapter, ensuring technology amplifies human dignity rather than calcifying historical inequity.

