Trying to scale AI on outdated digital foundations is like playing Jenga on a wobbly table: early wins stack up quickly, but as ambition grows, every underlying weakness is magnified. South African organisations are reaching this point with AI. Adoption is accelerating, experimentation is widespread, and productivity gains are visible, yet many are trying to build ambitious AI capabilities on foundations that were never designed for the scale and complexity of the AI era.
Many factors shape AI success, from data and skills to governance and security. But the network, often treated as background infrastructure, is emerging as one of the most critical enablers of sustainable AI success.
Momentum is real, but uneven
AI’s economic potential is well established. IDC research shows organisations achieving an average return of $3.7 for every $1 invested in generative AI, with leading adopters seeing returns exceeding $10.
South Africa is clearly part of this shift. PwC’s local modelling suggests AI could contribute 1.2 percentage points to national GDP over the next decade, even at today’s adoption levels.
Yet a gap is emerging between adoption and execution. While a KPMG survey shows that 71% of African CEOs are investing in AI, respondents also cite integrating AI into core operations as their top challenge. In fact, much of today's adoption is bottom-up and tactical: teams experimenting with tools without a coordinated plan for scale, security or long-term sustainability.
That approach delivers quick wins, but it also creates risk. Without the right foundations, early productivity gains can plateau; technical debt accumulates and confidence in AI erodes, not because the technology fails, but because the environment cannot support it.
AI exposes weaknesses in the network
AI workloads behave very differently from traditional enterprise applications. Training models generate massive east-west traffic across data centres and cloud environments, while inference demands ultra-low latency and consistent performance to deliver real-time predictions and decisions.
At the extreme end of the spectrum, the fastest supercomputer in the world, hosted at the Lawrence Livermore National Laboratory, can perform more than a quintillion calculations per second. This level of high-performance computing is only possible because the network can move vast volumes of data predictably, securely and at speed, underscoring just how intensive AI workloads are compared to conventional enterprise applications.
Traditional networks, designed for predictable north-south traffic, were not built for this scale or volatility. Today, networks must securely connect infrastructure, applications, users and data, while supporting compute-intensive workloads and increasingly complex hybrid environments. When networks fail to keep up, the consequences are tangible: congestion slows down models, compute is wasted, downtime increases, and the return on AI investment erodes.
South Africa’s constraints raise the stakes
These challenges are amplified locally. Organisations face persistent skills shortages, infrastructure constraints, and growing regulatory and compliance requirements. Network transformation is capital-intensive, and few can afford to replace legacy environments wholesale, forcing many to take a phased, pragmatic approach to modernisation.
As a result, organisations across sectors are beginning to rethink not just how they upgrade their networks, but how those networks are conceived in the first place. Rather than bolting AI onto legacy environments, frontrunners are moving toward AI-native systems, designed from the ground up with AI as a core component.
In practice, this means embedding intelligence directly into the network management layer. AI-native networks simplify operations, increase productivity, and deliver more reliable performance at scale by continuously analysing network behaviour and predicting issues before they impact users. Teams gain deeper visibility into performance across applications, infrastructure and third-party services, allowing them to quickly pinpoint the source of problems and resolve incidents in hours rather than days.
The result is exceptional user and operator experiences. Several of HPE’s hospitality partners, for example, are using AI-enabled networks to recognise returning guests as soon as they connect, personalise digital interactions in real time, and securely support high-density conference venues with multiple vendors moving on and off the network. At large-scale events such as the Nedbank Golf Challenge, AI-native networking has enabled thousands of attendees to connect seamlessly while receiving real-time, location-aware information on their devices, demonstrating how network design directly shapes experience and operational performance.
This shift also reflects a broader move toward modular network design, where capabilities operate as flexible, cloud-based components rather than a single tightly coupled system. The benefit is a network that can respond dynamically to changing demands while supporting more intelligent, automated operations and reducing reliance on manual, ticket-based processes. Crucially, modularity must be paired with interoperability and vendor-neutral standards that allow organisations to combine best-of-breed components without becoming locked into a single supplier.
In a skills-constrained market, this matters. By reducing manual configuration and troubleshooting, and enabling phased upgrades across hybrid environments, AI-native networking makes modernisation more achievable even in resource-constrained settings, lowering operational costs while easing pressure on scarce skills.
Combining AI-native networking with modular design also lays the foundations for more goal-driven, agentic AI. It allows organisations to simplify network management and maintain reliable performance, even as AI-driven applications place heavier and less predictable demands on the network.
Building networks with AI and for AI
The path forward is not disruption for its own sake, but deliberate, phased modernisation. AI-ready networks must be built both with AI and for AI.
Built with AI, networks can simplify deployment, automate troubleshooting, strengthen security, and reduce operational complexity. Built for AI, they deliver advanced, high-speed and low-latency architectures, reliable data movement and compliance by design, ensuring AI workloads can run efficiently and securely at scale.
Importantly, compliance and security can no longer be bolted on afterwards. As AI expands attack surfaces and regulatory scrutiny intensifies, networking and security must be architected together. When designed in tandem, compliance becomes easier to manage rather than harder to enforce.
South Africa’s AI moment is already underway. Whether it becomes a durable advantage or a fragile stack of early wins will depend on the strength of the foundations underneath. Without resilient, AI-native networking, AI initiatives stall at pilot stage, no matter how promising the use case. For business leaders, the challenge is clear: treat network modernisation as a strategic enabler of scalable AI, not a technical afterthought, or risk building AI ambition on foundations that cannot hold.
- Mandy Duncan, Country Manager of HPE Networking South Africa
