Zane Ulhaq, Head of MENA at Endava, has penned an op-ed arguing that sovereign AI will not remain a fringe concern confined to certain regions, but will instead become one of the key items on the agenda in future trade agreements between global economic superpowers.

The AI conversation today is far more nuanced than it was even eighteen months ago. Early adoption was driven by competitive pressure and a fear of missing out.
Organisations experimented fast, often without a clear understanding of risk, governance or long-term value. This was apparent recently when open-source, self-hosted AI experiments showed how quickly tools can move beyond their original intent, creating unintended security and trust risks as they spread.
Much of this still comes down to how people use the technology rather than true autonomy, but the signal is clear: adoption is now outpacing institutional oversight.
Sovereign AI does not aim to stop this process, but to reduce systemic risk by setting clear boundaries on where and how AI can be deployed at scale, particularly within regulated environments and critical systems. With national security and economic resilience in the mix, AI is no longer just a commercial priority; it’s increasingly a government-backed agenda.
Geopolitics has played a decisive role in this shift. Rising tensions between global powers, concerns over data sovereignty, and a growing distrust of offshore technology dependence have pushed governments to treat AI not merely as a solution, but as strategic infrastructure.
This has been particularly visible in the Middle East, where AI is being positioned as a cornerstone of national development rather than a bolt-on innovation.
Throughout 2025, and even before then, the early roots of this transition became visible worldwide. Governments began setting clearer rules around data residency, model training and inference, model ownership, and cross-border data flows.
In the Gulf, these policies have been matched with capital. The UAE and Saudi Arabia have committed billions to AI-ready data centres, high-performance compute, and national AI strategies designed to ensure that data, models and value creation remain within their borders.
This combination of regulation and investment has laid the foundations for what will define 2026: the rise of sovereign AI.
Sovereign AI goes beyond hosting models locally. It reflects a deliberate choice to control how AI systems are trained, governed and deployed within a national context.
In practice, this means models that reflect local languages, values, legal frameworks and economic priorities, while operating on infrastructure subject to domestic law.
For governments, it offers strategic autonomy. For enterprises, the rise of sovereign AI promises clarity and stability in an increasingly fragmented regulatory environment.
For most enterprises, this does not mean building or owning national-scale capabilities. It does, however, introduce a more complex decision environment around how AI is selected, integrated and governed within national and sectoral frameworks.
In practice, most organisations will operate hybrid AI environments, combining global platforms, regional infrastructure and locally governed data in ways that respect data residency, regulatory oversight and domestic policy objectives, while remaining commercially viable.
Success will depend less on the choice of any single model and more on the ability to design flexible architectures, adapt operating models and work with partners that can navigate multiple ecosystems without compromising compliance or control. In this sense, sovereign AI is not a destination, but a constraint within which effective execution becomes the real differentiator.
To understand what this means in practice for enterprises, it helps to look at an earlier parallel: cloud computing. A decade ago, some enterprises were wary of the cloud for reasons strikingly like today’s AI concerns – data control, compliance and security.
The emergence of sovereign and regional clouds addressed these fears, enabling organisations to modernise while meeting regulatory requirements. Sovereign AI follows the same logic. By aligning advanced capabilities with national governance, it allows enterprises to adopt AI at scale without exposing themselves to what they deem unacceptable risk.
There are, however, consequences. One is model fragmentation. As countries pursue their own AI strategies, forks will inevitably emerge. We are likely to see country- or region-specific models, each governed by different standards and ethical frameworks.
While this fragmentation may slow global convergence, it could also accelerate innovation within local contexts, particularly where models are tuned to sector-specific needs such as healthcare, finance or public services.
For governments with well-capitalised sovereign wealth funds, sovereign AI also offers a clear competitive advantage. In the Gulf, this is already taking shape.
The UAE’s Falcon LLM demonstrates how sovereign models can be developed and deployed at national scale, while Saudi Arabia’s HUMAIN initiative, including the ALLAM 34B Arabic large language model, shows how AI is being localised to serve strategic, cultural and economic priorities. Bahrain’s partnership with SandboxAQ to apply large quantitative models to biopharmaceutical research is another powerful example.
By leveraging AI to accelerate drug discovery and create proprietary intellectual property, the country is using sovereign AI capabilities to move up the value chain. Strategies like this are only possible when nations control both the models and the compute that power them. Early leadership in this space could prove defining for decades.
National AI strategies will also reshape the competitive landscape for AI providers. Sovereign AI initiatives tend to favour scale, security credentials and long-term viability, making it more likely that governments will partner with established players such as OpenAI, Anthropic or major hyperscalers. While this consolidation brings stability, it also risks squeezing out smaller innovators as niche capabilities are absorbed into broader platforms.
Avoiding monopolies in the AI race will require deliberate policy choices. Governments must balance the need for trusted, large-scale partners with mechanisms that sustain startup ecosystems, whether through procurement frameworks, regulatory sandboxes or targeted incentives. Without this, sovereign AI could unintentionally stifle the very innovation it seeks to protect.
As AI becomes embedded into national infrastructure, sovereignty will no longer be a fringe concern in certain regions around the globe. In 2026, the question for governments and enterprises alike will not be whether to embrace sovereign AI, but how to do so in a way that balances autonomy, innovation and global collaboration.
Those that get it right will not just adopt AI faster; they will shape the rules by which it evolves.