Why data quality, not algorithms, will decide future of AI, says Core42 official

Rajeev Nair, Chief Delivery Officer, Core42.

As artificial intelligence becomes embedded into the operating fabric of enterprises and governments, a sharper question is emerging: what will actually determine whether AI succeeds at scale? 

Despite constant debate around model performance, speed, parameters, and benchmark scores, the industry is still looking in the wrong place. The decisive constraint is no longer algorithmic capability. It is whether the underlying data can be trusted. 

That trust breaks down in familiar, structural ways. Incomplete or unrepresentative datasets hard-code bias into automated decisions. Duplicated records create contradictions that derail integration and training. Outdated information rapidly loses relevance. And data trapped in silos prevents AI systems from forming the coherent, unified view of reality they are meant to interpret. 

These failures do more than reduce accuracy; they erode confidence. When data integrity is weak, AI outputs become harder to explain, harder to validate, and ultimately harder to adopt. In the real world, that trust deficit becomes a deployment ceiling long before model sophistication becomes a differentiator. 

The consequences are already visible. Gartner projects that through 2026, 60% of AI initiatives will be abandoned due to the absence of AI-ready data, as projects stall at pilot stage or fail to deliver measurable business value. Across sectors, AI programs are faltering not because models lack capability, but because data environments remain fragmented, poorly governed, and increasingly misaligned with regulatory and sovereignty requirements.

“International studies consistently point to the same pattern: data quality and integration failures are the dominant causes of unsuccessful deployments.” 

The conclusion is clear: governance and data integrity, not algorithms, will determine who scales AI successfully. 

For most of the last decade, AI progress was framed as an algorithmic race. That focus delivered extraordinary breakthroughs, but it is now producing diminishing returns. Systems built on weak data foundations may appear impressive in controlled settings, yet fail under real-world complexity, regulatory scrutiny, or operational scale. 

Today, the competitive advantage is shifting. Sustainable AI growth requires a different foundation: models deployed in environments where data quality is measurable, access is governed, lineage is traceable, and accountability is built in. In this new phase, organisations will not be separated by who has the most advanced model, but by who has the most AI-ready data estate. 

How Organisations Are Responding
In response to escalating data risk and accelerating AI adoption, organisations can no longer afford to manage unstructured data through fragmented, siloed initiatives. Discovery, governance, security, and infrastructure modernisation must operate as a single operating model, not as disconnected programs that react after issues surface. The era of static policies and one-time clean-ups is over. What is required now is continuous visibility, policy-driven control, and lifecycle accountability embedded directly into everyday data operations.

This shift requires more than governance frameworks; it requires operational capability. Organisations must combine deep data discovery with automated governance and action, enabling them to continuously identify redundant, obsolete, and sensitive data, enforce retention and access controls, and securely migrate, archive, or retire information across on-premises and cloud environments.

Just as critically, accountability must move closer to the business through role-based transparency and self-service workflows, while centralised guardrails ensure consistent compliance, security, and regulatory alignment at scale. 

Without this transformation, unstructured data will remain a growing source of operational risk, regulatory exposure, and uncertainty in AI-driven decision-making. Organisations that act decisively can turn this challenge into advantage, converting unstructured information into a governed, reliable asset as it moves and evolves. Those that delay will find themselves addressing yesterday’s issues while tomorrow’s risks accelerate. 

A Middle East Model for Responsible AI
As AI moves from experimentation to large-scale deployment, the Middle East is emerging as a distinctive leader through a deliberate, resilient approach that treats data governance, sovereignty, and quality as foundational capabilities, not afterthoughts. 

This shift is already measurable. PwC reports that 30% of Middle East organisations plan to implement responsible AI practices in the next 12 months, reflecting a growing recognition that trust, governance, and regulatory alignment will determine who can deploy AI sustainably. 

By embedding accountability and operational discipline into their data environments, organisations across the region are building AI systems designed to perform under real-world complexity. In doing so, the Middle East is establishing a credible blueprint for responsible AI at scale, one grounded not in short-term model advantage, but in strong data foundations that make trust possible.

 This opinion piece is authored by Rajeev Nair, Chief Delivery Officer, Core42. 

