Global, Technology

Data Privacy Day marks shift to built-in trust with control, visibility and accountability

AI, identity, and sovereignty are redefining privacy as core digital infrastructure, with continuous control, visibility, and accountability replacing perimeter security and one-time compliance.

Every click, message, medical record, financial transaction, and AI interaction leaves a digital trace — and in 2026, protecting that trace is no longer just an IT concern. It’s a matter of trust, safety, economic resilience, and even personal freedom.

On this Data Privacy Day, the conversation has shifted dramatically. Privacy is no longer about locking data away or ticking compliance boxes once a year. Artificial intelligence now reads and recombines information across platforms in real time. Cloud systems stretch across borders. Digital identities — human and machine — access sensitive systems around the clock. In this environment, knowing where data lives is not enough. Organisations must prove who can access it, why they can, and whether policies are enforced every single moment. For businesses, privacy has become a foundation of digital trust and brand credibility. For governments, it underpins sovereignty and national security. For individuals, it determines whether personal information empowers opportunity or becomes a vulnerability exploited by scams, fraud, and manipulation. 

The front line has moved. The perimeter is no longer the network — it’s identity. Trust is no longer assumed — it must be demonstrated. And privacy is no longer a background function — it is built into the very architecture of how modern digital systems operate. This special feature explores how organisations are redesigning security, AI governance, identity controls, and workplace culture to meet a new reality: privacy as infrastructure, trust as evidence, and data protection as a shared responsibility in the age of AI. 

Across the region and globally, technology leaders are confronting the same reality: privacy now sits at the intersection of AI, cloud, identity, and regulation, demanding architectural change, continuous visibility, and shared accountability. From data platforms and network security to workforce culture and AI governance, organisations are rethinking long-held assumptions about where control resides and how trust is proven. The following industry perspectives reveal how this transformation is unfolding in practice — and what it takes to make privacy resilient, measurable, and sustainable in an AI-driven world.

Data Privacy Day is observed annually on 28 January to raise awareness about the importance of protecting personal information in an increasingly digital world.

The date marks the anniversary of Convention 108, the Council of Europe’s first legally binding international treaty on data protection, adopted in 1981. What began as a European initiative has grown into a global effort, with governments, businesses, and organisations using the day to promote privacy rights, responsible data practices, and greater transparency. Today, Data Privacy Day serves as a reminder that safeguarding personal data is fundamental to trust, security, and digital progress.

Voices from the Front Line

Bader AlBahaian, Country Manager, Saudi Arabia at VAST Data 
Organisations once treated privacy as an add-on, but AI has broken that model. Data now moves constantly across platforms, and strategies built on copying data, layered tools, or manual governance create immediate gaps. Privacy can no longer be separate from infrastructure — how data is stored, accessed, and shared determines whether protection works at all. Sovereignty isn’t just about where data sits, but who can access it, under what conditions, and with clear audit trails. Trust, too, must be continuous and measurable, not based on periodic reviews. Modern platforms must embed visibility and control, enabling organisations to prove protection without slowing innovation or business growth.

Dr. Emad Fahmy, Director of Systems Engineering, Middle East, NETSCOUT
Traditional perimeter-based security no longer works in cloud and hybrid environments where users, applications, and data operate everywhere. Implicit trust models leave blind spots that modern threats, including advanced DDoS attacks, readily exploit. Security must now be adaptive and driven by real-time visibility to detect anomalies before sensitive data is compromised. Zero Trust principles, continuous traffic analysis, and actionable threat intelligence are essential to protect data across cloud, on-premises, and edge environments. Organisations must move beyond reactive compliance toward continuous monitoring as regulations evolve across the Middle East. Hybrid security models that blend on-prem controls with cloud intelligence enable scalable protection, resilience, and innovation without operational friction.

Martin J. Kraemer, CISO Advisor at KnowBe4 for Europe & Middle East
Advanced analytics and generative AI are introducing privacy risks beyond traditional technical vulnerabilities. AI systems rely on vast datasets, increasing the likelihood that sensitive or regulated information could be exposed or misused, especially through third-party tools. Generative models may leak data unexpectedly, while attackers use AI to create more convincing phishing and social engineering attacks, heightening human risk. Privacy is no longer just an infrastructure issue — it requires human awareness and responsible behaviour. Organisations must move from one-time compliance to a shared culture of accountability, supported by clear policies, ongoing education, simulations, and ethical data practices, ensuring employees make informed decisions as AI-driven data use expands.

Gabriele Obino, Vice President, Southern Europe & Middle East at Denodo
Saudi Arabia’s data privacy landscape has evolved from basic compliance to strategic governance, driven by the Personal Data Protection Law and alignment with Vision 2030 and SDAIA’s AI strategy. Privacy is now viewed as competitive capital, embedded into core operations rather than treated as a regulatory burden. Data sovereignty extends beyond location to continuous oversight of who accesses data, under what conditions, and for lawful purposes, even across multi-cloud environments. Organisations are adopting policy-driven governance where controls travel with the data. Trust must be evidence-based, supported by auditable logs, data minimisation, and continuous monitoring, enabling enterprises to prove responsible data stewardship without limiting innovation.

Matt Gregory, Senior Director – Strategy at Dubizzle Group
Privacy is fundamental to digital trust and brand reputation. As a marketplace where people make important life decisions like buying homes, cars and finding jobs, users trust our platform with their sensitive and personal information. Protecting that data is not just a regulatory responsibility; it’s a promise we make to our community. Features like Verified are designed with privacy at their core, ensuring we deliver safety and convenience while boosting trust. Transparency, data minimisation and strong safeguards help us build long-term confidence with our users. In an increasingly digital economy, trust is earned through consistent, responsible data practices, and organisations that prioritise privacy will be the ones that build lasting relationships and strong, resilient brands.

Bernard Montel, EMEA Field CTO, Tenable
This Data Privacy Day, protecting personal data is about more than compliance; it’s about defending personal freedom. Data leaks are causing real-world harm as scams and extortion exploit exposed information. With cybercriminals weaponising AI, attacks are becoming faster, smarter and harder to detect. At the same time, companies are adopting agentic AI, introducing a new risk: digital identities acting independently within sensitive systems. Effective governance now demands visibility into machine behaviour, not just human access. To combat these emerging challenges, businesses must invest in identity governance, treating compliance as the baseline, with prevention and resilience built in from day one.

Keyur Shah, Associate Field CISO, Sophos
Data Privacy Day highlights a regional shift from compliance to privacy by design, where trust underpins the digital economy. Privacy now centres on identity protection, not just data storage. With PDPL enforcement, stronger rights frameworks, and a focus on sovereignty, expectations for safeguarding personal data are rising. Cybercriminals increasingly target individuals through scams, impersonation, and social engineering to hijack accounts and access sensitive information. Organisations must strengthen identity and access controls, reduce credential exposure, and maintain continuous monitoring and rapid response. Combining prevention with AI-driven detection and 24/7 security operations helps stop identity-based attacks early, because today an identity breach quickly becomes a privacy breach.

Chris Cochran, Field CISO & Vice President of AI Security at SANS
AI is reshaping how data is used, and protection must evolve accordingly. Organisations should control whether their websites are used to train AI by restricting crawlers through tools like ai.txt or agent access controls, particularly for corporate, customer, or sensitive content. Caution is also needed with AI-powered browsers and autonomous agents, which may expose information through prompt injection or unintended context sharing. Convenience can quickly turn into risk. Data minimisation remains essential: share only what is necessary, avoiding full documents, datasets, or personal identifiers. With AI systems retaining context longer than expected, limiting exposure at the source remains a critical privacy safeguard.
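As a hedged illustration of the crawler-restriction approach Cochran describes, the established mechanism today is a robots.txt file (ai.txt is a newer, less widely adopted proposal). The sketch below disallows several published AI training crawler tokens (GPTBot, CCBot, Google-Extended); note that compliance with robots.txt is voluntary, so well-behaved crawlers honour it while others may not:

```text
# robots.txt — a minimal sketch opting out of common AI training crawlers.
# Crawler tokens are published by their operators; honouring the file is voluntary.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Other user agents (e.g. ordinary search indexing) remain unaffected.
User-agent: *
Allow: /
```

In practice, organisations often pair directives like these with server-side agent access controls, since robots.txt alone cannot enforce the restriction.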

Meriam ElOuazzani, Regional Senior Director, Middle East, Turkey and Africa, SentinelOne
AI and advanced analytics are now changing how companies think about data. AI processes work best with large, varied datasets, which increases data value but also opens the door to bias, misuse, or unintentional exposure. Privacy-by-design and privacy-by-default should therefore be embedded from the very first step of an organisation’s data architecture. Explainability, constant supervision, and data governance are important aspects of modern privacy, helping businesses understand their data and how AI models make decisions from it. Advanced analytics can also recognise unusual activity, predict breaches, and automate compliance, yet the very same technology can increase risk if controls are weak. We need to build safe, clear, and accountable AI that enables innovation with credibility, ethics, and compliance.

Morey Haber, Chief Security Advisor at BeyondTrust
AI and advanced analytics have fundamentally reshaped data privacy strategies by increasing both capability and risk. Advances in technology now allow organisations to identify sensitive data faster, classify it more accurately, and detect potentially malicious access at scale, making privacy controls more dynamic, context-aware, and adaptable to the true intent of policy. Unfortunately, accomplishing these goals requires AI to consume vast amounts of data, and this information is often repurposed and not securely stored or processed beyond its original intent. That amplifies potential exposure and extends regulatory compliance to systems outside established scopes. As a result, data privacy has shifted from simple data protection to governing technology that may process sensitive information across an entire digital ecosystem.

Harun Baykal, Head of Cybersecurity Practice, Middle East and Africa at NTT DATA
Privacy is becoming less of a legal “tick-box” and more of a daily practice in how teams build products and decide what data they truly need. The biggest shift right now is how AI changes the privacy equation: the risk isn’t only leaks; it’s what models can infer and how quickly data gets reused across tools, prompts, and pipelines. That’s why strong privacy programmes now include clear rules for AI use and tighter control over training data. The best organisations don’t treat privacy as a brake on innovation. They keep it simple: collect less, keep it for less time, limit access, and prove the controls work. In 2026, “shadow AI” will be the trending topic, and employees using consumer tools or agents with access to sensitive data without oversight will be the chief concern. When teams take it seriously, privacy protects trust, which is hard to win back once it’s lost.

Gerald Beuchelt, CISO at Acronis
In 2026, a major privacy risk is the growing expansion of surveillance across the digital ecosystem, where personal and behavioural data is continuously collected, analysed, and correlated. The threat lies not in a single breach, but in ongoing exposure, profiling, and loss of control over how data is used. This risk is intensified by compromised identities, unpatched systems, and misconfigurations that make sensitive information widely accessible, while AI-driven automation accelerates data aggregation. Exposure also extends through partner ecosystems, as many organisations rely on service providers. Strong governance, disciplined operations, and privacy-aware service delivery are essential to protect data responsibly across the entire value chain.

Roy Horgan, Vice President, Privacy Officer at Qlik
AI is reshaping operations, making privacy essential for trust and effective outcomes. Systems must rely on well-governed, high-quality data, supported by privacy-by-design and impact assessments, particularly when personal information is involved. Such practices ensure ownership clarity, visibility, and controlled access. With AI becoming more autonomous, governance needs to be embedded from the outset rather than added later, improving scalability as AI agents act on behalf of teams. Privacy, transparency, and strong governance form the trust layer that enables dependable AI at scale. Without them, adoption slows, confidence erodes, and the value of AI remains limited.

Sergio Gago, Chief Technology Officer, Cloudera
Data Privacy Day highlights the growing priority of privacy for regional technology leaders. PwC’s Global Digital Trust Insights Middle East Findings 2025 shows 40% now rank data protection as a top investment, signalling that trust must sit at the core of innovation, not just compliance. With the rise of LLMs and AI agents, sensitive data is increasingly used in training and testing, raising the risk of exposure, even with good intentions. Synthetic data offers a solution by mimicking real datasets without using actual records, supporting model development and evaluation while reducing privacy risks. However, it requires disciplined engineering, clear purpose, and governance to avoid unintended disclosure and ensure secure innovation.

Mathieu Chevalier, Principal Security Architect at Genetec
Physical security data can be highly sensitive, and protecting it requires more than basic safeguards or vague assurances. Some approaches in the market treat data as an asset to be exploited or shared beyond its original purpose, creating real privacy risks. Organisations should expect clear limits on how their data is used, strong controls throughout its lifecycle, and technology that is designed to respect privacy by default, not as an afterthought.
