
Kevin Bocek, Senior Vice President of Innovation at CyberArk, explains why identity security will define governance, resilience and digital trust in 2026 as AI agents and automation reshape corporate decision-making in the Gulf and beyond.
Organisations across the Gulf are accelerating AI adoption and automation, making identity security the defining control plane for digital trust. Kevin Bocek, Senior Vice President of Innovation at CyberArk, spoke to Tahawultech.com about how AI agents, machine identities and board-level governance could reshape cybersecurity priorities by 2026. From the potential role of AI agents in corporate decision-making to the growing risk of runaway automation, Bocek explains why identity — both human and non-human — now sits at the core of resilience, accountability and fiduciary responsibility.
Bocek also outlines the priorities CISOs and boards must address today to secure machine identities at scale, as cloud, AI and autonomous systems transform the Gulf’s digital ecosystems.
Interview Excerpts:
You’ve predicted that shareholders may soon appoint AI agents to corporate boards — what signals make this shift realistic for 2026, and what governance risks does it introduce?
The prospect of AI agents joining corporate boards by 2026 is becoming realistic due to two factors: the rise of autonomous AI agents capable of reasoning and acting independently, and their deep integration into corporate data streams. These agents can already analyse complex financial and legal information, producing auditable insights faster than human teams. This creates opportunities for shareholders to use AI to drive more data-driven, transparent governance. However, AI board agents also introduce significant risks, particularly around legal accountability, fiduciary duty and data security. Questions remain over liability when AI-driven decisions cause losses, while granting agents access to highly sensitive board data increases insider risk and makes strong machine identity management and access controls essential.
What cybersecurity trends or threat patterns do you expect will define 2026 as AI-driven automation accelerates?
Machine identity threats will accelerate as automation expands, driven by the explosive growth of non-human credentials such as certificates and API keys. Organisations now manage 82 machine identities for every human, yet many remain poorly governed, with privileged access often overlooked. The issue will peak in 2026 when Microsoft, Google and Apple shorten TLS certificate lifespans, triggering widespread outages as mismanaged certificates expire and critical systems go offline. At the same time, AI will further expand the attack surface, particularly through the rise of “runaway” AI agents. Poorly secured agents, misconfigured identities or leaked API keys could enable a single rogue agent to spread rapidly across systems. The defining security challenge will be ensuring every AI agent has a unique, revocable identity, making identity governance the only true kill switch in an automated world.
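As certificate lifespans shrink, the practical first step is simply knowing which certificates are close to expiry. The sketch below is an illustrative helper (not a CyberArk tool): it parses the `notAfter` date format that Python's `ssl` module reports for peer certificates and flags hosts whose certificates expire within a threshold. The hostnames and dates are hypothetical.

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse a certificate notAfter string in the format Python's ssl
    module uses (e.g. 'Jun  1 12:00:00 2026 GMT') and return the number
    of whole days remaining (negative if already expired)."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

def flag_expiring(certs: dict[str, str], threshold_days: int = 30) -> list[str]:
    """Given a mapping of hostname -> notAfter string, return the hosts
    whose certificates expire within the threshold (or already have)."""
    return [host for host, na in certs.items()
            if days_until_expiry(na) <= threshold_days]
```

In practice this inventory step would run continuously against a central certificate store, feeding automated renewal rather than manual rotation.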
How will board expectations of CISOs evolve next year as identity-centric attacks and machine-led decision-making increase?
Boards will increasingly see the CISO as a strategic risk advisor, not just a compliance leader, responsible for safeguarding digital trust. As identity-centric attacks grow, security focus is shifting from the perimeter to protecting every human and machine identity. CISOs are already warning boards about unavoidable third-party risks, such as the TLS certificate mandates from Google, Apple and Microsoft, where a single failure can disrupt operations and damage brand value. The rise of AI agents further elevates governance expectations.
“CISOs will be held accountable for proving that machine and AI identities are properly governed under a zero-trust model, with secure access to all corporate secrets becoming a core board-level responsibility.”
What must Gulf organisations prioritise today to secure machine identities at scale as they expand AI, cloud, and automation initiatives?
Gulf organisations must prioritise securing machine identities, which underpin AI, cloud, and automation growth. This starts with strong secrets management to eliminate static credentials and automate the rotation of short-lived keys and tokens across multi-cloud and on-premises environments. A zero-trust approach is essential, assuming breach by default and granting access based on verified machine identity, context, and least privilege, with no standing access to critical systems. Centralised visibility and governance over all machine identities are also critical to maintain compliance, resilience, and security at the speed of modern automation.
How can the Gulf enable aggressive AI innovation while ensuring identity security remains uncompromised across digital ecosystems?
To accelerate AI innovation securely, Gulf organisations must embrace a machine identity-first strategy. Since AI models and automated systems authenticate entirely through non-human credentials, security must be built into the CI/CD pipeline. This involves implementing automated secrets management for all service accounts used by AI and ML workloads. This security automation will help ensure that aggressive digital expansion does not compromise identity security across AI, multi-cloud, and hybrid environments.
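One concrete piece of building secrets management into a pipeline is refusing to hardcode credentials at all: workloads fetch them at runtime and fail fast if none is injected. The sketch below assumes a hypothetical environment variable name; in a real pipeline the value would be injected by the CI system or a secrets manager rather than set by hand.

```python
import os

def load_service_secret(name: str) -> str:
    """Fetch a workload credential from the environment at runtime.

    Nothing is hardcoded in the source or the image; if the CI system
    or secrets manager has not injected the value, the workload refuses
    to start rather than running unauthenticated.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing secret {name}: refusing to start without a credential")
    return value
```

Failing fast at startup keeps a misconfigured AI workload from silently running with no identity, which is exactly the gap a rogue or unaccounted-for agent would exploit.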