
Security without accountability is the new risk, says Secure.com’s CEO and Co-founder

Uzair Gadit, CEO and Co-founder of Secure.com.

Uzair Gadit of Secure.com discusses AI-driven cyber threats, the urgent policy gap around autonomous systems, and why explainable, auditable execution must replace black-box automation.

AI has changed the balance of power in cybersecurity. Attackers are no longer constrained by manual effort or predictable tooling; they are leveraging machine speed, synthetic identities and generative models to exploit trust itself. For many organisations—particularly SMEs—defence has struggled to keep pace, often relying on fragmented tools, rule-based detection and automation that lacks clear governance.

Secure.com was built to address that imbalance. Designed as a portfolio of Digital Security Teammates mapped to core security functions—SOC, compliance, identity and exposure—the platform operates on a shared automation and integration backbone. Transparency and governance sit at its core. Every action runs within defined policies, separation-of-duties controls and approval gates. Every recommendation is explainable. Every decision is auditable, traceable and reversible—creating a “flight recorder” for security operations.

Rather than introducing opaque, black-box automation, the model focuses on governed execution with provable outcomes: faster response times, stronger identity hygiene, continuous evidence collection and security that can be trusted at scale.
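To make the “flight recorder” idea concrete, here is a minimal, hypothetical sketch of what an auditable action record could look like in Python. The field names and the hash-chaining approach are illustrative assumptions, not Secure.com’s actual schema:

    # Illustrative "flight recorder" entry for a governed security action.
    # Field names are hypothetical, not Secure.com's schema.
    import hashlib
    import json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class ActionRecord:
        actor: str        # which digital teammate or agent acted
        action: str       # what was executed
        policy_id: str    # the policy that authorised the action
        approved_by: str  # the human or approval gate that signed off
        rollback: str     # how the action can be reversed
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def append_record(log: list[dict], record: ActionRecord) -> str:
        """Append a record, hash-chained to the previous entry so that
        after-the-fact tampering is detectable."""
        prev = log[-1]["digest"] if log else ""
        entry = asdict(record)
        entry["digest"] = hashlib.sha256(
            (prev + json.dumps(entry, sort_keys=True)).encode()).hexdigest()
        log.append(entry)
        return entry["digest"]

    flight_log: list[dict] = []
    append_record(flight_log, ActionRecord(
        actor="soc-teammate", action="quarantine_host:srv-042",
        policy_id="IR-policy-7", approved_by="analyst@example.com",
        rollback="release_host:srv-042"))

Chaining each entry’s digest to the one before it makes after-the-fact tampering detectable, which is what turns an ordinary log into usable evidence.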

Uzair Gadit, CEO and Co-founder of Secure.com, shares his perspective on AI-driven threats, accountability gaps, GCC risk exposure and the guardrails required as autonomous systems become embedded in enterprise environments.

Interview Excerpts:

How do AI-driven attacks that mimic human behaviour change traditional approaches to cyber detection and defence?
AI-driven attacks that mimic human behaviour are breaking traditional detection models. Attackers now use impersonation, deepfakes, and AI-generated social engineering to blend seamlessly into trusted workflows, posing as executives, vendors, or colleagues across email, voice, video, and collaboration tools. These attacks don’t trigger typical indicators such as malware signatures or anomalous traffic; instead, they exploit trust, timing, and context.

As a result, perimeter- and rule-based defences increasingly fall short. Detection must shift from “what is happening” to “who is doing it, and why”. That means building behavioural baselines, identity-aware controls, and continuous verification of intent, not just credentials.
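As a rough illustration of that shift, the sketch below scores an event against an identity’s own behavioural baseline instead of a static rule. The features, history, and thresholds are invented for the example:

    # Hypothetical behavioural-baseline check: score an event against an
    # identity's own history rather than a fixed rule. Features and
    # thresholds are illustrative assumptions.
    from statistics import mean, pstdev

    def deviation(history: list[float], observed: float) -> float:
        """How many standard deviations the observed value sits from
        this identity's own baseline (a simple z-score)."""
        mu, sigma = mean(history), pstdev(history)
        return abs(observed - mu) / sigma if sigma else 0.0

    def verify_intent(event: dict, baselines: dict) -> str:
        # Combine several weak behavioural signals instead of one rule.
        risk = mean([
            deviation(baselines["login_hour"], event["login_hour"]),
            deviation(baselines["mb_out"], event["mb_out"]),
        ])
        if risk > 3.0:                     # illustrative thresholds
            return "block_and_escalate"
        if risk > 1.5:
            return "step_up_verification"  # e.g. force re-authentication
        return "allow"

    baselines = {"login_hour": [9, 10, 9, 11, 10],
                 "mb_out": [5.0, 6.1, 5.4, 5.9, 6.0]}
    print(verify_intent({"login_hour": 3, "mb_out": 48.0}, baselines))

The point is that a 3 a.m. login with an unusual data volume gets flagged for this identity even though nothing about it violates a global rule.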

At Secure.com, we’ve designed systems to correlate identity, asset context, behaviour, and risk in real time, allowing AI-driven actions to be governed, explainable, and reversible. With human-in-the-loop controls and deep observability, the platform detects impersonation and synthetic threats that look “legitimate” on the surface before they turn into business-impacting breaches.

“When attackers act human, defence must understand behaviour better than credentials alone ever could.”

When an AI agent triggers an exploit, who should be held accountable—and how urgent is the policy gap around AI responsibility?
The core issue is simple: the moment software can take action autonomously, governance becomes the product—and accountability can’t be outsourced to “the model.” 

When an AI agent triggers an exploit, the failure is rarely the model itself—it is a governance failure. AI must be treated as a high-risk operational system, subject to clear guardrails, approval thresholds, monitoring, and auditability. Organisations are expected to anticipate misuse, constrain autonomy, and prove that AI-driven actions were supervised, explainable, and reversible. Managing AI risk is now part of enterprise risk management, not an experimental side effort. 
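A minimal sketch, assuming an invented allow-list and risk threshold, of what such guardrails and approval gates can look like in code:

    # Illustrative guardrail: a proposed agent action must fall inside an
    # explicit allow-list, and anything above a risk threshold waits for a
    # named human approver. All names and values are hypothetical.
    ALLOWED_ACTIONS = {"isolate_host", "disable_account", "rotate_credential"}
    APPROVAL_THRESHOLD = 0.6  # above this, a human must sign off

    def audit(action: str, risk: float, approver: str | None) -> None:
        # Stand-in for the real audit trail: every decision is recorded.
        print(f"AUDIT action={action} risk={risk:.2f} approver={approver or 'auto'}")

    def govern(action: str, risk: float, approver: str | None = None) -> str:
        if action not in ALLOWED_ACTIONS:
            return "denied: outside the agent's permitted scope"
        if risk >= APPROVAL_THRESHOLD and approver is None:
            return "queued: human approval required"
        audit(action, risk, approver)
        return "executed"

    print(govern("disable_account", risk=0.8))                    # queued
    print(govern("disable_account", risk=0.8, approver="secops")) # executed

The design choice worth noting is that autonomy is constrained by construction: the agent cannot reach outside its allow-list, and high-risk actions cannot execute without an accountable approver on record.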

Responsibility ultimately rests with the organisation that designs, deploys, or fails to govern the AI agent. Under current U.S. law, AI has no legal personhood. Regulators will ask whether the risk was foreseeable and whether reasonable controls were in place. Simply saying “the AI acted on its own” won’t shield an organisation from liability.

In the GCC, there’s no unified AI liability framework yet, but enforcement is already happening under existing laws. The policy gap around AI-specific responsibility is urgent, and the window to close it is narrowing: guidance like the NIST AI Risk Management Framework and federal executive actions signal that fair, accountable use of AI is now an enforceable expectation, not a future aspiration.

Why are GCC organisations, especially SMEs, more exposed to AI- and LLM-driven cyber threats today?
SMEs across the GCC remain disproportionately exposed to AI- and LLM-driven cyber threats for several reasons. First, cyber risk still isn’t treated as a core business risk. IBM X-Force reports the Middle East accounted for 10% of global attacks in 2024, making it the fourth most-targeted region, and Kaspersky reported the Middle East had the highest rate of ransomware attacks in 2024, nearly twice the global average.

According to the IBM Cost of a Data Breach Report, the average breach now costs millions and increasingly leads to stalled deals, regulatory scrutiny, and long-term trust erosion—not just IT disruption. Yet many SMEs still view cyber incidents as technical issues, rather than revenue or survival risks. 

Second, security remains under-invested. Budgets prioritise growth and automation, while identity controls, monitoring, and response maturity lag behind—exactly what AI-driven impersonation and social engineering exploit. 

Third, baseline security practices are often missing. Asset inventories, access reviews, and incident playbooks appear only when demanded by customers or auditors.

Finally, enforcement is inconsistent. With limited AI-specific accountability laws across the region, many organisations act only when compliance is enforced—by which point AI-powered attacks have already moved faster than policy. 

Why do legacy security tools struggle to detect LLM-driven API abuse?
Legacy security tools struggle with LLM-driven API abuse because they were designed to detect technical anomalies, not the misuse of valid, authenticated behaviour. WAFs, SIEMs, and rate-limiters look for spikes, malformed requests, or known bad signatures. But LLM-driven abuse does the opposite: it generates valid, authenticated, business-logic-aware API calls that operate within expected limits and timing. As a result, the activity often looks entirely routine to legacy detection systems.

The failure isn’t speed or scale; it’s context blindness. Traditional tools observe requests in isolation. They don’t understand who is calling the API, what the asset represents to the business, why the sequence of actions matters, or what downstream impact those calls create when chained together. Abuse is therefore recognised only after damage occurs.

“Defending against machine-driven attacks requires machine-scale reasoning—systems that can continuously evaluate intent, correlate behaviour across identities and assets, and surface risk in real time, rather than relying on static rules or human review alone.” 
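One way to picture that kind of machine-scale reasoning: the hypothetical sketch below flags a chain of individually legitimate API calls whose sequence, rather than any single request, signals abuse. The endpoint names and risky chains are invented for illustration:

    # Hypothetical sequence-aware API monitoring: each call is valid and
    # authenticated on its own, but the chain reveals intent.
    from collections import defaultdict

    # Chains of individually legitimate calls that together suggest data
    # staging and exfiltration (illustrative).
    RISKY_CHAINS = [
        ("list_users", "export_report", "create_api_key"),
        ("search_records", "bulk_download", "delete_audit_log"),
    ]

    recent_calls: dict[str, list[str]] = defaultdict(list)

    def observe(identity: str, endpoint: str, window: int = 10) -> bool:
        """Record a call; flag the caller if its recent history contains
        a known risky chain, in order."""
        history = recent_calls[identity]
        history.append(endpoint)
        del history[:-window]  # keep only a sliding window of recent calls
        for chain in RISKY_CHAINS:
            it = iter(history)
            if all(step in it for step in chain):  # ordered subsequence match
                return True
        return False

    for call in ("list_users", "export_report", "create_api_key"):
        flagged = observe("svc-account-7", call)
    print("flagged:", flagged)  # True: the chain, not any one call, is the signal

A rate-limiter or WAF would pass each of these requests individually; only correlating the sequence against identity and business context exposes the risk.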

How critical are agent-level guardrails and observability as autonomous AI becomes embedded in enterprise systems?
As autonomous AI shifts from advisory roles to executing decisions inside enterprise systems, the risk profile changes significantly. These systems don’t just analyse data; they trigger actions, modify environments, and influence outcomes at machine speed. 

Without explicit guardrails, agentic AI introduces opaque decision-making, uncontrolled escalation of errors, and ambiguous accountability. Guardrails define what an agent is permitted to do, when it must escalate, and where human approval is mandatory—effectively serving as the operating boundaries for digital workers. 

Equally critical is observability. Enterprises must be able to reconstruct why an agent acted, what data and policies informed the decision, and who approved or accepted the risk. This level of traceability is essential for audits, regulatory compliance, and internal trust. In an agentic future, governance is not a layer added later; it must be embedded by design.
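As a simple illustration of that traceability, the sketch below reconstructs “why did the agent act?” from structured decision events. The event fields and trace IDs are hypothetical, not any vendor’s schema:

    # Illustrative decision trace: every autonomous action leaves a chain
    # of structured events that can be replayed for audits and reviews.
    decision_events = [
        {"trace_id": "t-91", "step": "input",
         "detail": "alert: impossible travel for user j.doe"},
        {"trace_id": "t-91", "step": "policy",
         "detail": "matched IAM-policy-12 (suspend on identity risk > 0.7)"},
        {"trace_id": "t-91", "step": "approval",
         "detail": "auto-approved: within delegated authority"},
        {"trace_id": "t-91", "step": "action",
         "detail": "suspended session, forced re-authentication"},
    ]

    def reconstruct(trace_id: str, events: list[dict]) -> str:
        """Answer 'why did the agent act?' by replaying its decision chain."""
        chain = [e for e in events if e["trace_id"] == trace_id]
        return "\n".join(f"{e['step']:>9}: {e['detail']}" for e in chain)

    print(reconstruct("t-91", decision_events))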
