Industry voices highlight why trust, governance, and smarter identity protection will determine the future of digital safety.
Digital lives are increasingly shaped by AI, automation, and always-on connectivity, making Safer Internet Day a timely reminder that online safety is no longer just a technical challenge — it is a shared responsibility. From organisations deploying intelligent systems to individuals navigating everyday digital interactions, the choices made online now have far-reaching implications for privacy, trust, and security.
Cybercriminals continue to evolve at pace, using automation and generative AI to exploit human behaviour rather than technical vulnerabilities. That reality puts a premium on awareness, identity protection, and informed decision-making, particularly as children, employees, and businesses rely ever more heavily on digital platforms for learning, work, and communication.
Industry voices:
Chris Cochran, Field CISO & Vice President of AI Security at SANS
“AI is incredible. It’s one of the biggest force multipliers businesses have seen in decades, and it’s already changing how teams operate, create, and compete.
But like any powerful technology, using AI well starts with understanding where the risk actually lives.
The first step is visibility. Organisations need to understand where AI is being used across the business, not just the tools leadership approved, but the AI showing up in workflows, plugins, agents, and third-party platforms. Maintaining an AI inventory is quickly becoming table stakes.
From there, third-party risk matters. An AI Bill of Materials (AIBOM) helps organisations understand what models, data sources, and dependencies sit under the hood, and where external risk is being introduced.

As AI agents become more autonomous, we also need to change how we secure them. Agents should be treated like operators on the network, not traditional service accounts. That means giving agents an identity, even if it’s short-lived. In fact, ephemeral authentication and authorisation is often preferred. Technologies like SPIFFE can help enable this kind of machine identity at scale.
Zero Trust principles still apply. Authenticate explicitly. Grant least privilege. Assume breach. Monitor agent behaviour continuously and use segmentation so that if something does go wrong, the blast radius is limited.
Finally, keep identity and access graphs top of mind. Understand what your people can access, what your agents can access, and where those paths intersect. Most AI-related data exposure doesn’t come from the model itself, but from overly broad permissions and invisible access paths.
AI can absolutely be a game-changer for businesses. The organisations that win will be the ones that pair innovation with discipline and build safety into how AI actually operates day to day.”

Ezzeldin Hussein, Regional Senior Director, Solution Engineering, META, SentinelOne
“The theme ‘Smart tech, safe choices’ reminds us that online safety ultimately comes down to the decisions we make. As children and teens use AI tools more frequently, the choices they make determine their safety more than any filter or setting. AI can support creativity and learning, but it can also subtly influence behaviours, opinions, and trust. That is where digital safety education comes in: children should question what they see, think twice before sharing, learn to identify misinformation, and protect their personal information. When we equip them with knowledge and cyber awareness, they use technology ethically. Making smart choices, for both humans and machines, is the only way to stay safe in an AI-driven future.”
John Shier, Field CISO, Threat Intelligence at Sophos
“Attackers are using automation and generative AI to massively increase the speed and volume of their attacks, and those attacks will only become faster and more sophisticated. The best way to protect our identities and digital data is to take a proactive stance on defence.”
“Criminals are increasingly targeting people rather than devices, and this trend is expected to continue and even accelerate. Once again, AI is being used as a weapon to create highly detailed phishing lures to entice people to disclose passwords or financial information through well-designed emails, text messages, and WhatsApp messages.”
On Safer Internet Day (February 10), a day dedicated to raising awareness of safe digital usage, Sophos, a global leader in innovative security solutions for defeating cyberattacks, shares its advice for internet users on keeping their credentials protected.

According to the upcoming Sophos Active Adversary Report, compromised credentials were the leading cause of attacks (42.06%) in 2025, a trend that continues to dominate the threat landscape as attackers show ever-increasing ingenuity and adopt new tools to compromise the security and privacy of internet users.
Many websites offer the option of using an “authenticator app”, a smartphone app that displays a unique code valid only for a short period, entered after the password. This is far more secure than a password alone.
Better still, there is a newer option called “passkeys”, which typically use biometric authentication on your smartphone (face scan or fingerprint) to log in without a password. This is the best choice when available.

Salman Kazmi, Area Vice-President, META, BMC Helix
As AI becomes increasingly integrated into business operations, the conversation needs to shift from experimentation to responsibility. Smart technology is only useful when it is implemented with clear boundaries for data, security, and governance. At BMC Helix, we believe responsible AI starts with transparency, human oversight, and securely designed architectures.
Businesses require trustworthy AI, not just more of it. This means safeguarding sensitive data, coordinating automation with business objectives, and making sure AI enhances human accountability rather than replacing it. Our goal is to provide scalable, compliant, and enterprise-specific agentic and generative AI capabilities.
The future of AI will be determined by how safely, ethically, and effectively it is adopted, not by how quickly it is implemented.

Ahmad Shakora, Group Vice President – META, Cloudera Middle East
Safer Internet Day 2026 underscores how the focus has shifted from password hygiene to responsible AI governance. Cloudera’s Enterprise AI Survey reveals that 59% of IT leaders in the UAE have integrated AI into core processes, yet only 16% say it is fully embedded — highlighting a clear maturity gap.
As organisations increasingly operate as data-driven businesses, expanded AI usage also widens the attack surface. A single breach today does more than trigger penalties; it erodes brand trust. Reactive security models are no longer enough in environments where AI continuously processes data across cloud, on-premises, and edge systems.
Private AI offers a path forward. By keeping model inputs and outputs within the enterprise environment — bringing compute to the data — organisations can innovate without compromising compliance.
Visibility remains critical. Enterprises cannot govern what they cannot see, making unified data platforms and clear lineage essential. Ultimately, security must be embedded by design. The organisations that succeed will treat AI as private, governed, and secure from the outset.
