
In 2025, an estimated 181 zettabytes of data were generated worldwide. Some estimates suggest that by the end of 2025, 90% of all online content was AI-generated. The exact figures may be debated, but the trend is unmistakable: our information ecosystem is being rapidly saturated with synthetic content. Worryingly, synthetic content is both increasingly indistinguishable from reality and exponentially more prevalent, making it harder to separate fact from fiction at the necessary scale and speed.
It has never been easier or cheaper to manufacture misinformation. A fabricated mass-casualty car accident or an office-building fire in a major city, illustrated with AI-generated ‘photos’ and ‘videos’, can be posted online in minutes. Within moments, it can be reshared thousands of times. This matters because social media is now the primary news source for many. In the Middle East, more than half of consumers use social media as their primary news source. According to the Reuters Institute, 54% of people in the United States now use social media as their main source of news, ahead of traditional news channels. Similar trends are emerging in the UK and beyond. More people are also turning to AI chatbots for news, search and clarification, with usage among those under 25 nearly double that of the general population.
False content tends to be more dramatic, emotional, and clickable, which makes it travel faster than verified information. Research shows that people are more likely to share falsehoods than facts, driven by low digital literacy, the lure of going ‘viral’, and the pull of personal or political biases.
Meanwhile, the makeup of the workforce is changing. Research conducted in several western countries shows that approximately 80% of Gen Z workers aged 18 to 21 report using AI tools to complete over half of their tasks. As younger generations enter the workplace, information habits and expectations will shift dramatically. Organisations will need to adapt, not only to manage productivity and ethical use, but to ensure employees understand the risks posed by AI, including misinformation. Mandatory training and an agile, information-focused approach to crisis management will be essential for organisations to stay prepared.
Digital literacy
As artificial intelligence quietly weaves itself into daily life, most people barely realise how much they already rely on it. Gen Z are among the most avid users of social media and generative AI tools. Globally, AI adoption is uneven: some countries, industries and organisations are far slower to adopt new technologies and tools. This unevenness leaves gaps in understanding and oversight, and limits the ability of governments and organisations to respond rapidly to changes in AI’s capabilities and to implement rules and guidelines against the threats they may pose.
In short, technological advancement is rapidly outpacing both user understanding and governance. Organisations and governments that lag in adoption face another challenge: they may grow dependent on tools they don’t fully understand and cannot properly monitor or secure, leaving the door open to misuse. Furthermore, where digital literacy is low and AI literacy is even lower, individuals are more susceptible to believing and acting on fabricated or distorted content.
The impact on organisations of a workforce with limited digital and AI literacy can be harmful and costly, ranging from reputational risks (for example, to the organisation’s image, or that of its employees and management) to slower, less accurate decision-making and a degraded workplace culture.
During crises, misinformation can spread faster than, and ahead of, verified updates. If organisations cannot quickly access verified, accurate information about the crisis and rapidly communicate it to their employees, they risk making poor or belated decisions and mounting a delayed crisis response, among many other risks.
Advice
For organisations, this trend cannot be dismissed as temporary, episodic or irrelevant, whatever their industry or geography. Instead, it demands sustained investment in crisis management planning, with particular emphasis on monitoring and collecting information relevant to the organisation (as well as its employees, locations of operation and so on), verification processes, and clear communication protocols. Organisations should actively monitor developments in AI, regularly review and update internal policies on its use, and equip their people with the knowledge and training needed to use and critically assess AI tools responsibly. Employee training should include courses aimed at improving digital and AI literacy, and basic skills for identifying mis- and disinformation. Only by treating this as a long-term strategic priority can organisations remain resilient in an information environment increasingly shaped by synthetic content.
What can you do?
Establish plans to manage misinformation and disinformation, including updating the workforce regularly on unfolding developments. Closely monitor the spread of viral messages promoting violence, propaganda or conspiracy theories targeting specific demographics, and assess whether this raises additional risks to the workforce depending on their specific profiles, e.g. ethnicity, nationality, travel history and religion.
Evaluate the short-, medium- and long-term impacts of these threats in the prevailing security environment in locations of operations. This includes assessing how disinformation and misinformation could amplify new and existing security issues and impact your organisation’s ability to operate effectively.
Develop a strong and reliable network of contacts on the ground who can bridge information gaps. Consider using trusted independent experts, and compiling lists of trustworthy news sources or credible social media accounts to support information-gathering.
Conduct regular business-continuity and crisis-management exercises in which misinformation and disinformation play a role in the crisis scenario.
Establish clear communication channels for employees to report suspicious information. Ensure there is a process for verifying and addressing these reports promptly.
This opinion piece is authored by Gulnaz Ukassova, Security Director, Information & Analysis, International SOS and Baani Gambhir, Lead Security Analyst, Threat Monitoring, International SOS.
