IT security seems to revolve around a calendar of events, marked by milestones such as Patch Tuesday, Cybersecurity Awareness Month, and staple conferences such as Black Hat and RSA. But we all know that the state of cybersecurity will only improve once we move from an event-driven, reactive security programme to one marked by processes and systems that reflect the persistent nature of the threat we face. Our adversaries operate from safe havens on the Internet and take advantage of automated tools and a growing cybercrime economy. Our calendar is irrelevant to them.
Awareness of the need for ‘continuous security’ has been growing due to the recent sharp increase in the frequency and cost of attacks. The UAE’s National Electronic Security Authority has declared cybersecurity one of the biggest economic and national security challenges facing countries in the 21st century. One of the biggest attacks to hit the region was the 2012 Shamoon malware attack, which cost Saudi Aramco $15 million in losses.
Hackers are becoming increasingly sophisticated, with common cyber-attacks now including distributed denial of service (DDoS) attacks, phishing and spear-phishing emails, data theft, ‘zero-day’ software assaults, web application exploits, and website defacement.
So how did we get here? In part because we have been slow to evolve out of the ‘batch mentality’ that has historically dominated security practice, which has kept us focused on cycles measured in months. But thanks to newer consumer platforms such as the iPhone and Android, continuous update cycles have become familiar, and advocates of faster cycles of monitoring and remediation are gaining attention.
Of course, the issue has not been a lack of desire, but the real constraints of time, people and money. Using traditional security appliances, we would exhaust our staff’s time long before we came close to complete coverage. These tools were architected in the era of the network as a walled castle, and simply do not scale to a dynamic world of shadow IT, cloud applications and mobile devices.
But these obstacles are being overcome. First, new services deliver security from the cloud, which reduces the reliance on specialised equipment and centralises management to a single point, allowing global implementations with a minimum of setup. The cloud also allows you to probe from the outside, just like an attacker, and to discover and evaluate assets outside the corporate firewall, including those provisioned without your knowledge.
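To make the idea concrete, here is a minimal sketch of the discovery step: comparing what an external scan can see against the official asset inventory to flag systems provisioned without IT’s knowledge. All hostnames and the function name are illustrative assumptions, not part of any real product.

```python
# Hypothetical sketch: flag externally visible assets that have no matching
# record in the official inventory ("shadow IT"). Hostnames are invented.

def find_shadow_assets(discovered, inventory):
    """Return discovered assets absent from the known inventory."""
    known = {host.lower() for host in inventory}
    return sorted(h for h in discovered if h.lower() not in known)

# An external scan (run from the cloud, as an attacker would) might return:
discovered = ["crm.example.com", "test-db.example.com", "www.example.com"]
inventory = ["www.example.com", "crm.example.com"]

print(find_shadow_assets(discovered, inventory))  # → ['test-db.example.com']
```

In practice the discovered list would come from continuous external scanning rather than a hard-coded sample, but the reconciliation logic is the same.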
Second, rules-based automation is being incorporated into an increasing number of products and services. This shift to real-time alerts rather than batch reports is critical if we are to avoid drowning in the data created by pervasive and frequent security monitoring.
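The rules-based approach can be sketched in a few lines: each rule is a predicate evaluated against every incoming event, and a match raises an alert immediately instead of waiting for a batch report. The rule names and event fields below are invented for illustration.

```python
# Minimal sketch of rules-based automation. Each rule pairs a name with a
# predicate over an event dict; matching events trigger real-time alerts.
# Rule names and event fields are illustrative assumptions.

RULES = [
    ("repeated-login-failure",
     lambda e: e.get("type") == "login_failure" and e.get("count", 0) >= 5),
    ("critical-vulnerability",
     lambda e: e.get("type") == "vuln" and e.get("severity") == "critical"),
]

def evaluate(event):
    """Return the names of all rules the event triggers."""
    return [name for name, predicate in RULES if predicate(event)]

print(evaluate({"type": "login_failure", "count": 7}))
# → ['repeated-login-failure']
```

The point of the design is that evaluation happens per event, as the data arrives, so the alert reaches an analyst while the activity is still in progress.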
The third key trend derives from the first two: analytics-based intelligence. The pervasive, frequent scanning enabled by the cloud, together with real-time alerting, gives us a treasure trove of data to which we can apply sophisticated statistical techniques. Information about assets, vulnerabilities and threats can be aggregated anonymously across a customer base and correlated to identify patterns that indicate trouble spots more quickly, so that we can prioritise action in a more automated and intelligent way. In the same way that we use crowd-sourced ‘Big Data’ to predict commute traffic, we can use the law of large numbers to get a step ahead of the cybercriminals.
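One simple statistical technique of the kind described above is outlier detection: aggregate vulnerability counts across asset classes (or, anonymised, across customers) and flag any group that sits far above the population mean. The figures, category names and threshold below are invented for illustration.

```python
# Sketch of analytics-based prioritisation: flag asset classes whose open
# vulnerability count is well above the mean. All data here is invented.
from statistics import mean, stdev

def outliers(counts, threshold=1.5):
    """Return keys whose value exceeds the mean by `threshold` std deviations."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return sorted(k for k, v in counts.items()
                  if sigma and (v - mu) / sigma > threshold)

open_vulns = {"web-servers": 12, "mail": 9, "vpn": 11, "legacy-erp": 85, "dns": 10}
print(outliers(open_vulns))  # → ['legacy-erp']
```

A real system would use richer models than a z-score, but the principle is the same: let the aggregate data, not the calendar, decide where remediation effort goes first.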
So while reactive security has been necessary in a world marked by insufficient resources and antiquated equipment, new tools and techniques allow us to improve our visibility and responsiveness to threats. I encourage companies to consider the steps that they might take to move to a more continuous model of security in the near future.