As the world swoons over the promise and practice of AI and machine learning, the world of cybersecurity sees both the hero and the villain take part in the game.
For years now, artificial intelligence (AI) has piqued the curiosity of many in the world of science and technology. From Hollywood to academia, the concepts of AI and machine learning have generated both hype and intrigue, leaving the world divided over the potential implications and benefits of the true power of self-learning machines and technology.
In the world of cybersecurity, vendors are making a significant push towards AI and machine learning as the newest and most effective way to detect the latest threats and stay ahead of an ever-evolving cyberthreat landscape. The question is, is this all new?
The practice of building algorithms to differentiate bad computer behaviour from good has long been the foundation of effective cybersecurity software. From email spam filters to anti-virus solutions, AI and machine learning have been part of cybersecurity for well over a decade. But we have only just scratched the surface.
Initial applications of AI have been systems that use machine learning, deep learning, and analytics to recognise patterns and classify threats and malware. These systems have been used effectively to profile the attack vectors that could be used to breach an organisation, baseline what 'normal' looks like within that organisation, and enable rapid anomaly detection.
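To make the baselining idea concrete, here is a minimal sketch of statistical anomaly detection: learn a baseline from historical observations, then flag anything that deviates too far from it. The login-count data and the z-score threshold are illustrative assumptions, not taken from any particular product.

```python
from statistics import mean, stdev

# Hypothetical baseline: daily login counts observed during normal operation
baseline = [102, 98, 110, 95, 104, 99, 107, 101, 96, 105]

def is_anomaly(observation, history, threshold=3.0):
    """Flag an observation whose z-score against the baseline exceeds the threshold."""
    mu = mean(history)
    sigma = stdev(history)
    z = abs(observation - mu) / sigma
    return z > threshold

print(is_anomaly(103, baseline))  # a typical day -> False
print(is_anomaly(450, baseline))  # e.g. a credential-stuffing burst -> True
```

Real systems model many signals at once and learn baselines continuously, but the core idea is the same: anomalies are defined relative to what is normal for that environment.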
On the other hand, AI has given cybercrime new momentum. Hackers can use AI tools to find new vulnerabilities in an organisation's network, create an exploit, and attack in a fraction of the time it once took. We have seen AI integrated into many of the hacking tools sold widely on the black market, offering what is essentially a criminal franchise in a box. These toolkits let almost anyone deploy any kind of attack, because they come pre-engineered with all the analytics and information. So AI and machine learning not only make it easier to launch an attack; they also make it easier to uncover new opportunities and vulnerabilities, almost creating new avenues of attack.
Additionally, if your current tools and tactics are already reaping rewards, why invest in AI? It all boils down to return on investment. A cybercriminal who spends months building a malware toolkit wants to know how long it will remain usable and how effective it will be. This means that on the attacker's side, a significant portion of AI investment will go into driving targeted attacks. With cybercriminals having access to the same technologies as the vendors, the real game changer for organisations lies in their ability to effectively investigate and qualify threats so they can mount a targeted response to an attack or breach.
This is the stage where fuzzy logic and the probabilistic nature of AI can really help. How do you anticipate attacks you have never seen before? Answering that requires the application of knowledge, a role traditionally filled by specialists who read up or talked to their peers to build profiles by association. Today, because AI is still in its infancy, its recognition capabilities tend to produce significant numbers of false positives. To overcome this, the AI solutions of the future will have to learn the context in which they operate and assess their confidence in the results they generate.
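The false positive problem is partly a base-rate effect, which a quick Bayes' rule calculation makes concrete. Even a detector that is right 99% of the time produces mostly false alarms when genuine attacks are rare. The accuracy figures and attack prevalence below are illustrative assumptions, not measurements:

```python
def precision(tpr, fpr, prevalence):
    """P(actual attack | alert) via Bayes' rule."""
    true_alerts = tpr * prevalence          # attacks correctly flagged
    false_alerts = fpr * (1 - prevalence)   # benign events wrongly flagged
    return true_alerts / (true_alerts + false_alerts)

# A detector catching 99% of attacks with a 1% false positive rate,
# applied to traffic where only 1 in 10,000 events is malicious:
p = precision(tpr=0.99, fpr=0.01, prevalence=0.0001)
print(f"{p:.2%}")  # -> 0.98%: fewer than 1 in 100 alerts is a real attack
```

This is why contextual confidence matters: without it, analysts drown in alerts that are overwhelmingly benign.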
For cybersecurity professionals, the scenario goes beyond the evolution of the cyberthreat landscape: the attack surface will only continue to grow as enterprises across the world embrace new technologies like cloud computing and mobility to enable an era of digital transformation.
For instance, the economics of cybersecurity today are shifting significantly towards the offensive. Tactics and toolkits are shared far more widely, so the technical knowledge and capability that malicious actors require is shrinking, and the barrier to entry is dropping sharply. On the defensive side, the surface area is rapidly expanding: to facilitate business, we have to constantly find new ways of working and collaborating, enabling new data movement while protecting legacy assets. So the skills required of a security analyst are increasing.
In fact, according to the latest forecasts from Cybersecurity Ventures, there will be 3.5 million unfilled cybersecurity jobs globally by 2021, indicating an immediate need for organisations to address an increasingly evident skills challenge to stay ahead of cybercriminals.
AI can only be as clever as the information it is given to learn from. Today, the data we use for threat detection and investigation is largely siloed, and qualification is built by association. Security is not a one-size-fits-all solution: what is normal behaviour for a retail bank would not be normal for an insurance organisation. Only once you understand what is normal for your environment, and what the baseline looks like, can you identify an anomaly and react to it in a timely manner. All of that intelligence needs to be collated from various sources and fed into the system so it can do its job effectively.
While it may be impossible for a human being to sift through all the data and manually monitor and respond to tens of thousands of daily threats, AI and machine learning cannot replace the power of the human mind. For them to succeed in cybersecurity, organisations still need rapid adaptability and constant learning to build and maintain an army of skilled specialists who can react to and remediate those threats.
With enterprises facing a double-edged sword driven by an increasingly sophisticated adversary and a rapidly expanding attack surface, the ability to harness the true potential of AI and machine learning couldn't come soon enough.
AI can help enterprises do more with less, letting security specialists focus their efforts on proactively responding to and remediating threats instead of spending time monitoring new signatures and threat activity. To get there, organisations need to scale their operations across all three stages of the security lifecycle. AI can help them respond quickly and make more accurate decisions with greater context: understanding what the risk is and its scope, responding to it effectively, and acting fast. Over the next 10 years, we will get significantly better at using AI to tap into data and learn from it more effectively, which should improve the equilibrium between offence and defence.