The frenetic pace at which artificial intelligence (AI) has advanced in the past few years has begun to have transformative effects across a wide variety of fields. Coupled with an increasingly interconnected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has turned its eye to AI and machine learning (ML) in order to detect and defend against adversaries. The use of AI in cybersecurity not only expands the scope of what a single security expert is able to monitor, but, importantly, it also enables the discovery of attacks that would have otherwise been undetectable by a human.
So, is it time to hand responsibility for security operations over to the machines? In February 2014, Martin Wolf wrote a piece for the Financial Times titled Enslave the robots and free the poor. He began with the following quote:
“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”
The fundamental tension Wolf points out between labour and automation has always existed. It not only makes up a large portion of the academic literature on political economy (think Karl Marx and Adam Smith), but has also ignited many of the world’s labour struggles. The Luddites, for example, were a movement of textile workers and weavers who opposed the mechanisation of factories ― not, as they are famously depicted, because they were opposed to machines in principle, but because it led to the exploitation of workers, who were not allowed to share in the profits generated by the increased productivity that automation brought.
Our current era of technological expansion has given birth to a variety of new tensions resulting from AI and machine learning. Yet the most pressing question remains: How should we organise our economy, and, more broadly, our society, in a world where large swathes of human labour are beginning to be automated away? Put more simply: how will people live if they can’t get jobs because they have been replaced by cost-effective, better-performing machines?
In just the last few years, numerous studies have been published, and institutes inaugurated, dedicated to studying which jobs of the future will remain in the hands of humans and which will be doled out to the machines. For example, the 2013 Oxford report on The Future of Employment attempted to describe which categories of jobs would be safe from automation and which are at greatest risk of it. The study went much further than that and attempted to place probabilities on how “computerisable” various jobs are. The Oxford study, as well as many subsequent ones, generally argues that creative jobs, like those of artists and musicians, are less likely to be automated. Yet we live in a world where the first AI-generated painting to come to auction was sold at Christie’s for over $400,000, and The Verge has published an article about how AI-Generated Music is Changing the Way Hits are Made.
So, coming back to cybersecurity ― just what does the AI hype mean for the industry? While there are no clear-cut rules for which types of cognitive and manual-labour jobs will be replaced, what we can say is that the recent application of advanced AI and machine learning techniques in the field of cybersecurity is highly unlikely to put security analysts out of work.
Understanding why requires an appreciation for the complexity of cybersecurity and the current state of AI. Advanced attackers constantly develop novel methods of attacking networks and computer systems. Moreover, these networks and the devices connected to them are constantly evolving: they run new and updated software all the time, and new types of hardware are added as technology progresses.
The current state of AI, on the other hand ― while advanced ― performs a lot like the human perceptual system. AI methods can process and recognise patterns in streams of incoming data, much like the human eye processes incoming visual input and the ear processes incoming acoustic input. However, they aren’t yet capable of representing the full breadth of knowledge an experienced system administrator has ― neither about the networks they are administering, nor about the complex web of laws, corporate guidelines, and best practices that governs how best to respond to an attack. Simply put, AI systems will never have a full understanding of the “context”.
The development of the calculator did not reduce the need for people to understand mathematics; instead, it greatly expanded the scope and possibilities of what could be computed ― and, consequently, the need for people with mathematical understanding to explore those possibilities. Similarly, AI is just a tool that expands the scope and possibilities of detecting attacks that would otherwise have gone unnoticed. Don’t believe me? Just try looking at a high-frequency, multi-dimensional time-series of encrypted traffic and determining whether that traffic is an attack or benign.
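To make the point concrete, here is a minimal, purely illustrative sketch of the kind of statistical pattern-matching such a tool performs ― not a real detection system. It assumes a single hypothetical feature (bytes per second of a traffic flow) and flags samples that deviate sharply from a trailing baseline; production systems work over many more dimensions, but the principle of machines scanning for deviations humans would miss is the same.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=20, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the trailing window's baseline."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic bytes-per-second series: steady traffic with one burst
# standing in for, say, a data-exfiltration spike.
traffic = [100.0 + (i % 5) for i in range(50)]
traffic[40] = 5000.0  # the simulated spike
print(flag_anomalies(traffic))  # → [40]
```

A human cannot eyeball thousands of such series per second; a machine can run this comparison continuously, which is exactly the kind of scale advantage the analogy with the calculator is pointing at.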
For the foreseeable future, AI will simply remain a tool in a defender’s pocket, making it possible to detect, and therefore respond to, ever-evolving advanced attacks at speeds and scales hitherto practically unattainable.