Technology alone is not enough to keep an organisation cyber secure. Business leaders should also consider the human element, as even the most tech-savvy professional can fall victim to a social engineering attack.
The Stuxnet worm, which infected multiple Iranian computer systems from 2010 onwards, is thought to have arrived at its key target – a nuclear plant – on an infected USB device.
A 500KB digital spanner in the works, Stuxnet caused scores of centrifuges at the Natanz facility to run abnormally and to fail, hampering nuclear-enrichment efforts.
So significant was the attack that it inspired a book, Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon.
Although the book calls Stuxnet the world’s first digital weapon, no one would describe it as the last.
It is part of the wider phenomenon of social engineering, a type of information security attack that takes advantage of a mistake by an individual to circumvent security safeguards.
While some attacks, like Stuxnet, find their way in through an infectious USB stick, others begin when an employee opens unsolicited emails containing links or downloads that insert malware.
The scale of the problems social engineering causes is daunting. Reports citing the United States’ Federal Bureau of Investigation indicate that globally between October 2013 and December 2016 there were more than 40,000 email account compromises (when a legitimate email account is taken over in order to send messages) and business email compromises (when attackers use an identity familiar to the victim to get data or money). These 40,000-plus attacks cost businesses more than $5 billion.
So, although computer systems and threats evolve, social engineering, a malign presence since the 1990s (and dating back centuries if non-computer forms are included), shows no signs of disappearing.
“As the platforms change, people don’t,” says John Clark, professor of computer and information security at The University of Sheffield in the United Kingdom.
According to Dr Markus Jakobsson, chief scientist of Agari, a California-headquartered email security provider, today “more and more attacks are targeted”.
“The reason for that is simple: these attacks are much more successful i.e. result in a higher yield,” he says.
Among such targeted attacks, Jakobsson says identity deception is always part of the strategy used by attackers. But the form that such deception takes has changed “rather dramatically” recently.
“Just about a year ago, 48 per cent of all targeted attacks used spoofing, which is when an attacker inserts fake mail in a corrupted mail server. These emails look perfect to the recipient,” he explains.
By scrutinising headers, security systems can detect this type of abuse using an open standard called Domain-based Message Authentication, Reporting and Conformance (DMARC).
“Because the roll-out of DMARC has been so successful, attackers have abandoned this method in droves, with only six per cent of all targeted attacks using spoofing these days,” says Jakobsson.
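In outline, a DMARC-aware receiving server fetches a TXT record from DNS (at `_dmarc.<domain>`) and applies the policy it requests to messages that fail authentication. The sketch below is a deliberate simplification (real DMARC also involves identifier alignment, subdomain policies and sampling percentages), using a hypothetical record for `example.com`:

```python
# Illustrative sketch of how a receiving mail server might interpret
# a DMARC policy record. Simplified: real DMARC evaluation also checks
# SPF/DKIM identifier alignment, the "sp" and "pct" tags, and more.
def parse_dmarc_record(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def disposition(record: str, spf_pass: bool, dkim_pass: bool) -> str:
    """Return the action the policy requests for a given message."""
    tags = parse_dmarc_record(record)
    if tags.get("v") != "DMARC1":
        return "none"  # not a valid DMARC record; no policy applies
    if spf_pass or dkim_pass:
        return "deliver"  # at least one authentication mechanism passed
    return tags.get("p", "none")  # requested action: none/quarantine/reject

# Hypothetical policy for example.com asking receivers to reject failures
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print(disposition(record, spf_pass=False, dkim_pass=False))  # reject
```

A spoofed message that fails both SPF and DKIM against a `p=reject` policy never reaches the inbox, which is why, as Jakobsson notes, attackers have largely abandoned spoofing.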
Instead of spoofing, Jakobsson says the new favourite method among attackers is “deceptive display names”.
Such attacks involve an attacker creating a free webmail account, on Gmail or Hotmail for example, with a deceptive display name, such as that of someone the recipient knows. Just over four-fifths of targeted attacks today use this method, according to Jakobsson.
While DMARC is not effective against such attacks, security controls produced by Agari and its competitors can detect deceptive display names.
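One simple signal such controls can use is a mismatch between a message's display name and its underlying address: the display name matches a trusted contact, but the address does not. The sketch below assumes a small, hypothetical directory of known contacts; a commercial product would draw on far richer behavioural signals:

```python
from email.utils import parseaddr

# Hypothetical directory mapping trusted display names to their real
# addresses. In practice this would be built from the organisation's
# own mail history rather than hard-coded.
KNOWN_CONTACTS = {"jane doe": "jane.doe@example.com"}

def looks_deceptive(from_header: str) -> bool:
    """Flag a From: header whose display name matches a trusted
    contact while the underlying address does not."""
    display, address = parseaddr(from_header)
    expected = KNOWN_CONTACTS.get(display.strip().lower())
    if expected is None:
        return False  # display name isn't impersonating a known contact
    return address.lower() != expected

# A free webmail account impersonating a known colleague is flagged;
# the genuine address is not.
print(looks_deceptive('"Jane Doe" <jane.doe.ceo@gmail.com>'))  # True
print(looks_deceptive('"Jane Doe" <jane.doe@example.com>'))    # False
```

Because the attacker's address is real (just newly registered), DMARC sees nothing wrong; the deception lives entirely in the human-readable name, which is why detection has to compare it against what the recipient would expect.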
“This, of course, will put pressure on the attackers, and the $64,000 question is, of course, ‘Where will they go next?’” adds Jakobsson.
Another trend has been the growth in account take-over attacks (ATOs), which have increased in frequency by 300 per cent in recent months.
“In these attacks, a user’s account is taken over – most commonly, phished – and then used by the criminals,” explains Jakobsson.
“Cunning criminals use these corrupted accounts to target people [who] the ‘launchpad’ user knows, which they can tell based on the email history and the contact list of the corrupted account.
“An email from a corrupted account, of course, is terrifying: traditional security controls have no chance against these attacks and most users do not realise it either, especially when the social engineering part of this last step of the attacks is smooth and convincing.”
So, with security controls sometimes powerless, institutions must rely on the alertness of their employees to prevent attacks. But, according to researchers, one reason organisations still fall victim as often as they do is that they are not spending enough time and money training staff to be aware of such attacks.
“Like many things around security, publicity and recognition of the problem doesn’t necessarily lead to action,” says Steve Furnell, a professor of information security at Plymouth University in the United Kingdom and editor-in-chief of the journal Information and Computer Security.
“The most prevalent form of social engineering is phishing, but how many organisations actively promote related awareness raising or conduct practical vulnerability assessments with mock phishing tests? Relatively few.”
Organisations prefer security problems that can be tackled by deploying technology, says Furnell, but it is human interventions that are most effective against social engineering attacks.
A recent EY Global Information Security Survey found that the top area of vulnerability was “careless or unaware employees”, but Furnell says efforts to address this “appear to be continually lacking”.
This has been a long-standing concern. A decade ago, Furnell co-authored a white paper about social engineering for the European Network and Information Security Agency, but he says that many of the issues around lack of awareness it highlighted remain true today.
“Most people are not naturally attuned to the threats they face and so without support, they will continue to represent a directly exploitable area of vulnerability,” he adds.
While the ability of fraudsters to trick people through social engineering appears not to have changed, even if attack methods have evolved, today’s increasingly connected world could be creating new vulnerabilities.
As Clark says, we are less used to seeing social engineering attacks that directly affect the likes of manufacturing or engineering-oriented services, but this is likely to change.
There was, of course, the 2010 attack that affected the Iranian nuclear facility. Another example was a 2014 incident at a German steel mill. This was a “spear phishing” attack: an email that appeared to come from an account familiar to the plant in fact contained malware. The malware made its way from the office software network to the production management software, allowing it to take charge of control systems, affecting a blast furnace, among other equipment, and causing significant damage.
“Anyone in a doctors’ surgery or steel plant can contract malware to their local system,” says Clark.
“If you ask the question, ‘What’s the damage that can be done?,’ up to now it’s denial of service or disrupted data.
“If the recipient of an email or whatever resides in a process plant, it’s feasible the damage could be physical. We’re now seeing the advent of the cyber-physical system.”
As a result, Clark says the consequences of attacks could today be much more serious, with state-orchestrated cyber attacks likely to use these methods.
“The shift will be from the compromising of data to compromising of physical machinery and what’s around it. It raises not only security concerns, but safety concerns,” says Clark.
This perhaps emphasises that training staff to be wise to the risk of social engineering attacks is going to become ever more important.