News

Hackers could exploit AI to mount attacks, experts warn

Malicious users will soon exploit the rapid advances in artificial intelligence to mount automated hacking attacks, a new report warns. 

The report claims that the misuse of AI could lead to driverless car crashes or see commercial drones converted into targeted weapons.

The report, published on Wednesday by 25 technical and public policy researchers from the universities of Cambridge, Yale and Oxford along with privacy and military experts, sounds the alarm over the potential misuse of AI by criminals and lone-wolf attackers.

The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.

“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute, according to a Reuters report. “But there was a gap in the literature around the issue of malicious use.”

The 98-page paper cautions that AI may lower the cost of attacks by completing tasks that would otherwise require human labour and expertise. New attacks may arise that would be impractical for humans to develop alone, or that exploit the vulnerabilities of AI systems themselves.

It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.

The researchers detail the power of AI to generate synthetic images, text and audio that impersonate others online in order to sway public opinion, noting the threat of authoritarian regimes deploying such technology.

The report makes a series of recommendations including regulating AI as a dual-use military/commercial technology.

It also raises the question of whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and respond to the potential dangers those developments might pose.
