A number of technology leaders have warned that those designing artificial intelligence (AI) systems need to do more to mitigate possible misuses of their technology.
The Malicious Use of Artificial Intelligence report highlights the possible threats of AI in the wrong hands, including drones turned into missiles, fake videos manipulating public opinion and automated hacking.
The report concentrates on AI capabilities that are available now or likely to become available within five years. It calls for policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI, and to learn best practices from disciplines with a longer history of handling dual-use risks, such as computer security.
Shahar Avin, from Cambridge University's Centre for the Study of Existential Risk, outlined some of the scenarios in which AI could turn 'rogue': a malicious individual buying a drone and training it with facial-recognition software to target a particular person; automated bots and lifelike 'fake' videos deployed for political manipulation; and hackers using speech synthesis to impersonate their targets.
Miles Brundage, research fellow at Oxford University's Future of Humanity Institute, said: "AI will alter the landscape of risk for citizens, organisations and states - whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression - the full range of impacts on security is vast.
"It is often the case that AI systems don't merely reach human levels of performance but significantly surpass them. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour."