Chinenye Anuforo

Artificial intelligence (AI) is making a dramatic entrance into almost every facet of society, in both predicted and unforeseen ways, causing excitement and trepidation alike.

As automation becomes increasingly sophisticated, there is no question that AI is disrupting people’s day-to-day jobs. As a result, the debate has largely focused on whether AI will put people out of work or instead shift work to more productive tasks, as automation takes the grunt work off everybody’s plate.

A report recently written by artificial intelligence experts from industry and academia has a clear message: Every AI advance by the good guys is an advance for the bad guys, too.

The paper, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” calls this the “dual-use” attribute of AI, meaning the technology’s ability to make thousands of complex decisions every second could be used either to help or to harm people, depending on who designs the system.

The experts considered malicious uses of AI that either currently exist or could be developed over the next five years, and broke them into three groups: digital, physical and political.

Here is a selected list of the potential harms:

Digital

• Automated phishing, or creating fake emails, websites, and links to steal information.

• Faster hacking, through the automated discovery of vulnerabilities in software.

• Fooling AI systems, by taking advantage of flaws in how AI sees the world.

Physical

• Automating terrorism, by using commercial drones or autonomous vehicles as weapons.

• Robot swarms, enabled by many autonomous robots working towards the same goal.

• Remote attacks, since autonomous robots would not need to be controlled within any set distance.

Political

• Propaganda, through easily generated fake images and video.

• Automatic dissent removal, by automatically finding and removing text or images.

• Personalised persuasion, taking advantage of publicly available information to target someone’s opinions.

The report stated: “AI will alter the landscape of risk for citizens, organisations and states, whether it is criminals training machines to hack or ‘phish’ at human levels of performance, or privacy-eliminating surveillance, profiling and repression.”

It warned that systems based on AI often “significantly surpass” human performance.

“It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour,” said the report.

The experts noted that they expect “novel cyber attacks” using tools such as automated hacking, speech synthesis, and targeted emails based on personal information found on social media.

“Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom,” they said.

The rise of autonomous weapons systems in conflicts risks “the loss of meaningful human control,” while detailed political analytics, targeted propaganda and fake videos “present powerful tools for manipulating public opinion on previously unimaginable scales.”

“The ability to aggregate, analyse and act on citizens’ information at scale using AI could enable new levels of surveillance and invasion of privacy, and threaten to radically shift power between individuals, corporations and states,” the report said.