A panel of top cyber security researchers discusses the biggest security threats of the coming years, from autonomous weapons and vehicles to IoT devices
Three leading cyber security researchers sat down for a panel at the IP Expo in London yesterday to air their catastrophic predictions about the biggest infosec threats in the coming years.
Nightmare scenarios regarding autonomous killer robots, terrorists taking control of fleets of autonomous vehicles, AI writing its own malware, and the threat posed by IoT devices all quickly followed.
Rik Ferguson, global VP of security research at Trend Micro, started off by stating that an area “ripe for innovation in the security and criminal landscape” is artificial intelligence and machine learning.
“One thing I find scary is the fact that we have a petition to the UN from 120 leading academics to outlaw autonomous weaponry,” he said. “We are already in Skynet, that is the world we live in. So I have no doubt attackers will start using AI to build autonomous attack machinery online, as well as physical autonomous weaponry.”
Ferguson pointed out that although there is much good work that can be done on open machine learning platforms from the likes of Amazon Web Services and Google’s TensorFlow, these could also be exploited for criminal misuse.
James Lyne, global security advisor at Sophos, agreed, pointing to the specific threat of “metamorphic malicious code with a broad set of instructions”.
“This will try to leverage the infrastructure to achieve its goal to reach a scary level of collateral damage to what we have today,” Lyne said. “We are on a path to do that in the current world of legitimate tech for legitimate reasons, while still fighting over freedom and encryption. So we get to that technology level, and the opportunities it creates for attackers, long before we have got to protections.”
Mikko Hypponen, chief research officer at F-Secure, went on to discuss the implications this will have for autonomous weapons – killer robots to you and me.
“[It’s] clear that these autonomous robots will become a reality when you think it through,” Hypponen said. “Look at drones, they may not be autonomous, they are probably operated by someone in Nevada to shoot someone in Syria, but the obvious weakness is the link from drone to human, as that link can be disrupted or cut or spied on.
“So removing that weakness is simple: make the drone smart enough to work without the human. It’s going to happen and it is scary as hell.”
Ferguson didn’t come armed only with warnings, however, urging the security and IT industry to “see that coming” and “make sure we build our own toolsets to harness that capability of AI and ML and set it loose as an ethical, autonomous hacker. This will operate at speed and scale unlike we have seen before, so we need to see this coming.”
Ferguson said that it is the responsibility of the security industry to deploy these AI and ML tools as “a new iteration of Chaos Monkey” – essentially as a way of staying ahead of the criminals.
Moving on to the threat IoT devices pose, Ferguson started out by stating that the manufacturers and the people “who are good at security” aren’t in the same companies.