Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity — or perhaps exterminating us.
These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft co-founder Bill Gates and Facebook’s Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.
As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I’ve seen how beneficial it can be. I’ve developed AI software that lets robots working in teams make individual decisions, as part of collective efforts to explore and solve problems.
Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.
How AI is regulated now
While the term “artificial intelligence” may conjure fantastical images of humanlike robots, most people have encountered AI before.