ERP News

Five Experts Share What Scares Them the Most About AI

Sophisticated AI could make the world a better place. It might let us fight cancer and improve healthcare around the world, or simply free us from the menial tasks that dominate our lives.

That was the primary topic of conversation last month when engineers, investors, researchers, and policymakers got together at The Joint Multi-Conference on Human-Level Artificial Intelligence.

But there was an undercurrent of fear that ran through some of the talks, too. Some people are anxious about losing their jobs to a robot or line of code; others fear a robot uprising. Where’s the line between fearmongering and legitimate concern?

In an effort to separate the two, Futurism asked five AI experts at the conference about what they fear most about a future with advanced artificial intelligence. Their responses, below, have been lightly edited.

Hopefully, with their concerns in mind, we’ll be able to steer society in a better direction — one in which we use AI for all the good stuff, like fighting global epidemics or granting more people an education, and less of the bad stuff.

Q: When you think of what we can do — and what we will be able to do — with AI, what do you find the most unsettling?

KENNETH STANLEY, PROFESSOR AT UNIVERSITY OF CENTRAL FLORIDA, SENIOR ENGINEERING MANAGER AND STAFF SCIENTIST AT UBER AI LABS

I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.

On how to develop safe AI:

All technology can be used for bad, and I think AI is just another example of that. Humans have always struggled with not letting new technologies be used for nefarious purposes. I believe we can do this: we can put the right checks and balances in place to be safer.

I don’t think I know what exactly we should do about it, but I can caution us to take [our response to the impacts of AI] very carefully and gradually and to learn as we go.

IRAKLI BERIDZE, HEAD OF THE CENTRE FOR ARTIFICIAL INTELLIGENCE AND ROBOTICS AT UNICRI, UNITED NATIONS

I think the most dangerous thing about AI is its pace of development. It depends on how quickly it develops and how quickly we are able to adapt to it. If we lose that balance, we might get into trouble.

Article Credit: Futurism
