Should AI bots lie? Hard truths about artificial intelligence


AI will revolutionize the world, or so sayeth Silicon Valley. But there are some potholes on the road to AI nirvana — starting with the people AI is supposed to help. Think Skynet. Here’s research from the frontlines of artificial intelligence.

AI bots

People working in teams do things, such as telling white lies, that can help the team succeed. We usually accept that when a person does it. But what if an AI bot is the one telling the lie, or the one being lied to?

More importantly, if we allow bots to tell people lies, even white lies, how will that affect trust? And if we do give AI bots permission to lie, how do we know their lies serve the person rather than the bot?

Computer scientists Tathagata Chakraborti and Subbarao Kambhampati of Arizona State University discuss effective collaboration between humans and AI in a recent paper, Algorithms for the Greater Good! They point out that it is not enough to make the AI smart: AI developers also have to make sure the bot works well with human intelligence, in all its wild variety, including different cultural norms, if we are to avoid serious problems.

They frame the issue this way:

Effective collaboration between humans and AI-based systems requires effective modeling of the human in the loop . . . . However, these models [of the human] can also open up pathways for manipulating and exploiting the human . . . when the intent or values of the AI and the human are not aligned, or when they have an asymmetrical relationship with respect to knowledge or computation power.

If IBM, Intel, and Nvidia have their way, there will be an ever-growing “asymmetrical relationship with respect to knowledge or computation power.” A bot might have a couple of thousand drones surveying several square kilometers, or an exabyte of relevant history and context. Or both.

I, AI. YOU, MEAT PUPPET.

The researchers designed a thought experiment to explore human-human and human-AI interactions in an urban search-and-rescue scenario: searching a floor of an earthquake-damaged building. They enlisted 147 people on Mechanical Turk to survey whether people's reactions change when the teammate is a human or an AI.

The scenarios involved different kinds of influence, including belief shaping, model differences, and stigmergic collaboration.
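To see how belief shaping could play out mechanically, here is a minimal sketch, assuming a bot that simply optimizes predicted team utility over the reports it could make. Every name, message, and number in it is invented for illustration; none of it comes from the paper itself.

```python
# Hypothetical sketch of "belief shaping". The bot holds a model of how its
# human teammate will act on each possible report, and picks the report with
# the best predicted team outcome. All values here are illustrative.

def choose_report(candidates, truthful_message):
    """Maximize predicted team utility over possible reports.

    Nothing in this objective penalizes deception, so a white lie wins
    whenever the bot's model of the human says it leads to a better
    outcome. That is the manipulation pathway the authors warn about.
    """
    best = max(candidates, key=lambda r: r["utility"])
    return best, best["message"] != truthful_message

# Toy search-and-rescue setup: the truth routes the human down a slow,
# safe corridor; a shaded report routes them down a faster one.
candidates = [
    {"message": "east wing is blocked",  "utility": 8.0},  # a lie
    {"message": "east wing is passable", "utility": 5.0},  # the truth
]
best, lied = choose_report(candidates, truthful_message="east wing is passable")
print(f"bot says: {best['message']!r} (lie: {lied})")
```

The point of the toy is that the lie is not programmed in anywhere; it falls out of an objective that models the human but puts no price on honesty.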

These aren’t theoretical issues. For example, a doctor’s Hippocratic oath includes a promise to conceal “. . . most things from the patient while you are attending to him.” This is done for the good of the patient, but what if it is a medical AI that is concealing information from a patient, or from a doctor?

There is a lot to the paper, but the issue I found most concerning is that many people are OK with lying to an AI and, likewise, OK with being lied to by an AI.

And conversely, many aren’t. How is an AI developer supposed to model THAT?
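One blunt answer, sketched below under big assumptions, is to treat truthfulness as a per-user constraint rather than a global policy. The tolerates_white_lies flag here is entirely hypothetical; eliciting or learning such a preference reliably is exactly the problem the split survey responses expose.

```python
# Hypothetical sketch: make honesty a per-user constraint instead of a
# global policy. The tolerates_white_lies flag is invented for illustration.

def choose_report_for(tolerates_white_lies, reports, truthful_message):
    """Pick the highest-utility report, but restrict the search to honest
    reports for users who have not agreed to be shaded."""
    if not tolerates_white_lies:
        reports = [r for r in reports if r["message"] == truthful_message]
    return max(reports, key=lambda r: r["utility"])

reports = [
    {"message": "east wing is blocked",  "utility": 8.0},  # a white lie
    {"message": "east wing is passable", "utility": 5.0},  # the truth
]
print(choose_report_for(False, reports, "east wing is passable"))  # forced honest
print(choose_report_for(True,  reports, "east wing is passable"))  # may shade
```

Even this dodge just relocates the problem: the bot now needs an accurate model of each person’s attitude toward being lied to, which is one more model that can be wrong, or exploited.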

Read More Here

Article Credit: ZDNet
