AI is now so complex its creators can’t trust why it makes decisions


Artificial intelligence is seeping into every nook and cranny of modern life. AI might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.

But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: the programmers who built it don’t know why the AI makes one decision over another.

Modern artificial intelligence is still new. Big tech companies have only ramped up investment and research in the last five years, after a decades-old theory was finally shown to work in 2012. Inspired by the human brain, an artificial neural network relies on layers of thousands to millions of tiny connections between “neurons,” little clusters of mathematical computation, much like the connections between neurons in the brain. But that software architecture comes with a trade-off: because the changes rippling through those millions of connections are so complex and minute, researchers can’t determine exactly what is happening. They just get an output that works.
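
To make that trade-off concrete, here is a minimal sketch in Python (using NumPy) of the kind of layered computation described above. The layer sizes, random weights, and function names are illustrative assumptions for this article, not the architecture of any system it mentions:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity applied after each layer of connections.
    return np.maximum(0.0, x)

# A few layers of "neurons"; real networks use far larger sizes and many more layers.
layer_sizes = [8, 16, 16, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through every layer of weighted connections and return the result.
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return x

example_input = rng.normal(size=layer_sizes[0])
print(forward(example_input))  # an output "that works", with no explanation attached

Even in this toy version, the output is just the end product of many small multiplications and additions; scale the layers up to millions of connections and tracing why one answer beat another becomes the interpretability problem the researchers below are worried about.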

At the Neural Information Processing Systems conference in Long Beach, California, the most influential and highest-attended annual AI conference, hundreds of researchers from academia and the tech industry will meet today (Dec. 7) at a workshop to talk about the issue. The problem already exists, but researchers who spoke to Quartz say the time to act on making machines’ decisions understandable is now, before the technology becomes even more pervasive.

“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”

As these artificial neural networks start to be used in law enforcement, healthcare, scientific research, and in determining which news you see on Facebook, researchers say there’s a problem with what some have called AI’s “black box.” Previous research has shown that algorithms amplify biases in the data from which they learn and make inadvertent connections between ideas.

For example, when Google made an AI generate the idea of “dumbbells” from images it had seen, the dumbbells all had small, disembodied arms sticking out from the handles. That bias is relatively harmless; when race, gender, or sexual orientation is involved, it becomes less benign.

Article Credit: Quartz
