
Artificial intelligence doesn’t have to be evil. We just have to teach it to be good.


Sensational reports surfaced earlier this year about Google’s DeepMind AI growing “highly aggressive” when left to its own devices. Researchers at Google had AI “agents” face off in 40 million rounds of a fruit-gathering computer game. When apples grew scarce, the agents started attacking each other, killing off the competition and echoing humanity’s worst impulses ... or so the critics said.

Nor is it hard to find other examples of AI “learning” the wrong types of behavior, like Microsoft’s infamous Tay bot. Deployed on Twitter in early 2016, Tay was supposed to “learn” from user interactions. (“The more you talk, the smarter Tay gets,” boasted her profile.) But she was beset by racist, anti-Semitic and misogynistic commentary almost from the start. Learning from her environment, Tay began spitting out a string of inflammatory responses, including, infamously, “bush did 9/11, and Hitler would have done a better job than the monkey we have now.” Microsoft developers pulled the plug a mere 16 hours after Tay’s release.

This is a simple example. But herein lies the challenge. Yes, billions of people contribute their thoughts, feelings and experiences to social media every single day. But training an AI platform on social media data, with the intent to reproduce a “human” experience, is fraught with risk. You could liken it to raising a baby on a steady diet of Fox News or CNN, with no input from its parents or social institutions. In either case, you might be breeding a monster.

The reality is that while social data may well reflect the digital footprint we all leave, it’s neither true to life nor necessarily always pretty. Some social posts reflect an aspirational self, perfected beyond human reach; others, veiled by anonymity, show an ugliness rarely seen “in real life.”

Ultimately, social data on its own represents neither who we actually are nor who we should be. Deeper still, as useful as the social graph can be in providing a training set for AI, what’s missing is a sense of ethics or a moral framework to evaluate all this data. From the spectrum of human experience shared on Twitter, Facebook and other networks, which behaviors should be modeled and which should be avoided? Which actions are right and which are wrong? What’s good ... and what’s evil?

Article Credit: Recode
