ERP News

What Retail Should Learn from AI Gone Bad


I have a new hobby: collecting stories about AI going off the rails. Digging into these stories is both amusing and somewhat horrifying, but I feel that when AI goes wrong, it tells us more about ourselves than it does about the AI that went to dark places.

Some of my favorite examples of AI going wrong:

  • The DARPA cannibals – Mike Sellers, a researcher for DARPA, shares a story (scroll down to find it) about an Adam and Eve simulation in which the two ended up eating a new character, Stan, when he was introduced. What they learned:
    • You have to have enough detail to make the simulation at least reasonably representative. In the simulation, Adam and Eve ate all the food that was available and were still hungry.
    • You have to lay down some basic rules, like “people are not food” (unless your simulation is trying to explore a very dark side of humanity, in which case AI learns to eat people pretty quickly).
  • The Breeding Cannibals – where a programming flaw that made survival cost energy but reproduction cost nothing led to a species that expended energy only on mating, reproducing, and… eating its children.
    • Reproducing is not an energy-free activity, so that needed to be fixed. And…
    • Don’t eat people.
  • The People Zoo – a voice-interactive robot that, when asked whether it thought robots would take over the world, replied, “Don’t worry. I’ll keep you warm and safe in my people zoo.” (Number 5 in a list of 9 that are all pretty good)
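The energy-accounting flaw behind the Breeding Cannibals example is easy to see in miniature. The sketch below is a hypothetical toy model, not the original simulation code: when reproduction is free (`reproduce_cost=0`), an agent's best strategy is to reproduce every tick until it starves, whereas charging an energy cost for reproduction immediately limits how many offspring it can produce.

```python
# Toy illustration (hypothetical, not the original research code) of the
# "survival costs energy, reproduction doesn't" bug.

class Agent:
    def __init__(self, energy=10, reproduce_cost=0):
        self.energy = energy
        self.reproduce_cost = reproduce_cost  # 0 reproduces the bug

    def live_one_tick(self):
        self.energy -= 1  # staying alive costs energy every tick
        return self.energy > 0

    def reproduce(self):
        # With reproduce_cost=0, this always succeeds while the agent lives.
        if self.energy > self.reproduce_cost:
            self.energy -= self.reproduce_cost
            return Agent(energy=5, reproduce_cost=self.reproduce_cost)
        return None

def count_offspring(agent, ticks):
    offspring = 0
    for _ in range(ticks):
        if not agent.live_one_tick():
            break  # the agent has starved
        if agent.reproduce() is not None:
            offspring += 1
    return offspring

# Buggy rules: reproduction is free, so the agent breeds every single tick.
buggy = count_offspring(Agent(energy=10, reproduce_cost=0), ticks=9)   # 9
# Fixed rules: reproduction drains energy, so breeding is self-limiting.
fixed = count_offspring(Agent(energy=10, reproduce_cost=3), ticks=9)   # 2
```

The fix the researchers describe amounts to the second configuration: once reproduction draws from the same energy budget as survival, "breed constantly and eat the children" stops being the winning strategy.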

There are many more along those lines. But don’t imagine that these “off the rails” AIs are limited to research projects, or to the early days of AI. Some more recent, and for businesses, more ominous examples:

Then, of course, there are the generative AIs that create creepy things. My favorite is InspiroBot, which tries to design inspirational posters by pairing an image with a line of text, and somehow always ends up sounding vaguely menacing. And don’t forget @EndlessJeopardy on Twitter, a bot that generates a Jeopardy-style trivia answer every hour and awards points to the submitted questions that receive the most votes from Twitter users. Sometimes the answers have a bit of a bot-generated feel, but that usually just inspires even more creative questions from followers.


Article Credit: Forbes
