There’s a glaring mistake in the way AI looks at the world

As humans, we’re pretty good at knowing what we’re looking at. We might be fleshy and weak compared to computers, but we string context and previous experience together effectively to understand what we see.

Artificial intelligence today doesn’t have that capability. The brain-inspired artificial neural networks that computer scientists have built for companies like Facebook and Google simply learn to recognize complex patterns in images. If a network spots the right pattern, say the shape of a cat coupled with the texture of a cat’s fur, then as far as the algorithm is concerned, that’s a cat.

But researchers have found that the patterns AI looks for in images can be reverse-engineered and exploited using what they call an “adversarial example.” By changing an image of a school bus by just 3%, one Google team was able to fool AI into seeing an ostrich. The implication is that any automated computer-vision system, whether facial recognition, a self-driving car, or airport security, can be tricked into “seeing” something that isn’t actually there.
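For readers curious how such an attack works in practice, below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft adversarial examples. The article does not say which technique the Google team used, so treat this as an illustrative assumption: the pretrained ResNet model, the epsilon budget (standing in for the roughly “3%” change), and the input shape are all placeholders.

```python
# Illustrative sketch only: FGSM-style adversarial perturbation.
# The model, input tensor, and epsilon value are assumptions, not details
# taken from the article.
import torch
import torch.nn.functional as F
from torchvision import models

# Any pretrained image classifier works as the victim model.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Nudge `image` (shape 1xCxHxW, values in [0, 1]) so the model misclassifies it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move every pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With epsilon at 0.03, no pixel changes by more than about 3% of its range, yet the classifier’s prediction can flip entirely, which is the kind of imperceptible-but-damaging change the researchers describe.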

Article Credit: QUARTZ

 
