Researchers Fooled a Google AI Into Thinking a Rifle Was a Helicopter
Posted by msmash on 26th December 2017


An anonymous reader shares a Wired report: Algorithms, unlike humans, are susceptible to a specific kind of attack called an "adversarial example." These are specially crafted inputs, akin to optical illusions for computers, that fool a system into, say, mistaking a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms. While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed-limit sign. Adversarial examples have already been used to beat other kinds of algorithms, such as spam filters.

Those adversarial examples are also much easier to create than was previously understood, according to research released Wednesday by MIT's Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions: the team reliably fooled Google's Cloud Vision API, a machine-learning service used in the real world today. Similarly, in November another team at MIT (including many of the same researchers) published a study demonstrating how Google's InceptionV3 image classifier could be duped into thinking that a 3-D-printed turtle was a rifle. In fact, the researchers could manipulate the AI into classifying the turtle as any object they wanted.
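To make the idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one classic way to craft an adversarial image. This is a general illustration only, not the MIT team's method: their Cloud Vision attack was black-box, while FGSM assumes access to the model's gradients. The model, input, and label here are hypothetical placeholders.

```python
# Minimal FGSM sketch (white-box adversarial example), assuming PyTorch and torchvision.
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical target classifier; any differentiable image model would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the
    classifier's loss, so the prediction changes while the image looks
    essentially the same to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: one 224x224 RGB image tensor and an arbitrary class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # often differs from the prediction on x
```

The perturbation budget `epsilon` controls the trade-off: larger values flip the prediction more reliably but make the change more visible. Black-box attacks like the one against Cloud Vision estimate this direction by repeatedly querying the model instead of reading its gradients.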

Read more of this story at Slashdot.
