Post by plutronus on Nov 10, 2018 6:16:52 GMT -6
Yes, AI Can Be Tricked, And it's a Serious Problem
Adversarial examples represent potentially very dangerous flaws in artificial intelligence systems that researchers are still working to understand and overcome.
By: Chris Wiltz, Electronic Design News Magazine
November 09, 2018
By introducing imperceptible adversarial examples into an image of a pig, researchers were able to trick an image recognition system into believing it was looking at an airliner.
In 1985, famed neurologist Oliver Sacks released his book, The Man Who Mistook His Wife for a Hat. The titular case study involved a man with visual agnosia, a neurological condition that renders patients unable to recognize objects or, in this case, creates wild disassociations in objects (i.e., mistaking your wife's head for a hat). It's a tragic condition, but also one that offers neurologists deep insight into how the human brain works. By examining the areas of the brain that are damaged in cases of visual agnosia, researchers are able to determine what structures play a role in object recognition.
While artificial intelligence hasn't reached the levels of sophistication of the human brain, it is possible to approach AI research in the same way. Would you ever mistake a pig for an airliner? What about an image of a cityscape for a Minion from "Despicable Me?" Have you ever seen someone standing up and thought they were lying down? Chances are, you haven't. But machine learning algorithms can make these sorts of mistakes where a human never would.
Giving the system “brain damage”—understanding where mistakes occur in the system—allows researchers to develop more robust and accurate systems. This is the role of what are called adversarial examples—essentially, optical illusions for AI.
Plenty of humans have been fooled by optical illusions or sophisticated magic tricks, but we go about our daily lives recognizing objects and sounds pretty accurately. Yet the same recognition task that seems obvious to a human can trick AI, thanks to tiny anomalies, or perturbations, in the image. Adversarial examples are imperceptible to humans, but they can cause AI to make errors and miscalculations that no healthy human ever would. Thus, an image of a city street that looks normal to a human can contain hidden perturbations that make an AI system think it is looking at a Minion character. (Facebook AI researchers demonstrated this exact case in a 2017 study.)
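To make the idea concrete, here is a minimal sketch of one common way such a perturbation can be generated, the fast gradient sign method (FGSM), written in Python with PyTorch. The model, image, and class index are placeholder assumptions for illustration only; this is not the specific method used in the Facebook study mentioned above.

```python
# Minimal FGSM sketch: craft an "imperceptible" adversarial perturbation.
# The model, image, and class index below are illustrative assumptions;
# this is not the exact technique from the study cited above.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # any pretrained image classifier

def fgsm_perturb(image, true_label, epsilon=0.01):
    # image: (1, 3, H, W) float tensor scaled to [0, 1]
    # true_label: correct class index for the image
    # epsilon: step size small enough that the change is invisible to a human
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: the perturbed pig image looks unchanged to a person,
# but the classifier's prediction can flip to an unrelated class.
# adv = fgsm_perturb(pig_image, true_label=341)  # 341 = "hog" in ImageNet
# print(model(adv).argmax(dim=1))                # may no longer predict "hog"
```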
To read the remainder of this interesting article, see: