Adversarial images can hack human brains. Can you believe it?


Researchers at Google Brain have developed a technique that tricks neural networks into misidentifying images, and this hack works on human brains as well.

These images are called adversarial images: images specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Each image is tweaked slightly by an algorithm so that a type of computer model called a convolutional neural network (CNN) can no longer tell what it really shows.
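The article doesn't spell out the tweaking algorithm, but the best-known recipe is the fast gradient sign method (FGSM): nudge every pixel a tiny amount in whichever direction most increases the classifier's error. Here is a minimal sketch, assuming PyTorch and a pretrained torchvision ResNet-18 (both are illustrative choices, not details from the study):

```python
# Minimal sketch of the fast gradient sign method (FGSM), one common way to
# build an adversarial image. PyTorch, torchvision, and the pretrained
# ResNet-18 are illustrative assumptions, not details from the study.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Return image + epsilon * sign(gradient of the loss w.r.t. the image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Usage: a random tensor stands in for a real, preprocessed photo.
x = torch.rand(1, 3, 224, 224)        # hypothetical input image
y = model(x).argmax(dim=1)            # the model's own prediction on the clean image
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1).item(), "->", model(x_adv).argmax(1).item())
```

The epsilon value controls how visible the tweak is; keep it small and the perturbed photo looks essentially identical to a person.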

The Google Brain researchers set out to figure out whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think they're looking at something they aren't.

[Image: an unmodified cat photo (left) and its adversarial counterpart (right)]

In the image above, the picture on the left is a cat. The picture on the right looks like it could be a cat or a similar-looking dog. In fact, the only difference between the two is that the picture on the right has been tweaked by the algorithm: it is an adversarial image.

AI recognizes images differently than the human brain does. Where the human brain identifies objects by their properties and attributes, a CNN matches pixels and patterns. That is why CNNs can be fooled so easily by these techniques.
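To make that contrast concrete, here is a minimal sketch of what a CNN actually does with a photo: it maps raw pixel values to a confidence score for every class it knows, with no explicit notion of attributes like fur, whiskers, or ears. The pretrained ResNet-18 and the random input tensor are illustrative assumptions, not part of the study.

```python
# Sketch: a CNN classifier turns raw pixels into a confidence score per class.
# The pretrained ResNet-18 and the random "photo" are illustrative assumptions.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)          # hypothetical photo tensor in [0, 1]
with torch.no_grad():
    probs = model(x).softmax(dim=1)     # confidences over 1000 ImageNet classes

top_prob, top_class = probs.max(dim=1)
print(f"predicted class {top_class.item()} with {top_prob.item():.1%} confidence")
```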

[Image: a panda photo (left), the exaggerated noise pattern (middle), and the perturbed image classified as a gibbon (right)]

In the image above, a CNN-based classifier is about 60 percent sure that the picture on the left is a panda. But if you slightly change ("perturb") the image by adding what looks to us like a bunch of random noise (highly exaggerated in the middle image), that same classifier becomes 99.3 percent sure that it's looking at a gibbon instead. The reason this kind of attack can succeed with a nearly imperceptible change to the image is that it's targeted at one specific computer model, and it likely won't fool other models that were trained differently.
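Here is what "targeted at one specific computer model" looks like in practice: the perturbation is computed from one network's gradients, so handing the result to a second, differently trained network is a quick transfer check. A sketch under the same illustrative assumptions as above, with torchvision models standing in for whatever the researchers actually used:

```python
# Sketch: an FGSM perturbation aimed at one model (ResNet-18) often fails to
# fool a second, differently trained model (VGG-16). The model choices and the
# random "photo" are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm(model, image, label, epsilon=0.01):
    """Single gradient-sign step computed against one specific model."""
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

model_a = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
model_b = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)     # hypothetical photo
y = model_a(x).argmax(dim=1)       # attack away from model A's own prediction
x_adv = fgsm(model_a, x, y)        # perturbation computed only against model A

for name, m in [("model A (attacked)", model_a), ("model B (untouched)", model_b)]:
    print(name, ":", m(x).argmax(1).item(), "->", m(x_adv).argmax(1).item())
# Model A's prediction usually changes; model B's often does not.
```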

But these examples tend to involve a single image classifier, each trained on its own dataset. In the new study, the Google Brain researchers sought to develop an algorithm that could produce adversarial images capable of fooling multiple systems at once. Furthermore, they wanted to know whether an adversarial image that tricks an entire fleet of image classifiers could also trick humans. The answer, it now appears, is yes.
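The article doesn't spell out the researchers' construction, but a common way to make a perturbation transfer is to attack an ensemble: take the gradient-sign step against the average loss of several differently trained classifiers at once. A minimal sketch, with a handful of torchvision models standing in for the "fleet" (an illustrative assumption):

```python
# Sketch of an ensemble attack: one gradient-sign step against the mean loss of
# several differently trained classifiers, so a single perturbation degrades all
# of them at once. The specific torchvision models are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

ensemble = [
    models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval(),
    models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval(),
    models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval(),
]

def ensemble_fgsm(image, label, epsilon=0.02):
    """One FGSM step against the average loss over the whole ensemble."""
    image = image.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(image), label) for m in ensemble) / len(ensemble)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)      # hypothetical photo
y = ensemble[0](x).argmax(dim=1)    # attack away from the first model's prediction
x_adv = ensemble_fgsm(x, y)

for m in ensemble:
    print(m(x).argmax(1).item(), "->", m(x_adv).argmax(1).item())
```

Whether a perturbation built this way also nudges human perception is exactly what the experiment described next was designed to test.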

[Image: a dog photo perturbed so that it reads as a cat]

To test this, the researchers built an adversarial image generator capable of creating images that fooled 100 percent of the CNNs. To measure its effectiveness on humans, participants were shown an unmodified photo, an adversarial photo that fooled 100 percent of the CNNs, and a photo with the perturbation layer flipped (the control). They had very little time to visually process each image, only 60 to 70 milliseconds, after which they were asked to identify the object in the photo.

In one example, a photo of a dog was made to look like a cat: an adversarial image that was identified as a cat 100 percent of the time. Overall, humans found it harder to identify objects in adversarial images than in unmodified photos, which suggests that these photo hacks may transfer well from machines to humans.

“For instance, an ensemble of deep models might be trained on human ratings of face trustworthiness,” Evan Ackerman wrote on IEEE Spectrum. “It might then be possible to generate adversarial perturbations which enhance or reduce human impressions of trustworthiness, and those perturbed images might be used in news reports or political advertising.”

These techniques could also be used in positive ways, of course, and the researchers do suggest a few, like using image perturbations to “improve saliency, or attentiveness, when performing tasks like air traffic control or examination of radiology images, which are potentially tedious, but where the consequences of inattention are dire.” Also, “user interface designers could use image perturbations to create more naturally intuitive designs,” Ackerman wrote.

Image and article source: IEEE Spectrum