
Machine Vision’s Achilles’ Heel Revealed by Google Brain Researchers

Apr 18, 2016

One of the most spectacular advances in modern science has been the rise of machine vision. In just a few years, a new generation of machine learning techniques has changed the way computers see.

Machines now outperform humans in face recognition and object recognition, and they are in the process of revolutionizing numerous vision-based tasks such as driving and security monitoring. Machine vision is now superhuman.

But a problem is emerging. Machine vision researchers have begun to notice some worrying shortcomings of their new charges. It turns out machine vision algorithms have an Achilles’ heel that allows them to be tricked by images modified in ways that would be trivial for a human to spot.

These modified pictures are called adversarial images, and they are a significant threat. “An adversarial example for the face recognition domain might consist of very subtle markings applied to a person’s face, so that a human observer would recognize their identity correctly, but a machine learning system would recognize them as being a different person,” say Alexey Kurakin and Samy Bengio at Google Brain and Ian Goodfellow from OpenAI, a nonprofit AI research company.



Because machine vision systems are so new, little is known about adversarial images. Nobody understands how best to create them, how they fool machine vision systems, or how to protect against this kind of attack.

Today, that starts to change thanks to the work of Kurakin and co, who have begun to study adversarial images systematically for the first time. Their work shows just how vulnerable machine vision systems are to this kind of attack.

The team start with a standard database for machine vision research, known as ImageNet. This is a database of images classified according to what they show. A standard test is to train a machine vision algorithm on part of this database and then test how well it classifies another part of the database.

Performance in these tests is measured by how often the correct classification appears among the algorithm's top 5 answers, or as its single top answer (the so-called top 5 accuracy and top 1 accuracy), or conversely by how often the correct answer is missing from the top 5 or the top 1 (the top 5 and top 1 error rates).
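To make that scoring concrete, here is a minimal sketch of how top-k accuracy and error rates can be computed from a classifier's scores. It is written in Python with NumPy; the function names and array layout are illustrative assumptions, not the researchers' own code.

```python
import numpy as np

def top_k_accuracy(scores, true_labels, k=5):
    # scores: array of shape (n_images, n_classes) holding the network's
    # confidence for each class; true_labels: length-n_images array
    top_k = np.argsort(scores, axis=1)[:, -k:]   # the k best guesses per image
    hits = [label in row for row, label in zip(top_k, true_labels)]
    return float(np.mean(hits))

def top_k_error(scores, true_labels, k=5):
    # the error rate is simply the complement of the accuracy
    return 1.0 - top_k_accuracy(scores, true_labels, k)
```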

One of the best machine vision systems is Google’s Inception v3 algorithm, which has a top 5 error rate of 3.46 percent. Humans doing the same test have a top 5 error rate of about 5 percent, so Inception v3 really does have superhuman abilities.

Kurakin and co created a database of adversarial images by modifying 50,000 pictures from ImageNet in three different ways. Their methods exploit the idea that a neural network matches an image with a particular classification by minimizing a quantity called the cross entropy, which measures how poorly the network's predicted classification fits the true one and is therefore a measure of how hard the matching task is.
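As a rough illustration (a toy sketch, not the researchers' code), the cross entropy for a single image can be computed from the network's predicted class probabilities:

```python
import numpy as np

def cross_entropy(predicted_probs, true_label):
    # low when the network assigns high probability to the correct class,
    # high when the match is uncertain or wrong
    return -np.log(predicted_probs[true_label] + 1e-12)

confident = np.array([0.01, 0.97, 0.02])   # true class is index 1
uncertain = np.array([0.40, 0.35, 0.25])
print(cross_entropy(confident, 1))  # ~0.03: an easy match
print(cross_entropy(uncertain, 1))  # ~1.05: a much harder match
```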

Their first algorithm makes a small change to an image in a way that attempts to maximize this cross entropy. Their second algorithm simply iterates this process to further alter the image.
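Here is a minimal sketch of those two methods, written in PyTorch. The original work is not tied to this framework, and the model, the [0, 1] pixel range, and the step sizes shown here are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def fast_gradient_sign(model, image, label, eps=8/255):
    # one-step attack: nudge every pixel by +/- eps in the direction
    # that increases the cross-entropy loss for the true label
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

def basic_iterative(model, image, label, eps=8/255, alpha=1/255, steps=10):
    # iterated version: many small steps, each clipped so the result
    # never strays more than eps from the original image
    adv = image.clone()
    for _ in range(steps):
        adv = fast_gradient_sign(model, adv, label, eps=alpha)
        adv = torch.max(torch.min(adv, image + eps), image - eps)
    return adv
```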

These algorithms both change the image in a way that makes it harder to classify correctly. “These methods can result in uninteresting misclassifications, such as mistaking one breed of sled dog for another breed of sled dog,” they say.

Their final algorithm takes a much cleverer approach. It modifies an image in a way that directs the machine vision system into misclassifying it in a specific way, preferably as a class that is least like the true one. “The least-likely class is usually highly dissimilar from the true class, so this attack method results in more interesting mistakes, such as mistaking a dog for an airplane,” say Kurakin and co.
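A sketch of that targeted, least-likely-class variant, again in PyTorch and under the same assumptions about the model and pixel range as above:

```python
import torch
import torch.nn.functional as F

def least_likely_class_attack(model, image, eps=8/255, alpha=1/255, steps=10):
    # pick the class the network considers least probable for the clean
    # image, then iteratively push the image toward that class
    with torch.no_grad():
        target = model(image).argmin(dim=1)   # the least-likely class
    adv = image.clone()
    for _ in range(steps):
        adv = adv.clone().requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        loss.backward()
        # step *down* the loss for the target class, i.e. move toward it
        adv = (adv - alpha * adv.grad.sign()).detach()
        adv = torch.max(torch.min(adv, image + eps), image - eps).clamp(0, 1)
    return adv
```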

They then test how well Google’s Inception v3 algorithm can classify the 50,000 adversarial images.

The two simple algorithms significantly reduce the top 5 and top 1 accuracy. But their most powerful algorithm—the least-likely class method—rapidly reduces the accuracy to zero for all 50,000 images. (The team do not say how successful the algorithm is at directing misclassifications.)

That suggests adversarial images are a significant threat, but there is a potential weakness in this approach: all these adversarial images are fed directly into the machine vision system.

But in the real world, an image will always be modified by the camera system that records it. And an adversarial image algorithm would be useless if this process neutralized its effect. So an important question is how robust these algorithms are to the transformations that take place in the real world.

To test this, Kurakin and co print out all the adversarial images along with the originals and photograph them by hand with a Nexus 5 smartphone. They then feed these transformed adversarial images into the machine vision system.
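The outcome of that real-world test can be summarized by a number like the one sketched below. This is a hypothetical helper, not the paper's own metric, and it assumes the photographed images have already been cropped and batched into tensors with their original labels.

```python
import torch

def fraction_still_fooled(model, photographed_advs, true_labels):
    # how many adversarial images survive the print-and-photograph loop,
    # i.e. are still misclassified after a real camera has re-recorded them
    with torch.no_grad():
        preds = model(photographed_advs).argmax(dim=1)
    return (preds != true_labels).float().mean().item()
```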

Kurakin and co say that the least-likely class method is the most vulnerable to these kinds of transformations but that the others bear up reasonably well. In other words, adversarial image algorithms really are a threat in the real world. “A significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through the camera,” say the team.

That’s interesting work that throws some important light on machine vision’s Achilles’ heel. And there’s plenty of work ahead. Kurakin and co want to develop adversarial images for other kinds of vision systems and make them even more effective.

All this will raise some eyebrows in the computer security community. Machine vision systems are now better than humans at recognizing faces, so it’s natural to expect them to be used for everything from unlocking smartphones and front doors to passport control and bank account biometrics. But Kurakin and co raise the prospect of fooling these systems with ease.

In the last couple of years, we’ve learned a lot about how good machine vision systems can be. Now we’re just finding out how easily they can be fooled.
