Google Brain’s Image Manipulation Algorithm Fools Both Humans and Machines

Pranav Dar | Last Updated: 03 Mar, 2018

Overview

  • Researchers at Google Brain developed an algorithm that can fool both humans and machines
  • The algorithm changes the image only slightly, but the change is enough to trick us
  • In some instances, 10 out of 10 machines misidentified the object in the image

 

Introduction

The dangers of AI have been well documented recently. This study from Google Brain will only add to that concern.

Researchers at Google Brain have developed an algorithm that manipulates images in such a way that neither humans nor machines can correctly identify the object in the picture.

A deep convolutional neural network (CNN) was tested on a slightly manipulated picture of a cat. Incredibly, it misidentified it as a dog. See the image below for reference – the left frame is an unmodified image of a cat, and the right frame is a slight tweak of the cat’s face; enough to fool the CNN.

More importantly (and disconcertingly), humans were likewise fooled into thinking it was a dog.

It has long been easy to trick a CNN into misidentifying objects in images: introduce a slight perturbation into the picture, such as a misplaced pixel or white noise. But those attacks targeted a single image classifier at a time.

In this study, the researchers at Google Brain built a model that can fool multiple systems at once by generating “adversarial” images. How did they do it? They added features that are “human meaningful” – altering the edges of objects, playing with textures, and modifying the parts of the photo that draw attention away from the object itself.
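
To get an intuition for how adversarial perturbations work, here is a minimal sketch of the classic fast gradient sign method (FGSM) – a simpler, single-model technique related to (but not the same as) the multi-model attack in the paper. The classifier below is a toy logistic model with made-up weights, not a real CNN, and all names and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # toy classifier weights (hypothetical)
x = rng.uniform(size=16)     # toy "image" as flattened pixels in [0, 1]
y = 1.0                      # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM step: nudge every pixel by epsilon in the direction that
# increases the loss, then clip back into the valid pixel range.
eps = 0.1
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

p_adv = sigmoid(w @ x_adv)
print(f"confidence in true class before: {p:.3f}, after: {p_adv:.3f}")
```

Each pixel changes by at most 0.1, yet the model's confidence in the true class drops – the same basic idea, scaled up and optimized against an ensemble of networks, is what produces images that fool many classifiers (and, in this study, humans) at once.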

Some images managed to fool 10 out of 10 CNNs at a time!

You can read the research paper on the image manipulation here.

 

Our take on this

If an algorithm can stop humans from telling the difference between a cat and a dog, it is time to take the discussion on regulating AI a little more seriously. Experts have expressed concern that this technology could be misused – a politician enhancing his image on social media to look more appealing to the audience, advertisers exploiting biases in the human brain, and so on.

That said, this is still major progress in the AI field. On the positive side, the technique could be used to make dull images (government announcements and traffic news come to mind) a bit more engaging for the audience.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
