Fooling Deep Neural Networks

A video summary of the paper: Nguyen, Anh, Jason Yosinski, and Jeff Clune. “Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.” arXiv preprint arXiv:1412.1897 (2014). The paper is available on arXiv.

From MIT Technology Review: “A technique called deep learning has enabled Google and other companies to make breakthroughs in getting computers to understand the content of photos. Now researchers at Cornell University and the University of Wyoming have shown how to make images that fool such software into seeing things that aren’t there. The researchers can create images that appear to a human as scrambled nonsense or simple geometric patterns, but are identified by the software as an everyday object such as a school bus. The trick images offer new insight into the differences between how real brains and the simple simulated neurons used in deep learning process images” (December 24, 2014).
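To make the idea concrete, the paper generates such “fooling” images in two broad ways: evolving images with an evolutionary algorithm, and running gradient ascent on the score the network assigns to a chosen class. The sketch below illustrates only the gradient-ascent idea, assuming a pretrained torchvision classifier; the specific model, class index, learning rate, and step count are illustrative assumptions, not the authors’ exact setup.

```python
import torch
import torchvision.models as models

# Load a pretrained ImageNet classifier. The original paper used networks such
# as AlexNet/LeNet; ResNet-18 here is an arbitrary stand-in for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Target class: "school bus" (assumed to be index 779 in the standard
# ImageNet-1k labels; verify against your own label file).
target_class = 779

# Start from a blank image and let gradient ascent sculpt it toward the class.
image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]  # maximize the target-class logit
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"Predicted confidence for the target class: {confidence:.3f}")
```

The resulting image typically looks like noise or abstract patterns to a human, yet the network can assign it high confidence for the chosen class, which is the phenomenon the paper studies. A more faithful reproduction would also keep pixel values in a valid range and apply the model’s input normalization.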

Connections: Learning everything about anything. Also: Google researchers have developed software that can match complex images with simple sentences describing whole scenes rather than just objects, e.g., “a group of young people playing a game of frisbee.”