Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little about why certain models work and others don’t.
Over on the Google Research blog, we take a look at some simple techniques for peeking inside these networks, yielding a qualitative sense of the level of abstraction that particular layers of a neural network have achieved in their understanding of images. This helps us visualize how neural networks carry out difficult classification tasks, lets us improve network architecture, and lets us check what the network has learned during training.
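One of the simplest of these "peeking" techniques is to turn the network around: instead of feeding an image forward to get a label, hold the weights fixed and adjust the *input* by gradient ascent so that it maximally excites a chosen unit. The optimized input is then a crude picture of what that unit responds to. The sketch below is only an illustration of that idea on a toy one-layer network with random weights (the layer size, the `tanh` nonlinearity, and the hand-written gradient are all stand-ins, not the actual networks or code behind the post):

```python
import numpy as np

# Toy stand-in for a trained network: one fixed linear layer + tanh.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # 16-dim "image" -> 8 units

def activation(x, unit):
    """Activation of one unit for input x."""
    return np.tanh(W @ x)[unit]

def grad_wrt_input(x, unit):
    """Gradient of that unit's activation with respect to the input."""
    pre = W @ x
    g = np.zeros_like(pre)
    g[unit] = 1.0 - np.tanh(pre[unit]) ** 2  # d tanh / d pre-activation
    return W.T @ g                           # chain rule back to the input

x = rng.normal(size=16) * 0.01  # start from a near-blank "image"
for _ in range(200):
    x += 0.1 * grad_wrt_input(x, unit=3)  # ascend the activation gradient

# x now strongly excites unit 3 — a rough visualization of what it "looks for"
print(activation(x, 3))
```

With a real convolutional network the same loop runs over pixels and a library's automatic differentiation replaces the hand-written gradient, but the principle is identical: the image is the variable being optimized, not the weights.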
It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.