Scientists do not only solve given problems. They also invent problems. Artificial scientists should do so, too. PowerPlay [1,2] does.
Consider the infinite set of all computable descriptions of problems with possibly computable solutions. Given a general problem-solving architecture, at any given time, PowerPlay searches the space of possible pairs of new problems and modifications of the parameters of the current problem solver, until it finds a more powerful solver that (unlike its unmodified predecessor) provably solves the new problem, without performing worse on previous self-invented problems.
By design, PowerPlay continually comes up with the fastest-to-find, initially novel, but eventually solvable problems. It also continually simplifies, compresses, or speeds up solutions to previous problems. The computational cost of validating newly invented problems and skills need not grow with the size of the skill repertoire. (The framework also allows for additional user-defined problems.)
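The core loop described above can be sketched in a few lines. This is a minimal toy illustration, not the implementation from the papers: `propose_pair`, `solves`, and the integer "solver" below are hypothetical stand-ins for an actual problem generator, proof/validation procedure, and problem-solver parameters.

```python
def powerplay_step(solver, repertoire, propose_pair, solves, budget=10000):
    """One PowerPlay iteration: search pairs (new problem, modified solver)
    until the modified solver (unlike its unmodified predecessor) solves the
    new problem, without failing on any previously self-invented problem.
    Returns the improved solver and grown repertoire, or None if the search
    budget is exhausted."""
    for _ in range(budget):
        problem, candidate = propose_pair(solver)
        if (solves(candidate, problem)            # new solver handles new problem
                and not solves(solver, problem)   # old solver does not (novelty)
                and all(solves(candidate, p) for p in repertoire)):  # no regressions
            return candidate, repertoire + [problem]
    return None


# Toy instantiation (purely illustrative): the "solver" is an integer n,
# and a problem with index k is solved by any solver with n >= k.
def make_problem(k):
    return lambda n: n >= k

def propose_pair(n):
    # Propose the next-harder problem together with a minimally modified solver.
    return make_problem(n + 1), n + 1

def solves(n, problem):
    return problem(n)

solver, repertoire = 0, []
for _ in range(3):
    solver, repertoire = powerplay_step(solver, repertoire, propose_pair, solves)
```

After three steps the toy solver has invented and mastered three increasingly hard problems while still solving all earlier ones, mirroring the no-regression constraint of the framework.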
Our PowerPlay implementation, based on self-delimiting recurrent neural networks (SLIM NNs) [3], automatically self-modularises. It frequently re-uses code for previously self-invented skills, always trying to invent novel problems that can be validated quickly because their solutions do not require too many parameter changes affecting too many previous problems.
PowerPlay may be viewed as a greedy implementation of the Formal Theory of Creativity [4].
PowerPlay’s ongoing search for novel problems and skills keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories (1931), based on adding formerly unprovable statements to the axioms without affecting previously provable theorems.
BTW, this is much more general than traditional Deep Learning [5].
[1] J. Schmidhuber. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Front. Psychol., 2013. http://www.frontiersin.org/cognitive_science/10.3389/fpsyg.2013.00313/abstract
[2] Rupesh K. Srivastava, Bas R. Steunebrink, J. Schmidhuber. First Experiments with PowerPlay. Neural Networks, 2013. http://www.sciencedirect.com/science/article/pii/S0893608013000373
[3] J. Schmidhuber. Self-Delimiting Neural Networks, 2012. http://arxiv.org/abs/1210.0118
[4] Formal Theory of Fun & Creativity & Curiosity & Intrinsic Motivation Explains Science, Art, Music, Humor. Key papers from 1990, 1991, 1995, 1997, 2002, 2006, 2007-2013 under http://www.idsia.ch/~juergen/creativity.html
[5] Deep Learning since 1991: our deep NNs have, so far, won 9 important contests in pattern recognition, image segmentation, and object detection. Deeplearn it! www.deeplearning.it
#reinforcementlearning #artificialintelligence #machinelearning