The abstract of DeepMind's recent publication in Nature [2] on learning to play video games claims: "While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces." It also claims to bridge "the divide between high-dimensional sensory inputs and actions." Similarly, the first sentence of the abstract of the earlier tech report version [1] of the article [2] claims to "present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning."
However, the first such system [3] was created earlier at the Swiss AI Lab IDSIA, the former affiliation of three of the authors of the Nature paper [2].
The system [3] was indeed able to "learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning" (quote from the abstract of [2]), without any unsupervised pre-training. It was successfully applied to various problems, such as video game-based race car driving from raw high-dimensional visual input streams.
It uses recent compressed recurrent neural networks [4] to deal with sequential video inputs in partially observable environments, while DeepMind's system [1,2] uses more limited feedforward networks for fully observable environments and other techniques from over two decades ago, namely, CNNs [5,6], experience replay [7], and temporal difference-based game playing, as in the famous self-teaching backgammon player [8], which 20 years ago already reached the level of human world champions (while the Nature paper [2] reports "more than 75% of the human score on more than half of the games").
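To make the two older ideas concrete, here is a minimal, hedged sketch (not the code of [1,2] or [7,8], and a toy tabular setting rather than a neural network): a temporal-difference Q-learning update [8] applied to transitions drawn at random from a bounded replay memory [7]. The environment, state encoding, and all parameter values are illustrative assumptions.

```python
import random

ACTIONS = (0, 1)  # toy action set, an assumption for this sketch

def td_update(Q, transition, alpha=0.1, gamma=0.99):
    """One temporal-difference (Q-learning) update from a stored transition."""
    s, a, r, s_next, done = transition
    best_next = 0.0 if done else max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def train(transitions, replay_size=100, batch=8, epochs=50, seed=0):
    """Store transitions in a bounded buffer and replay random minibatches."""
    rng = random.Random(seed)
    buffer = list(transitions)[-replay_size:]  # replay memory, oldest dropped
    Q = {}
    for _ in range(epochs):
        for t in rng.sample(buffer, min(batch, len(buffer))):
            td_update(Q, t)
    return Q

# Toy chain: in state 0, action 1 yields reward 1 and ends the episode;
# action 0 yields nothing and stays in state 0.
data = [(0, 1, 1.0, 1, True), (0, 0, 0.0, 0, False)]
Q = train(data)
```

Replaying stored transitions decorrelates the updates and reuses data; after training, the value of the rewarding action `Q[(0, 1)]` exceeds that of the unrewarding one.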
Neuroevolution has also successfully learned to play Atari games [9].
The article [2] also claims to describe "the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks". Since other learning systems can also solve quite diverse tasks, this claim seems debatable, to say the least.
Numerous additional relevant references can be found in Sec. 6 on "Deep Reinforcement Learning" of a recent survey [10]. A recent TED talk [11] suggests that the system [1,2] was a reason why Google bought DeepMind, indicating the commercial relevance of this topic.

References
[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller. Playing Atari with Deep Reinforcement Learning. Tech report, 19 Dec. 2013. http://arxiv.org/abs/1312.5602
[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis. Human-level control through deep reinforcement learning. Nature, vol. 518, pp. 529-533, 26 Feb. 2015. http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html
[3] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In Proc. Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, July 2013. http://people.idsia.ch/~juergen/gecco2013torcs.pdf
[4] J. Koutnik, F. Gomez, J. Schmidhuber. Evolving Neural Networks in Compressed Weight Space. In Proc. Genetic and Evolutionary Computation Conference (GECCO-2010), Portland, 2010. http://people.idsia.ch/~juergen/gecco2010koutnik.pdf
[5] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position - Neocognitron. Trans. IECE, J62-A(10):658-665, 1979.
[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
[7] L. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, Pittsburgh, 1993.
[8] G. Tesauro. TD-gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215-219, 1994.
[9] M. Hausknecht, J. Lehman, R. Miikkulainen, P. Stone. A Neuroevolution Approach to General Atari Game Playing. IEEE Transactions on Computational Intelligence and AI in Games, 16 Dec. 2013.
[10] J. Schmidhuber. Deep Learning in Neural Networks: An Overview. Neural Networks, vol. 61, pp. 85-117, 2015 (888 references; published online in 2014). http://people.idsia.ch/~juergen/deep-learning-overview.html
[11] L. Page. Where's Google going next? Transcript of TED event, 2014. https://www.ted.com/talks/larry_page_where_s_google_going_next/transcript?language=en