I'm going to start working on "Deep Neutral Networks" ASAP.
Unlike current deep models, Deep Neutral Networks are neither positive nor negative.
They consist of multiple layers of linear operators interspersed with "FlU" activation functions whose output is constant and equal to zero (FlU stands for Flat Unit). The big advantage of FlUs over ReLUs is that you don't need to use dropout. FlUs are dropped out by construction.
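A FlU layer takes one line to implement. Here is a minimal sketch (the names `flu` and `neutral_forward` are mine, not from any real library):

```python
import numpy as np

def flu(x):
    # Flat Unit: output is constant and equal to zero, regardless of input.
    return np.zeros_like(x)

def neutral_forward(x, weights):
    # A Deep Neutral Network: linear layers interspersed with FlU activations.
    for W in weights:
        x = flu(W @ x)
    return x

x = np.random.randn(4)
weights = [np.random.randn(4, 4) for _ in range(3)]
y = neutral_forward(x, weights)
# Every unit is "dropped out by construction": y == [0., 0., 0., 0.]
```

Note that no regularization hyperparameters are needed; the network is maximally regularized out of the box.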
You can pre-train Deep Neutral Networks with unsupervised learning, using NNNPMF (Neither Negative Nor Positive Matrix Factorization).
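NNNPMF is also trivial to implement: the only matrices that are neither negative nor positive are all-zero, and they factor exactly into zero factors. A sketch (the function name `nnnpmf` is mine):

```python
import numpy as np

def nnnpmf(V, rank):
    # Neither Negative Nor Positive Matrix Factorization:
    # the only admissible factors are all-zero, so the algorithm
    # converges in a single step.
    n, m = V.shape
    W = np.zeros((n, rank))
    H = np.zeros((rank, m))
    return W, H

V = np.zeros((5, 3))   # a neutral matrix: neither negative nor positive
W, H = nnnpmf(V, rank=2)
# Reconstruction is exact: W @ H == V, with zero error
```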
But unsupervised pre-training is superfluous since Neutral Networks never seem to exhibit overfitting problems. Their VC dimension doesn't depend on the number of parameters (it has been suggested that the VC dim of Neutral Nets is actually zero).
Neutral Nets have been shown to work much better than similarly mistyped methods, such as Adaboot, Support Sector Machines, Latent Dirigible Allocation, Local Linear Embezzling, Maximum Barging classifiers, Crassification Tees, Booted Stomps, Eulogistic Regression, Fixture of Russians, Constricted Boltzmann Machines, Principal Opponent Analysis, Variational Plays, and most Colonel Methods such as Prussian Grossest Regression.
A popular, but particularly complicated, variation of Neutral Nets is Convoluted Neutral Nets. They are designed to be invariant to shifts. In fact, they can be shown to be invariant to every single transformation known to humankind.
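The invariance claim is easy to verify: since every FlU output is zero, a Convoluted Neutral Net maps any input, shifted or otherwise transformed, to the same output. A sketch (the names `flu` and `convoluted_neutral_net` are mine):

```python
import numpy as np

def flu(x):
    # Flat Unit activation: constant zero output.
    return np.zeros_like(x)

def convoluted_neutral_net(signal, kernel):
    # One convolutional layer followed by a FlU activation.
    return flu(np.convolve(signal, kernel, mode="same"))

signal = np.random.randn(8)
shifted = np.roll(signal, 3)           # an arbitrary shift
kernel = np.random.randn(3)

y1 = convoluted_neutral_net(signal, kernel)
y2 = convoluted_neutral_net(shifted, kernel)
# Perfectly invariant to the shift (and to everything else): y1 == y2
```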