Hey everyone. I've been playing around with MNIST using a simple one-layer net, in the spirit of Adam Coates' 2011 AISTATS paper, where they argue that whitening provides the most significant improvement to performance. I found, however, that with MNIST whitening really messed everything up: accuracy dropped from .96 to around .7. It might be the specific type of whitening, I'm not sure. I'm using sklearn's PCA decomposition with whitening, then training an autoencoder on the transformed data.
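For context, here's a minimal NumPy sketch of what PCA whitening does (the function name and epsilon value are mine, not from sklearn). One thing worth checking: with eps=0 (which is effectively what sklearn's `PCA(whiten=True)` does when all components are kept), near-zero-variance directions, like MNIST's always-blank border pixels, get divided by a tiny number and blow up; Coates-style pipelines typically add a small regularizer to the eigenvalues for exactly this reason.

```python
import numpy as np

def pca_whiten(X, eps=1e-2):
    """PCA-whiten the rows of X.

    eps guards near-zero-variance components (e.g. MNIST border
    pixels), which otherwise get amplified enormously.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    cov = Xc.T @ Xc / X.shape[0]                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # cov = V diag(lam) V^T
    return Xc @ eigvecs / np.sqrt(eigvals + eps) # rotate + rescale

# Synthetic correlated data stands in for MNIST here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))

# With eps=0 and full-rank data, the whitened covariance is ~identity.
Xw = pca_whiten(X, eps=0.0)
print(np.allclose(Xw.T @ Xw / X.shape[0], np.eye(10), atol=1e-6))
```

Printing the whitened covariance against the identity is a quick sanity check; if your transformed MNIST data has a few features with huge magnitudes, that would point to the unregularized low-variance components as the culprit.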

Any thoughts?

Thanks in advance.