Behrang's interests

Behrang's posts

What makes a network focus on learning shapes and edges instead of textures (see figure for reconstructed output images on SVHN)?

How do we optimize the weights of a Siamese architecture? Do we find the weights of each sub-network separately and then add them?
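In the standard formulation the two sub-networks are not trained separately: they share a single set of weights, and the pairwise loss is backpropagated jointly through both branches. A minimal numpy sketch of the shared-weight forward pass (all shapes, names, and the one-layer embedding are illustrative, not a specific published architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# ONE set of weights, reused by BOTH branches (weight sharing).
W = rng.standard_normal((10, 4)) * 0.1

def embed(x, W):
    """Toy one-layer embedding; a real model would be deeper."""
    return relu(x @ W)

def siamese_distance(x1, x2, W):
    # Both inputs pass through the SAME weights; during training the
    # gradient of the loss w.r.t. W accumulates contributions from
    # both branches, so W is optimized once, jointly.
    return np.linalg.norm(embed(x1, W) - embed(x2, W))

x = rng.random(10)
# Identical inputs through shared weights give identical embeddings:
print(siamese_distance(x, x, W))  # 0.0
```

A contrastive or triplet loss on this distance is then minimized with ordinary gradient descent on the single weight set.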

I implemented a deep bottleneck auto-encoder with ReLU activations and no pretraining. Based on visualizing the weights, the hidden-unit activations, and the reconstruction error, the auto-encoder seems to be learning the correct features.

I used the middle (bottleneck) layer as input to an SVM, but I only get 12% accuracy on MNIST. Is that the right way to use the features for classification, and what do you think the problem is? How many middle-layer units are needed for ten classes?
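The pipeline described above can be sketched as follows. This is a hedged illustration only: the layer sizes (784 → 256 → 32) and the random stand-in weights are hypothetical, standing in for a trained encoder, and the SVM step is shown as a comment rather than run:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Encoder weights: random stand-ins for TRAINED autoencoder parameters.
# Sizes are illustrative (MNIST pixels -> hidden -> bottleneck).
W1 = rng.standard_normal((784, 256)) * 0.01
W2 = rng.standard_normal((256, 32)) * 0.01

def encode(X):
    """Map raw pixels to bottleneck activations (the learned features)."""
    return relu(relu(X @ W1) @ W2)

X = rng.random((5, 784))   # 5 fake "MNIST images"
codes = encode(X)          # shape (5, 32): one feature vector per image

# These codes are what would be fed to the classifier, e.g. with sklearn:
#   clf = sklearn.svm.SVC().fit(encode(X_train), y_train)
#   acc = clf.score(encode(X_test), y_test)
print(codes.shape)
```

Feeding bottleneck activations to a separate classifier is a standard way to use autoencoder features; near-chance accuracy usually points at the encoder (or preprocessing) rather than this pipeline shape itself.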

Why is maximizing the classification score used for visualizing hidden units? What is the intuition?

How can a deep network disentangle complex variations such as rotations?

Are there any papers about that?

In which year, exactly, did research on neural networks stop? Why did it stop, and what was the state of the art at that time?

If we know the underlying manifold of the data:

- Are the intrinsic dimensions orthogonal (uncorrelated), independent, or neither?

- Does it mean that starting from two different points on the manifold and moving in the same direction, we are adding the same variation to the data?

Does ReLU nonlinearity satisfy the theory of universal approximation using neural networks?

Does "invariance" imply "independence"?

Is this true?

Coarse coding is only efficient when the unit activities are discrete; otherwise it just introduces redundancy.
