Old demo of DrLIM (Dimensionality Reduction by Learning an Invariant Mapping) from 2006.

DrLIM is a "metric learning" criterion for training ML systems (including deep architectures) to produce an embedding. It applies to so-called "siamese architectures," in which two identical learning machines (sharing the same parameters) are each shown one of a pair of examples. When the examples are semantically similar (e.g. two portraits of the same person), the distance between the two output vectors is decreased. When the examples are semantically distinct, the output vectors are pushed apart with a force that decreases with distance and vanishes beyond a margin (a hinge).
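The attract/repel criterion above can be sketched in a few lines. This is a minimal NumPy sketch of the per-pair contrastive loss in the spirit of the 2006 paper, not the original implementation; the function and variable names are mine.

```python
import numpy as np

def contrastive_loss(y1, y2, similar, margin=1.0):
    """Per-pair DrLIM-style contrastive loss (sketch).

    y1, y2  -- embedding vectors from the two weight-shared branches
    similar -- True if the pair is semantically similar, else False
    margin  -- dissimilar pairs are pushed apart only while their
               distance is below this value (the hinge)
    """
    d = np.linalg.norm(y1 - y2)          # Euclidean distance between outputs
    if similar:
        return 0.5 * d ** 2              # attract: pull similar pairs together
    # repel: hinged term -- zero loss (and zero force) once the pair
    # is already farther apart than the margin
    return 0.5 * max(0.0, margin - d) ** 2
```

A dissimilar pair that already sits farther apart than the margin contributes zero loss, which is what keeps distant negatives from dominating the gradient.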

Similar methods have become widely used in recent years for image search (a series of papers on WSABIE by +Samy Bengio and +Jason Weston), body pose estimation (papers by +Graham Taylor), and face recognition (see the recent DeepFace system from Facebook AI Research by +Yaniv Taigman et al.).

Video of a talk on the subject at a NIPS 2006 workshop: http://videolectures.net/lce06_lecun_lsmip/

Relevant papers:
Raia Hadsell, Sumit Chopra and Yann LeCun: Dimensionality Reduction by Learning an Invariant Mapping, Proc. Computer Vision and Pattern Recognition Conference (CVPR'06), 2006.

Sumit Chopra, Raia Hadsell and Yann LeCun: Learning a Similarity Metric Discriminatively, with Application to Face Verification, Proc. Computer Vision and Pattern Recognition Conference (CVPR'05), 2005.

J. Bromley, I. Guyon, Y. LeCun, E. Säckinger and R. Shah: Signature Verification using a Siamese Time Delay Neural Network, in Cowan, J. and Tesauro, G. (Eds), Advances in Neural Information Processing Systems (NIPS 1993), vol 6, 1993.