OverFeat classifier / feature extractor + new detection state of the art on ILSVRC13
http://cilvr.nyu.edu/doku.php?id=software:overfeat:start
Today we are announcing the release of OverFeat, our convolutional net-based image classifier and feature extractor. We are also releasing new record-breaking results on the ILSVRC13 detection task.
OverFeat is a C library (with source code) that runs pre-trained ConvNets. It can be used to extract dense features in images or to classify objects from the ImageNet 1K categories. It is provided under a non-commercial license, together with example programs, demos, wrapper scripts, and two ConvNets trained on the ImageNet 1K dataset.
This release includes two trained ConvNets: (1) a large network, which is accurate but slow, and (2) a smaller network, which is faster but slightly less accurate. The large/accurate network yields a 14.71% top-5 error rate on the validation set of the ILSVRC13 classification task (ImageNet 1K) when using Krizhevsky's 10-view averaging. It yields 14.18% when the network is applied densely at multiple scales and flips and the outputs are combined with a voting mechanism (which is not provided, but easy to implement). Finally, it reaches a 13.24% error rate when averaging 7 similar models (13.6% on the test set).
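For readers unfamiliar with the evaluation protocol: 10-view averaging means averaging class scores over the four corner crops, the center crop, and the horizontal flip of each; a top-5 prediction counts as correct if the true label is among the five highest-scoring classes. A minimal NumPy sketch (the `model` callable and crop size are placeholders, not part of the OverFeat API):

```python
import numpy as np

def ten_view_predictions(image, model, crop):
    """Average class scores over the 4 corner crops, the center crop,
    and the horizontal flip of each (Krizhevsky's 10-view scheme).
    `model` is any callable mapping a crop to a score vector."""
    h, w = image.shape[:2]
    offsets = [(0, 0), (0, w - crop), (h - crop, 0),
               (h - crop, w - crop), ((h - crop) // 2, (w - crop) // 2)]
    scores = []
    for y, x in offsets:
        view = image[y:y + crop, x:x + crop]
        scores.append(model(view))
        scores.append(model(view[:, ::-1]))  # horizontal flip
    return np.mean(scores, axis=0)

def top5_error(scores, labels):
    """Fraction of examples whose true label is not among the
    5 highest-scoring classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]
    correct = [labels[i] in top5[i] for i in range(len(labels))]
    return 1.0 - np.mean(correct)
```

The multi-model number is obtained the same way, with the score vectors of the 7 networks averaged before ranking.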
The core of OverFeat is a C library provided with source code, but we also provide a Lua/Torch wrapper (with Python and Matlab wrappers coming soon). The scripts can process a single image or a batch of images, and can output the state of any layer in the ConvNet, including the output of the 1000-category classifier. Applying OverFeat to a large image produces a map of feature vectors (or outputs) for regularly-spaced windows on the image.
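The geometry of that feature map follows directly from the network's input window size and its total subsampling stride: each output vector corresponds to one window-sized crop, spaced one stride apart. A small sketch of the resulting grid (the window size and stride values below are illustrative, not the released networks' exact parameters):

```python
def dense_output_grid(image_h, image_w, window, stride):
    """Top-left corners of the windows covered when a ConvNet is
    applied densely to an image: one output vector per `window`-sized
    crop, spaced `stride` pixels apart (the net's total subsampling)."""
    ys = range(0, image_h - window + 1, stride)
    xs = range(0, image_w - window + 1, stride)
    return [(y, x) for y in ys for x in xs]
```

So an image exactly the size of the training window yields a single output vector, and every extra multiple of the stride in each dimension adds another row or column to the output map, at much lower cost than evaluating each window independently, since the convolutions are shared.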
Concurrently, we are releasing a paper containing a new record for mean average precision (mAP) on the 2013 ImageNet detection task: http://arxiv.org/abs/1312.6229
The paper describes the OverFeat system that participated in the ILSVRC13 and includes the following results:
- detection task: 24.3% mAP (post-competition). This establishes a new record (with UvA at 22.6%, NEC at 20.9%, and the pre-deadline version of OverFeat at 19.4%)
- localization task: 29.9% error (ranked 1st at the competition)
- classification task: 13.6% top-5 error (ranked 5th at the competition)
If you use OverFeat in your research, please cite the paper mentioned above. The paper is submitted to ICLR14.
The authors are Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun.