Nick Sergievskiy
About
Nick's posts

Post has shared content
Here is my third blog post in the "Deep Learning in a Nutshell" series. It gives a gentle introduction to sequence learning with a focus on natural language processing. #machinelearning #deeplearning #ai

Post has shared content
A new near real-time pedestrian detection algorithm using deep learning which could be used in “smart” vehicles.

Post has shared content
Video tutorial for training a Convolutional Neural Network for flower classification using DIGITS 3 on Amazon EC2

http://www.learnopencv.com/deep-learning-example-using-nvidia-digits-3-on-ec2/

Post has attachment
Hello,
I want to understand the benefits of scale augmentation. It is standard practice for the Oxford group and other labs. We test two scale-augmentation approaches: fixed scales chosen with fixed probabilities, and random scales. https://github.com/ducha-aiki/caffenet-benchmark "Multiscale": https://github.com/ducha-aiki/caffenet-benchmark/blob/master/prototxt/augmentation/caffenet_lsuv_no_lrn_multiscale.prototxt
Scales 130, 144, 188, 256
"Base_dereyly 3x1 scale aug": https://github.com/ducha-aiki/caffenet-benchmark/blob/master/prototxt/contrib/caffenet128_1x3_sz_augm.prototxt
Scales from 128 to 300
Scale (fixed) augmentation drops accuracy from 0.47 to 0.462,
and (random) drops it from 0.553 to 0.530 (144 test resize) or 0.512 (214 test resize).
0.553 is the hard warp resize to 144x144; the augmentation tests are resized to 144xN or Nx144.
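For reference, the random-scale variant can be sketched roughly like this: resize the shorter side to a random target in the 128-300 range, then take a random 128x128 crop. This is my simplified reconstruction, not the benchmark's actual prototxt pipeline; the nearest-neighbour resize and the helper name `random_scale_crop` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_scale_crop(img, crop=128, scale_range=(128, 300)):
    """Resize the shorter side to a random length in scale_range
    (nearest-neighbour, no external deps), then random-crop crop x crop."""
    h, w = img.shape[:2]
    target = int(rng.integers(scale_range[0], scale_range[1] + 1))
    s = target / min(h, w)
    nh = max(crop, int(round(h * s)))
    nw = max(crop, int(round(w * s)))
    # nearest-neighbour resize via integer index maps
    ys = (np.arange(nh) * h / nh).astype(int)
    xs = (np.arange(nw) * w / nw).astype(int)
    resized = img[ys][:, xs]
    y0 = int(rng.integers(0, nh - crop + 1))
    x0 = int(rng.integers(0, nw - crop + 1))
    return resized[y0:y0 + crop, x0:x0 + crop]

img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
out = random_scale_crop(img)
print(out.shape)  # (128, 128, 3)
```

Each training sample then sees the object at a different effective scale, which is the property the benchmark is probing.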

This is a benchmark on ImageNet with a 128 crop size (Top-1 accuracy).
Training uses BatchNorm for 160K iterations, with the learning rate multiplied by 0.915 after every 2000 iterations.
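The schedule above is a standard Caffe-style step decay; a minimal sketch (the base learning rate of 0.01 is my assumption, the post does not state it):

```python
# Step decay: lr is multiplied by gamma once per `step` iterations.
base_lr = 0.01      # assumed; not given in the post
gamma, step = 0.915, 2000

def lr_at(it):
    """Learning rate at iteration `it` under step decay."""
    return base_lr * gamma ** (it // step)

print(lr_at(0))       # 0.01
print(lr_at(160_000))  # after 80 decay steps, roughly 1e-5 scale
```

Over the full 160K iterations this amounts to 80 decay steps, i.e. a roughly 1200x reduction from the starting rate.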

Any idea why scale augmentation does not work in this test?
Maybe the augmented data is harder and the model underfits?


Post has attachment
My ceramics
13 Photos