Profile

Ran Manor
Attends Ben-Gurion University of the Negev

Stream

Ran Manor

Discussion
 
tl;dr - how to force a network to learn "low-level" features?

I'm working on a convolutional neural network on some biological data. The data has some variance between subjects, so up until now I trained a network per subject.
Now I want to try a network on multiple subjects, assuming that the network will learn some common features between the subjects and so the extra data will improve my classification performance.
Unfortunately, that doesn't happen.
The performance I get from multiple subjects is very low.
My net is deep: 3 convolutional layers and 2 fully connected layers.
Is there any trick to force the net to learn low-level features better?
Thanks.
 
+Ran Manor Thinking about the ability of the network to learn an FFT transformation, I have the intuition that a layer with a sine activation function would be necessary, imho. I feel that logistic, ReLU, tanh, etc. cannot learn FFT filters. But I never tried this idea ;-)

Ran Manor

Discussion
 
*The function of stride in convolutional neural networks*

I often see in implementations of CNNs that there is a small amount of stride in the convolution layer and in the pooling layer. 
I understand that the stride helps to reduce the dimension and it skips nearby samples which are usually highly correlated.
My question is: why is there stride in both the convolution and the pooling? Why not just in one of them?
Can someone give me a better intuition behind it?

Thank you.

Ran Manor

"Check out my app!"  - 
 
 
Easy Notifications is a new notification app with the missing toggles for a...
Ran Manor

Discussion
 
I'm trying a supervised neural network on data that has two unbalanced classes: class 0 is 90% of the data and class 1 is 10%. I replicate the smaller class for training so the gradients won't pull in only one direction. It works reasonably well and the network performance is almost balanced. I'm training using standard gradient descent with a fixed learning rate, and my network has sigmoid units.
I've noticed that if I start using momentum or rectified linear units (separately) then the network performance starts to skew towards class 0 (the bigger class).
This is a weird effect and I don't have a good idea on how to explain it.
Any ideas?
Thanks.
 
We have also been working with unbalanced binary tasks. One possibility to improve results is to replace the loss function with one that takes precision/recall into account, such as the F1-score or similar loss functions. Approximating the gradient for this kind of measure is possible, and it leads to better F1 performance on the test set. In case you are interested, here are the slides of our work.

http://www.slideshare.net/franciscozamoraceu/iwann2013
Ran Manor

Discussion
 
Is there source code available for using an HMM with a neural network for speech recognition?
I couldn't find anything and the papers weren't very clear to me.
Thanks. 
 
You can find the source code for KALDI here:
http://kaldi.sourceforge.net
and another NN implementation here:
http://www.cs.cmu.edu/~ymiao/kaldipdnn.html

Ran Manor

Shared publicly
 
Just say no
Following NEXTER's exposé last night (Sunday) that the IDF and other security bodies forbid their personnel from taking part in the biometric database pilot, the pilot's opponents are once again raising the claim that this is a collection of da...
 
+ido david By the way, you yourself said how many false positives there are in biometric data. Now imagine the police find fingerprints at a crime scene and search the database for similar prints. Would you like to be taken in for questioning because your fingerprint is a little similar to what they found?

Ran Manor

Shared publicly
 
Funny 
Piotr Michael's Celebrity Impressions Public Service Announcement for Academy Award Nominees. Featuring impressions of: Maggie Smith Charlie Sheen Jeff Bridg...
Ran Manor

Shared publicly
 
Should be interesting!
 
Geoff Hinton is doing a reddit AMA on Monday:

http://redd.it/2lmo0l
I design learning algorithms for neural networks. My aim is to discover a learning procedure that is efficient at finding complex structure in...

Ran Manor

Shared publicly
 
ח"כ, סגן ראש עירייה ועובד בכיר בעירייה מסבירים למה פוליטיקאים חייבים לשרת קבוצות אינטרס
Ran Manor

Discussion
 
Learning rate advice

How should the learning rate change per layer in a deep neural network? Should it be lower in the layers near the input and higher in the layers near the output?

Currently I have a [convolutional] neural network which works quite nicely with two layers (actually one hidden layer and an output layer). When I add another fully connected layer, performance drops and the final loss function value ends up much higher, i.e. worse.
I thought that a variable learning rate might be one way to help, but other advice is welcome. :)

Thanks.
 
The reason I did it is that I measured the amount of parameter change in the different layers, and noticed that the parameter change was much larger in the last layer (and also the second-to-last layer).  And decreasing the learning rate for those layers seemed to help.

Ran Manor

Shared publicly
 
 
This week's scandalous fee: a fee for a printout at the customer's request. Do you, too, fail to see why it costs more than a shekel to print a piece of paper showing the state of your account?

That's how it is when there is no competition. This is why we are founding 'Ofek' - the first bank in Israel where the customers will also be the owners.

The first bank in Israel that will work for its customers and look for ways to give them more - not as a marketing slogan but as a way of doing business.

More about Ofek: http://bit.ly/1cxdqYh
Join Ofek: http://bit.ly/1hG6FaZ
Work
Occupation
PhD Student
Skills
drums, guitar and machine learning.
Basic Information
Gender
Male
Story
Tagline
Piled higher and deeper
Introduction
PhD student with interest in machine learning and deep learning.
Education
  • Ben-Gurion University of the Negev
    Electrical & Computer Engineering, 2006 - present
96 reviews
Very good but not enough food for this price.
Public - reviewed 2 weeks ago
Great food, large dishes.
Public - reviewed 2 weeks ago