Boris Kazachenko
98 followers
meta-cogito ergo sum

Posts

What’s wrong with CNNs: dirty similarity to coarse kernels.
CNN is currently the most successful method, so I will use it as a baseline to introduce my approach.

First, the similarity measure in a CNN is a product of input and kernel.
Why a product? For the same value of input + kernel, input * kernel obviously overweights close input-kernel pairs vs. distant ones. So it does detect similarity, but in a dirty way: there is no differentiation between match and difference. Both are included in the product, although match is overrepresented. But difference obviously doesn’t belong in a value of similarity; match and difference are mutually exclusive concepts. I think this conflation accounts for “confirmation bias”, both in CNNs and in the human mind.
I suggest that match (similarity) should instead be a measure of compression; see part 1 of my intro: www.cognitivealgorithm.info
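The contrast between product similarity and a compression-based match can be sketched as follows. Here match is taken as min() of the comparands, which is one possible reading of “match as a measure of compression”; the precise definition is in the linked intro, so treat this as an illustration, not the project’s actual measure.

```python
# Sketch: dot-product similarity vs. a compression-style match.
# min() of comparands is an assumed stand-in for "match as compression".

def product_similarity(inputs, kernel):
    # CNN-style similarity: sum of elementwise products.
    # Mixes match and difference into one value.
    return sum(i * k for i, k in zip(inputs, kernel))

def min_match(inputs, kernel):
    # Compression-style match: the shared magnitude of the comparands,
    # i.e. the part of one value that is predictable from the other.
    return sum(min(i, k) for i, k in zip(inputs, kernel))

identical = ([4, 4], [4, 4])
skewed = ([3, 5], [5, 3])  # same pairwise sums (each pair adds to 8)

print(product_similarity(*identical), product_similarity(*skewed))  # 32 30
print(min_match(*identical), min_match(*skewed))                    # 8 6
```

Both measures rank the identical pair higher for the same sums, but the product still carries the non-shared part of each pair, while min() counts only what the comparands have in common.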

Second, both input and kernel are arrays, and the kernel is formed over many instances of training. Thus input-to-kernel comparison is far coarser, and potentially less selective and efficient, than the comparison between adjacent / consecutive pixels in my algorithm. This accounts for the abysmal scalability of CNNs.
There are many more theoretically justified distinctions, covered in the link above.
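A minimal sketch of what comparison between consecutive pixels could look like, as opposed to array-to-kernel comparison. The names (p, _p, d, m) follow the convention in the pseudocode doc; evaluation and pattern formation are left out, so this is only an illustration of the comparison step.

```python
# Sketch: per-pixel comparison over a horizontal scanline.
# Each adjacent pair yields a difference d and a match m (min of comparands).

def compare_scanline(line):
    derivatives = []
    for _p, p in zip(line, line[1:]):  # _p is the prior pixel
        d = p - _p       # difference: the non-shared part
        m = min(p, _p)   # match: the shared part, a measure of compression
        derivatives.append((d, m))
    return derivatives

print(compare_scanline([10, 12, 12, 9]))  # [(2, 10), (0, 12), (-3, 9)]
```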

This is an open project; pseudocode: https://docs.google.com/document/d/1GElEvTlwwshZl1Z8KYetMoCPGxfJzq7zKC8G2rll160/edit?usp=sharing
But I am willing to pay for long-term collaboration; please make me an offer if interested.

Core algorithm adapted for image recognition: pseudocode

Level 1: comparison of consecutive pixels in horizontal scanline, evaluation of resulting match, pattern formation:

frame (Y, F, line_P [F], p_[H], p_[H [F]]) { // unfolds image p_[H [F]] into horizontal scanlines p_[H];
// _ as the last character distinguis...
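The Level-1 idea above can be approximated in runnable form: unfold an image into horizontal scanlines, compare consecutive pixels, and segment each line into patterns by the sign of match relative to an average. The names (P, M, ave) and the fixed threshold are illustrative assumptions; the actual algorithm in the linked doc carries far more derivatives per pattern.

```python
# Sketch of Level 1: scanline comparison and sign-based pattern formation.
# ave is an assumed match threshold (a tunable filter in the real project).

def form_patterns(line, ave=10):
    patterns, current = [], None
    for _p, p in zip(line, line[1:]):
        m = min(p, _p) - ave          # deviation of match from average
        sign = m > 0
        if current is None or sign != current["sign"]:
            if current:
                patterns.append(current)  # close same-sign span P
            current = {"sign": sign, "M": 0, "pixels": [_p]}
        current["M"] += m                 # accumulate match deviation
        current["pixels"].append(p)
    if current:
        patterns.append(current)
    return patterns

def frame(image):
    # unfold the image into horizontal scanlines, form patterns per line
    return [form_patterns(line) for line in image]

print(frame([[12, 14, 3, 2], [5, 6, 20, 22]]))
```

Each pattern here is a contiguous span of above- or below-average match, with its accumulated match deviation M; that is the pattern-formation step the pseudocode’s frame() performs per scanline.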

Theoretical fringe: I propose starting with individual-input comparison rather than weighted summation.
Non-neuro, sub-statistical deep learning: www.cognitivealgorithm.info
 