Ryan's posts

Post has shared content

Public

Interesting typo in a quote of mine from a recent press article: "neurally inspired" -> "neutrally inspired".

I'm going to start working on "Deep Neutral Networks" ASAP.

Unlike current deep models, Deep Neutral Networks are neither positive nor negative.

They consist of multiple layers of linear operators interspersed with "FlU" activation functions whose output is constant and equal to zero (FlU stands for Flat Unit). The big advantage of FlUs over ReLUs is that you don't need to use dropout: FlUs are dropped out by construction.
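For the avoidance of all doubt (and all gradients), here is a minimal sketch of a Deep Neutral Network in Python; the function names are my own invention:

```python
import numpy as np

def flu(x):
    """Flat Unit activation: output is constant and equal to zero."""
    return np.zeros_like(x)

def deep_neutral_network(x, n_layers=3, seed=0):
    """Multiple layers of linear operators interspersed with FlU activations."""
    rng = np.random.default_rng(seed)
    for _ in range(n_layers):
        w = rng.standard_normal((x.shape[-1], x.shape[-1]))
        x = flu(x @ w)  # dropped out by construction
    return x
```

As promised, the output is neither positive nor negative, and the weights have no effect whatsoever on the result.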

You can pre-train Deep Neutral Networks with unsupervised learning, using NNNPMF (Neither Negative Nor Positive Matrix Factorization).

But unsupervised pre-training is superfluous since Neutral Networks never seem to exhibit overfitting problems. Their VC dimension doesn't depend on the number of parameters (it has been suggested that the VC dim of Neutral Nets is actually zero).

Neutral Nets have been shown to work much better than similarly mistyped methods, such as Adaboot, Support Sector Machines, Latent Dirigible Allocation, Local Linear Embezzling, Maximum Barging classifiers, Crassification Tees, Booted Stomps, Eulogistic Regression, Fixture of Russians, Constricted Boltzmann Machines, Principal Opponent Analysis, Variational Plays, and most Colonel Methods such as Prussian Grossest Regression.

A popular, but particularly complicated, variation of Neutral Nets is Convoluted Neutral Nets. They are designed to be invariant to shifts. In fact, they can be shown to be invariant to every single transformation known to humankind.

Post has attachment

Public

As a dad, a programmer, and a BBN employee, I had to back this.

Post has shared content

Public

A nice graphical introduction to share with git beginners.

Learn #Git Branching with this incredibly well-thought-out interactive game

Post has shared content

Public

Wow, Google Plus actually had something interesting in its "What's Hot and Recommended" for once. I'm seriously considering doing this.

Using clutch pedals with vim. #winning

Post has attachment

Public

There are few things better than finding a new author you greatly enjoy, and then finding out they've written 47 novels of fairly even quality.

Post has attachment

Public

Post has attachment

Public

+Dianna Gabbard and I do this all the time, and I highly recommend it with the right work. Since we got married we've done *The Lord of the Rings*, *The Silmarillion*, the *Divine Comedy* (all of it - *Purgatory* is the best), both of Anne Fadiman's excellent books of essays on reading, and are currently halfway through Chesterton's collected Father Brown stories. I'm sure she'll remind me of some things I've forgotten.

Post has shared content

Public

"Classic Nintendo Games are (NP-)Hard"

"For Mario and Donkey Kong, we show NP-completeness. In addition, we observe that several games in the Zelda series are PSPACE-complete"

Via: Scott Aaronson

Public

"This setup is based on an exhibit from the early 1950s at the Museum of Science and Industry in Chicago, where the author was first introduced to the magic of switching circuits. The machine in Chicago, designed circa 1940 by W. Keister of Bell Telephone Laboratories, allowed me to go first; yet I soon discovered there was no way to defeat it. Therefore I decided to move as stupidly as possible, hoping that the designer had not anticipated such bizarre behavior. In fact I allowed the machine to reach a position where it had two winning moves; and it seized *both* of them! Moving twice is of course a flagrant violation of the rules, so I had won a moral victory even though the machine announced I had lost." ~ footnote in Donald Knuth, *The Art of Computer Programming, Volume 4*

Post has shared content

Public

Here's how you explain virus reproduction to a 4-year-old (who's been eavesdropping on his big sister's science questions):

"Suppose you find Daddy's list of things to make for dinner. So *you* get a pen and write 'Cookies' at the end of the list. What would happen? Oh, you're so sneaky!

"Now, imagine that instead of you, *a cookie* snuck out of the jar and wrote 'Cookies' at the end of the list. So the cookie tricked Daddy into making more cookies!"