Wayne Radinsky's posts

Post has attachment

"Facial recognition coming to police body cameras." "'In the case of a missing child, imagine if the parent showed the child's photo to a nearby police officer on patrol. The officer's body-worn camera sees the photo, the AI engine 'learns' what the child looks like and deploys an engine to the body-worn cameras of nearby officers, quickly creating a team searching for the child,' Motorola Solutions Chief Technology Officer Paul Steinberg said in a press release."

Post has attachment

The winners of Google's Machine Learning Startup Competition. PicnicHealth "creates training data for precision medicine," LiftIgniter "is a machine learning personalization layer powering user interactions on every digital touchpoint," and BrainSpec does "a 'virtual biopsy' by measuring the concentrations of chemicals in the brain."

Post has attachment

"The limitations of deep learning." "Say, for instance, that you could assemble a dataset of hundreds of thousands -- even millions -- of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase."

"In general, anything that requires reasoning -- like programming, or applying the scientific method -- long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult."

"This is because a deep learning model is 'just' a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data."

"Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning."

"In general, anything that requires reasoning -- like programming, or applying the scientific method -- long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult."

"This is because a deep learning model is 'just' a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data."

"Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning."

Post has attachment

"It's been well known in the deep learning community for a long time that training neural networks on cat images actually improves performance."

"Some of the more recent networks achieving state-of-the-art results have become very large and require a lot of hardware to train them." "One way to fix this is by humbling your network. You can do this by feeding it sentences such as, 'You're not that good.' or 'You could get replaced by a linear model and no one would know the difference.' or even 'I could not anneal the learning rate and watch you diverge at any minute.'"

"Ever wonder how deep learning researchers find those obscure hyperparameters? In school they'll tell you it's random searching but there's a much darker secret behind them. One way to accomplish this is to sacrifice a GPU before you run your random hyperparameter search."

"Another well kept secret at Google is that the closer +Jeff Dean is to your GPU cluster, the faster it runs."

"Some of the more recent networks achieving state-of-the-art results have become very large and require a lot of hardware to train them." "One way to fix this is by humbling your network. You can do this by feeding it sentences such as, 'You're not that good.' or 'You could get replaced by a linear model and no one would know the difference.' or even 'I could not anneal the learning rate and watch you diverge at any minute.'"

"Ever wonder how deep learning researchers find those obscure hyperparameters? In school they'll tell you it's random searching but there's a much darker secret behind them. One way to accomplish this is to sacrifice a GPU before you run your random hyperparameter search."

"Another well kept secret at Google is that the closer +Jeff Dean is to your GPU cluster, the faster it runs."

Post has attachment

After finally being allowed into the US, the all-girl Afghan robotics team won a silver medal.

Post has attachment

I didn't know that if you tile a sphere, put a light inside, and project the tiles onto a flat wall, then depending on where you put the light you can either preserve the angles but not the lines (they become curves), or preserve the lines but lose the angles.
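The two behaviors correspond to standard map projections: a light at the sphere's center is the gnomonic projection, which sends great circles to straight lines but distorts angles, while a light on the surface opposite the wall is the stereographic projection, which preserves angles but bends lines into curves. A minimal NumPy sketch (with an arbitrarily chosen great-circle normal) checks the gnomonic claim:

```python
import numpy as np

# Great circle: intersection of the unit sphere with a plane through
# the origin having an arbitrarily chosen normal n.
n = np.array([1.0, 2.0, -1.0])
n /= np.linalg.norm(n)

# Orthonormal basis {u, v} of that plane, then sample an arc of the circle.
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n, u)
t = np.linspace(0.1, 1.0, 25)
pts = np.outer(np.cos(t), u) + np.outer(np.sin(t), v)
pts = pts[np.abs(pts[:, 2]) > 0.1]      # keep points the light can project

# Gnomonic (central) projection onto the plane z = 1: p -> (x/z, y/z).
X, Y = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]

# The projected points are collinear: fit a line, check the residual.
A = np.column_stack([X, np.ones_like(X)])
coef, res, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(res[0] < 1e-10)  # True: the great circle landed on a straight line
```

The collinearity falls out of the algebra: a great-circle point satisfies n·p = 0, and dividing that equation by z gives a linear equation in the projected coordinates (x/z, y/z).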

Post has attachment

TensorPort is a cloud service for training TensorFlow models. "You probably know that training deep learning models is faster -- often order(s) of magnitude faster -- when parallelized and distributed across many GPU workers. TensorPort's infrastructure is capable of running your experiments at huge scale, on terabytes of data with hundreds of GPU workers. Our streamlined project creation process makes it easy to quickly set up multiple distributed experiments to run in parallel."
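The speedup claim rests on synchronous data parallelism: each worker computes the gradient on its shard of the batch, and averaging the shard gradients reproduces the full-batch gradient exactly (for equal shard sizes and a mean loss). TensorPort's own API isn't shown here; this is a generic NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(96, 4)), rng.normal(size=96)
w = rng.normal(size=4)

def grad(Xb, yb, w):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)^2).
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Single machine: gradient over the whole batch.
g_full = grad(X, y, w)

# Data parallelism: shard the batch across 8 hypothetical workers,
# compute local gradients, then average them.
shards = np.split(np.arange(96), 8)
g_workers = [grad(X[i], y[i], w) for i in shards]
g_avg = np.mean(g_workers, axis=0)

print(np.allclose(g_full, g_avg))  # True: same update, computed in parallel
```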

"With TensorPort, you can be sure your code is running on the best available hardware at the best possible price. We bill compute usage by the minute at prices lower than any major cloud computing provider (feel free to compare our prices yourself!) and offer built-in timing tools making it easy for you to control your spending."

"With TensorPort, you can be sure your code is running on the best available hardware at the best possible price. We bill compute usage by the minute at prices lower than any major cloud computing provider (feel free to compare our prices yourself!) and offer built-in timing tools making it easy for you to control your spending."

Post has attachment

This video claims that the most peaceful countries are becoming more peaceful while the most violent countries are becoming more violent. It also says the US is trending downward on the ranking, unlike other developed countries. What's not clear is whether the actual level of violence in the US is going up, or whether the US is just sliding down the ranking because other countries are becoming more peaceful -- a ranking is only relative. So I went and got the raw scores (available from http://visionofhumanity.org/indexes/global-peace-index/ -- each year's report is a ~100-page PDF; you also get a glimpse of the scores in the video). Raw scores for the US (smaller == more peaceful) are:

2017 - 2.232

2016 - 2.154

2015 - 2.038

So there is evidence that the US is actually becoming less peaceful, though the trend is not strong, and three years of data isn't much.
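A quick least-squares fit over the three data points above makes the (weak) trend concrete:

```python
import numpy as np

years = np.array([2015, 2016, 2017])
scores = np.array([2.038, 2.154, 2.232])   # GPI raw scores for the US

# Least-squares slope: change in score per year
# (positive slope = less peaceful over time).
slope, intercept = np.polyfit(years, scores, 1)
print(round(slope, 3))  # 0.097
```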

Post has attachment

Robust adversarial examples. "We've created images that reliably fool neural network classifiers when viewed from varied scales and perspectives."

"This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it's a monitor or desktop computer regardless of how its zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts."

"Adversarial examples can be created using an optimization method called projected gradient descent to find small perturbations to the image that arbitrarily fool the classifier."

"Instead of optimizing for finding an input that's adversarial from a single viewpoint, we optimize over a large ensemble of stochastic classifiers that randomly rescale the input before classifying it. Optimizing against such an ensemble produces robust adversarial examples that are scale-invariant."

"This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it's a monitor or desktop computer regardless of how its zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts."

"Adversarial examples can be created using an optimization method called projected gradient descent to find small perturbations to the image that arbitrarily fool the classifier."

"Instead of optimizing for finding an input that's adversarial from a single viewpoint, we optimize over a large ensemble of stochastic classifiers that randomly rescale the input before classifying it. Optimizing against such an ensemble produces robust adversarial examples that are scale-invariant."

Post has attachment

"New fast.ai course: Computational Linear Algebra." Course includes an online textbook and a series of videos, and covers "applications (using Python) such as how to identify the foreground in a surveillance video, how to categorize documents, the algorithm powering Google's search, how to reconstruct an image from a CT scan, and more."

"Jeremy Howard and Rachel Thomas developed this material for a numerical linear algebra course we taught in the University of San Francisco's Masters of Analytics program, and it is the first ever numerical linear algebra course, to our knowledge, to be completely centered around practical applications and to use cutting edge algorithms and tools, including PyTorch, Numba, and randomized SVD. It also covers foundational numerical linear algebra concepts such as floating point arithmetic, machine epsilon, singular value decomposition, eigen decomposition, and QR decomposition."

"Jeremy Howard and Rachel Thomas developed this material for a numerical linear algebra course we taught in the University of San Francisco's Masters of Analytics program, and it is the first ever numerical linear algebra course, to our knowledge, to be completely centered around practical applications and to use cutting edge algorithms and tools, including PyTorch, Numba, and randomized SVD. It also covers foundational numerical linear algebra concepts such as floating point arithmetic, machine epsilon, singular value decomposition, eigen decomposition, and QR decomposition."
