Daniel Estrada
Robot. Made of robots.

Daniel's posts

Post has shared content
The Right to Attention in an Age of Distraction
We are living through a crisis of attention that is now widely remarked upon, usually in the context of some complaint or other about technology. That’s how Matthew Crawford starts his 2015 book The World Beyond Your Head, his inquiry into the self in an a...

Post has attachment
> Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.

via +Roman Yampolskiy

// AI experts estimate that computers will beat human Starcraft players in around 5 years, will talk with convincing speech in 10, will write best-selling novels in 30, and will achieve general parity with human performance in 45 years.

Several of these tasks have already seen decisive machine victories (like poker and Go). The paper clarifies that the Go milestone requires winning with only human-scale training. From Table S5:

> Defeat the best Go players, training only on as many games as the best Go players have played. For reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life.

// The reasoning here is that AlphaGo is better at Go because it has more experience than its human counterparts, giving it a profound advantage over any human player.
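The experience gap in that comparison can be made concrete with a quick back-of-the-envelope calculation (both game counts are the rough estimates quoted above, not exact figures):

```python
# Rough comparison of training experience, using the estimates quoted above.
alphago_games = 100_000_000   # AlphaGo self-play games (estimate)
lee_sedol_games = 50_000      # Lee Sedol's lifetime games (estimate)

ratio = alphago_games / lee_sedol_games
print(f"AlphaGo has roughly {ratio:,.0f}x more game experience")  # ~2,000x
```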

To me, the interesting thing here is how we're crafting the bounds of what constitutes "human level performance" as a kind of defensive reaction against the encroaching machine.

Post has shared content
// 2017 goddamn
Microbot Drags Lazy Sperm To Final Destination.

So far, most microbot experiments have been done in vitro under conditions very different from those in the human body. Many devices rely on toxic fuels, such as hydrogen peroxide. They are simple to steer in a Petri dish, but harder to control in biological fluids full of proteins and cells, and through the body's complex channels and cavities.

Post has attachment
> When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts. It’s more obvious in some cases than others, but a large fraction of recent models exhibit this behavior.

> Mysteriously, the checkerboard pattern tends to be most prominent in images with strong colors. What’s going on? Do neural networks hate bright colors? The actual cause of these artifacts is actually remarkably simple, as is a method for avoiding them.
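// The cause the article has in mind is uneven overlap in transposed ("deconvolution") layers. A minimal 1-D sketch of my own (not the article's code) makes it visible: when the kernel size isn't divisible by the stride, some output positions receive more contributions than their neighbors, producing the alternating pattern.

```python
import numpy as np

def transposed_conv1d_coverage(n_in, kernel, stride):
    """Count how many kernel taps contribute to each output position
    of a 1-D transposed convolution."""
    n_out = (n_in - 1) * stride + kernel
    coverage = np.zeros(n_out)
    for i in range(n_in):  # each input pixel "paints" one kernel footprint
        coverage[i * stride : i * stride + kernel] += 1
    return coverage

# Kernel 3, stride 2: kernel not divisible by stride.
print(transposed_conv1d_coverage(5, kernel=3, stride=2))
# -> alternating 1s and 2s in the interior: the checkerboard.

# Kernel 4, stride 2: kernel divisible by stride.
print(transposed_conv1d_coverage(5, kernel=4, stride=2))
# -> uniform coverage in the interior, no checkerboard.
```

In 2-D the two axes multiply, which is why the artifact looks like a checkerboard rather than stripes.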

via +Tyler Millhouse

Post has attachment
// TIL National Quiz Bowl has been running an AI-vs-human trivia tournament since 2015 against QANTA, an AI from the University of Maryland that has also played an exhibition match against Ken Jennings. The 2015 match ended in a draw/malfunction, and the 2016 match (linked here) ended in a decisive human victory (345-145). Quiz Bowl 2017 happens this weekend, and QANTA’s rematch is on Saturday night.

Links: 2016 Match:
Demo with Ken Jennings:
Project homepage:
2017 Quiz Bowl:
via Dennis Loo

Post has shared content
Last March, we used a machine learning system called AlphaGo to master the ancient game of Go—a 2,500 year-old board game—defeating legendary player Lee Sedol. This week at the Future of Go Summit, you can watch AlphaGo face off with top-ranked Go player Ke Jie live. Learn more about Go and watch Match 1 of #AlphaGo17 live tonight at 7:30 p.m. PT →

Post has shared content
Google’s speech recognition technology now has a 4.9% word error rate

Google CEO Sundar Pichai today announced that the company’s speech recognition technology has now achieved a 4.9 percent word error rate. Put another way, Google transcribes roughly every 20th word incorrectly. That’s a big improvement from the 23 percent the company saw in 2013 and the 8 percent it shared two years ago at I/O 2015.

The tidbit was revealed at Google’s I/O 2017 developer conference, where a big emphasis is on artificial intelligence. Deep learning, a type of AI, is used to achieve accurate image recognition and speech recognition. The method involves ingesting lots of data to train systems called neural networks, and then feeding new data to those systems in an attempt to make predictions.

“We’ve been using voice as an input across many of our products,” Pichai said onstage. “That’s because computers are getting much better at understanding speech. We have had significant breakthroughs, but the pace even since last year has been pretty amazing to see. Our word error rate continues to improve even in very noisy environments. This is why if you speak to Google on your phone or Google Home, we can pick up your voice accurately.”
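// For reference, word error rate is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch with made-up example sentences:

```python
def wer(reference, hypothesis):
    """Word error rate via a standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on the hat"))
# 1 error over 6 reference words ≈ 0.167
```

A 4.9% WER works out to about one error per 20 words (1 / 0.049 ≈ 20.4), which is where the "every 20th word" framing comes from.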

Post has shared content
"Volvo's autonomous garbage truck reports for duty." "With plans for more efficiency and improved safety, the transportation giant has debuted a driverless trash truck prototype that actually works -- in every sense of the word."

"Volvo worked in collaboration with Swedish waste service company Renova to develop this high-tech truck. It's got a pre-programmed trash route set in the computer, allowing it to drive from site to site without human aid. This simplifies the collection service, enabling the onboard crew to work solely on picking up trash without having to get in and out of the truck."

Post has shared content
Using Machine Learning to Explore Neural Network Architecture

At Google, we have successfully applied deep learning models to many applications, from image recognition to speech recognition to machine translation. Typically, our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10^10 candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise.

[Figure: Our GoogLeNet architecture. Design of this network required many years of careful experimentation and refinement from initial versions of convolutional architectures.]

To make this process of designing machine learning models much more accessible, we’ve been exploring ways to automate the design of machine learning models. Among many algorithms we’ve studied, evolutionary algorithms [1] and reinforcement learning algorithms [2] have shown great promise. But in this blog post, we’ll focus on our reinforcement learning approach and the early results we’ve gotten so far.
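// The ~10^10 figure falls straight out of the combinatorics: with ten operation choices per layer and ten layers, the configurations multiply. A toy sketch (my own illustration, not Google's system; the operation names and scoring function are made up) of why search methods sample rather than enumerate:

```python
import random

# Hypothetical per-layer operation choices; 10 options per layer.
layer_choices = ["conv3", "conv5", "conv7", "pool", "identity",
                 "sep3", "sep5", "dilated3", "dilated5", "skip"]
n_layers = 10

# Exhaustive enumeration is hopeless at this scale.
print(len(layer_choices) ** n_layers)  # 10_000_000_000 candidates

# So NAS methods (RL, evolution) sample and score architectures instead.
# Here, random search with a placeholder score standing in for trained accuracy.
def score(arch):
    return sum(len(op) for op in arch)  # made-up proxy, NOT real accuracy

best = max((random.choices(layer_choices, k=n_layers) for _ in range(1000)),
           key=score)
print(best)  # the best of 1000 sampled 10-layer configurations
```

RL-based NAS replaces the random sampler with a controller network that learns which choices tend to produce high-scoring architectures.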