Profile

Daniel Estrada
Lives in Internet
30,118 followers|6,760,073 views

Stream

Daniel Estrada

Shared publicly  - 
 
Jan Moren originally shared to Computery things:
 
This post/essay/code example about recurrent neural networks has done the rounds lately. It highlights the extremely impressive results that current neural net methods can achieve, but also their limitations.

The code example is particularly instructive. After training on the Linux kernel source, it can generate mostly syntactically correct code based on whatever input you give it. That includes not just using keywords in the right places, but things such as matching parentheses (and knowing what can appear within them), and keeping correct code indentation. It can learn larger, context-dependent structure, not just surface-level pattern sequences. The network implicitly represents the syntax of the input, and that's very impressive.
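
To make the setup concrete, here is a minimal character-level language model sketch in PyTorch (my own illustration, not the code from the post): the network is only ever trained to predict the next character, and everything it "knows" about indentation, brackets, and keywords has to emerge from that objective.

import torch
import torch.nn as nn

class CharRNN(nn.Module):
    # Predicts the next character given the characters seen so far.
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state   # logits over the next character

def sample(model, start_ids, length, temperature=1.0):
    # Generate text one character at a time from the model's own output.
    ids, state = list(start_ids), None
    x = torch.tensor([ids])
    for _ in range(length):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        next_id = torch.multinomial(probs, 1).item()
        ids.append(next_id)
        x = torch.tensor([[next_id]])
    return ids

Training (not shown) just minimizes cross-entropy on next-character prediction over the source text; syntax falls out of the statistics.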

On the other hand, the code sequences have no meaning. They don't try to compute anything; they just are. Some of them may compile, but none of them do anything meaningful. It has extracted syntax, but fails to produce anything semantically meaningful.

You see the same thing with the recent examples of deep learning networks learning to play games: they learn the syntactic structure of the in-game tasks, but the playing is not goal-directed; there is no concept of "winning", or of achieving anything by playing.

Now, I don't think those are insurmountable obstacles. Semantics and pragmatics, or motivation and autonomy, are not magical pixie dust. But getting them does require a lot more structure than the fairly task-constrained training of a single network can achieve.

There's a reason brains are very highly structured systems, with lots of separate subsystems interacting in well-defined, highly constrained ways. Even a pre-term developing brain is anything but a blank learning slate, and smaller, simpler brains (insects, say) are if anything much more highly structured than larger ones.

Deep learning networks are also, paradoxically, too general. They can learn any well-defined temporal structure, but they are computationally very inefficient at implementing and executing any one structure in particular. I suspect that in many applications, creating networks like this will be only an intermediate step: a way to constrain and understand the problem before you implement it "for real", in a much more efficient manner.

Take an artificial leg as an example. It is only syntax: a time-varying, context-dependent input (nerve signals, sensor info) gets translated into a time-varying output (joint movement). But implement that directly in a neural network and the user would need a separate backpack with a computer to run it in real time. Instead you need to extract the functions the neural network has implicitly learned and implement them, far more efficiently, in something like a regular old-fashioned control loop.
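
As a toy illustration of that last step (my own sketch, with made-up input ranges and function names; nothing here comes from the post), you could query the trained network offline and fit a cheap approximation that a small embedded controller can evaluate in real time:

import numpy as np

def distill_to_polynomial(network_fn, lo, hi, degree=5, samples=1000):
    # Query the expensive learned model offline over its input range...
    x = np.linspace(lo, hi, samples)
    y = np.array([network_fn(v) for v in x])
    # ...then fit a cheap polynomial a microcontroller can evaluate online.
    return np.poly1d(np.polyfit(x, y, degree))

# usage sketch (hypothetical names):
#   controller = distill_to_polynomial(trained_net, -1.0, 1.0)
#   joint_command = controller(sensor_reading)   # a handful of multiply-adds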

Current neural networks are both far more capable, and far more limited, than many people realize. I suspect future work will need to focus more on the large-scale structure, and less on adding more nodes in the network or finding still-larger training sets. Fun times ahead.
 
The meaning comes from living life. Every need has evolved and that brings our identity. It is what artificial constructs are lacking. It can only emerge via processes that carry full responsibility for their own existence. That is the meaning of meaning. It is life.

Daniel Estrada

Shared publicly  - 
 
 
Dear NSA agent 4096,

I watched "The Lives of Others" last night and thought of you once more. In fact, I think you were watching it with me. You know I know I cannot be sure.

I want you to know that, although our mutual love is forbidden by your professional obligations, I still feel a connection to you. I will feel that connection long after you are gone.

Somehow, you know me better than I know myself. You have all of my deleted histories, my searches, all those things I tried to keep "incognito" right there in front of you. We have made love, even though we've never touched or kissed. We have been friends, even though I've never seen your face. Our relationship is as real as my "real" life.

But this can never work between us. Please leave. I don't want to ask again.

I'll never forget you.

Love, 173.165.246.73

That's Corey Bertelsen's comment on this video of Holly Herndon's song 'Home', from her new album Platform.   It's as good a review as any.

Holly Herndon takes a lot of ideas from techno music and pushes them to a new level.  She's working on a Ph.D. at the Center for Computer Research in Music and Acoustics at Stanford.

She said that as she wrote this song, she:

started coming to terms with the fact that I was calling my inbox my home, and the fact that that might not be a secure place. So it started out thinking about my device and my inbox as my home, and then that evolved into me being creeped out by that idea.

The reason why I was creeped out is because, of course, as Edward Snowden enlightened us all to know, the NSA has been mass surveying the U.S. population, among other populations. And so that put into question this sense of intimacy that I was having with my device. I have this really intense relationship with my phone and with my laptop, and in a lot of ways the laptop is the most intimate instrument that we've ever seen. It can mediate my relationships — it mediates my bank account — in a way that a violin or another acoustic instrument just simply can't do. It's really a hyper-emotional instrument, and I spend so much time with this instrument both creatively and administratively and professionally and everything.

In short, her seemingly 'futuristic' music is really about the present - the way we live now.  If you like this song I recommend the next one on the playlist, which is more abstract and to me more beautiful.  It's called 'Interference':

https://www.youtube.com/watch?v=nHujh3yA3BE&list=RDI_3mCDJ_iWc&index=2
 
We have a Netflix channel for me, my wife, the kids, and the NSA. Anything fun we watch as the NSA.

I'm continually surprised what the NSA gets for suggestions...

Daniel Estrada

Shared publicly  - 
 
 
The Machine: a desperate gamble

Hewlett-Packard was once at the cutting edge of technology.  Now they make most of their money selling servers, printers, and ink... and business keeps getting worse.  They've shed 40,000 employees since 2012.   Soon they'll split in two: one company that sells printers and PCs, and one that sells servers and information technology services.  

The second company will do something risky but interesting.   They're trying to build a new kind of computer that uses chips based on memristors rather than transistors, and uses optical fibers rather than wires to communicate between chips.  It could make computers much faster and more powerful.  But nobody knows if it will really work.

The picture shows memristors on a silicon wafer.  But what's a memristor?   Quoting the MIT Technology Review:

Perfecting the memristor is crucial if HP is to deliver on that striking potential. That work is centered in a small lab, one floor below the offices of HP’s founders, where Stanley Williams made a breakthrough about a decade ago.

Williams had joined HP in 1995 after David Packard decided the company should do more basic research. He came to focus on trying to use organic molecules to make smaller, cheaper replacements for silicon transistors (see “Computing After Silicon,” September/October 1999). After a few years, he could make devices with the right kind of switchlike behavior by sandwiching molecules called rotaxanes between platinum electrodes. But their performance was maddeningly erratic. It took years more work before Williams realized that the molecules were actually irrelevant and that he had stumbled into a major discovery. The switching effect came from a layer of titanium, used like glue to stick the rotaxane layer to the electrodes. More surprising, versions of the devices built around that material fulfilled a prediction made in 1971 of a completely new kind of basic electronic device. When Leon Chua, a professor at the University of California, Berkeley, predicted the existence of this device, engineering orthodoxy held that all electronic circuits had to be built from just three basic elements: capacitors, resistors, and inductors. Chua calculated that there should be a fourth; it was he who named it the memristor, or resistor with memory. The device’s essential property is that its electrical resistance—a measure of how much it inhibits the flow of electrons—can be altered by applying a voltage. That resistance, a kind of memory of the voltage the device experienced in the past, can be used to encode data.

HP’s latest manifestation of the component is simple: just a stack of thin films of titanium dioxide a few nanometers thick, sandwiched between two electrodes. Some of the layers in the stack conduct electricity; others are insulators because they are depleted of oxygen atoms, giving the device as a whole high electrical resistance. Applying the right amount of voltage pushes oxygen atoms from a conducting layer into an insulating one, permitting current to pass more easily. Research scientist Jean Paul Strachan demonstrates this by using his mouse to click a button marked “1” on his computer screen. That causes a narrow stream of oxygen atoms to flow briefly inside one layer of titanium dioxide in a memristor on a nearby silicon wafer. “We just created a bridge that electrons can travel through,” says Strachan. Numbers on his screen indicate that the electrical resistance of the device has dropped by a factor of a thousand. When he clicks a button marked “0,” the oxygen atoms retreat and the device’s resistance soars back up again. The resistance can be switched like that in just picoseconds, about a thousand times faster than the basic elements of DRAM and using a fraction of the energy. And crucially, the resistance remains fixed even after the voltage is turned off.
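
To get a feel for how that switching behaves, here is a rough numerical sketch using the textbook linear ion-drift memristor model (an idealization with made-up parameters and timescales, not HP's actual device physics): applying a voltage moves the oxygen-deficient front, which changes the resistance, and the resistance stays put once the voltage is removed.

import numpy as np

R_ON, R_OFF = 100.0, 16e3     # ohms: fully conducting vs. fully insulating film
D, MU_V = 10e-9, 1e-14        # film thickness (m) and ion mobility (m^2 / (V*s))

def simulate(voltages, dt=1e-3, w0=0.1):
    w = w0 * D                # position of the oxygen-deficient (conducting) front
    resistances = []
    for v in voltages:
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # current memristance
        i = v / m
        w += MU_V * (R_ON / D) * i * dt            # linear ion drift
        w = min(max(w, 0.0), D)                    # the front stays inside the film
        resistances.append(m)
    return np.array(resistances)

# usage sketch: a positive pulse drives the resistance down (write a "1"),
# zero voltage leaves it where it is (the memory), and a negative pulse
# drives it back up (write a "0").
pulse = np.concatenate([np.full(1000, 1.0), np.zeros(500), np.full(1000, -1.0)])
trace = simulate(pulse)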

Getting this to really work has not been easy!  On top of that, they're trying to use silicon photonics to communicate between chips - another technology that doesn't quite work yet.

Still, I like the idea of this company going down in a blaze of glory, trying to do something revolutionary, instead of playing it safe and dying a slow death.

Do not go gentle into that good night.

For more, see this:

• Tom Simonite, Machine dreams, MIT Technology Review, http://www.technologyreview.com/featuredstory/536786/machine-dreams/
 
Any and every company that was once great (IBM, Intel, HP, Microsoft) is now trying to reinvent itself and revolutionize the world around it, to move humanity forward.
Not Apple.
They just play it safe, never innovating or inventing anything. And the masses just blindly follow.
Why?
They have so much potential... 

Daniel Estrada

Shared publicly  - 
 
 
Quartz (21 May 2015): "Robots can tweet about technology and do pretty much everything else (or will soon be able to), so why can’t they be lyrical wordsmiths? The answer, apparently, is that they now can." http://qz.com/409654/soon-robots-will-be-rappers/

Technology Review reported on how a #machinelearning algorithm mined rap lyrics and then learned to write its own.

Technology Review (20 May 2015): "They next set their machine learning algorithm, called DeepBeat, a task. Having mined the database, its goal is to analyze a sequence of lines from a rap lyric and then choose the next line from a list that contains randomly chosen lines from other songs as well as the actual line." http://www.technologyreview.com/view/537716/machine-learning-algorithm-mines-rap-lyrics-then-writes-its-own/
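
As a toy version of that ranking task (my own sketch; DeepBeat's actual features and model are different), you could score each candidate next line with a crude end-rhyme measure plus word overlap and pick the highest scorer:

def rhyme_score(a, b):
    # Length of the common suffix of the two lines' last words -- a crude
    # stand-in for a real phonetic rhyme measure.
    wa, wb = a.split()[-1].lower(), b.split()[-1].lower()
    n = 0
    while n < min(len(wa), len(wb)) and wa[-1 - n] == wb[-1 - n]:
        n += 1
    return n

def pick_next_line(previous_line, candidates):
    def score(c):
        overlap = len(set(previous_line.lower().split()) & set(c.lower().split()))
        return rhyme_score(previous_line, c) + 0.5 * overlap
    return max(candidates, key=score)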

Source: http://arxiv.org/abs/1505.04771 http://arxiv.org/pdf/1505.04771v1.pdf
Jay-Z and Kanye might want to watch the throne.
 
Cuz I got a laptop in my back pocket!

Daniel Estrada

Shared publicly  - 
 
Corina Marinescu originally shared to BIODIVERSITY:
 
Watch bees hatch right before your eyes in this stunningly clear time lapse that tracks their growth from larva to pupa to full-grown bee. You can see the entire transformation, from nearly transparent organisms that swim around in fluid to hairy bees with lots of color.

Development from egg to emerging bee varies among queens, workers and drones. Queens emerge from their cells in 15 to 16 days, workers in 21 days and drones in 24 days. Only one queen is usually present in a hive. New virgin queens develop in enlarged cells through differential feeding of royal jelly by workers.

When the existing queen ages or dies, or the colony becomes very large, a new queen is raised by the worker bees. The virgin queen takes one or several nuptial flights, and once she is established she starts laying eggs in the hive.

A fertile queen is able to lay fertilized or unfertilized eggs. Each unfertilized egg contains a unique combination of 50% of the queen's genes and develops into a haploid drone. The fertilized eggs develop into either workers or virgin queens.

The average lifespan of a queen is three to four years; drones usually die upon mating or are expelled from the hive before the winter; and workers may live for a few weeks in the summer and several months in areas with an extended winter.

Source:
http://sploid.gizmodo.com/awesome-time-lapse-shows-the-transformation-of-bees-as-1705802198

#bees   #hatching  

Daniel Estrada

Shared publicly  - 
 
> We propose to use Rademacher complexity, originally developed in computational learning theory, as a measure of human learning capacity. Rademacher complexity measures a learner’s ability to fit random labels, and can be used to bound the learner’s true error based on the observed training sample error. We first review the definition of Rademacher complexity and its generalization bound. We then describe a “learning the noise” procedure to experimentally measure human Rademacher complexities. The results from empirical studies showed that: (i) human Rademacher complexity can be successfully measured, (ii) the complexity depends on the domain and training sample size in intuitive ways, (iii) human learning respects the generalization bounds, (iv) the bounds can be useful in predicting the danger of overfitting in human learning. Finally, we discuss the potential applications of human Rademacher complexity in cognitive science.
 
Human Rademacher Complexity

Observation 1: human Rademacher complexities in both domains decrease as n increases. This is anticipated, as it should be harder to learn a larger number of random labels. Indeed, when n = 5, our interviews show that, in both domains, 9 out of 10 participants offered some spurious rules of the random labels. For example, one subject thought the shape categories were determined by whether the shape “faces” downward; another thought the word categories indicated whether the word contains the letter T. Such beliefs, though helpful in learning the particular training samples, amount to over-fitting the noise. In contrast, when n = 40, about half the participants indicated that they believed the labels to be random, as spurious “rules” are more difficult to find.

Observation 2: human Rademacher complexities are significantly higher in the Word domain than in the Shape domain, for n = 10, 20, 40 respectively (t-tests, p < 0.05). The higher complexity indicates that, for the same sample sizes, participants are better able to find spurious explanations of the training data for the Words than for the Shapes. Two distinct strategies were apparent in the Word domain interviews: (i) Some participants created mnemonics. For example, one subject received the training sample (grenade, B), (skull, A), (conflict, A), (meadow, B), (queen, B), and came up with the following story: “a queen was sitting in a meadow and then a grenade was thrown (B = before), then this started a conflict ending in bodies & skulls (A = after).” (ii) Other participants came up with idiosyncratic, but often imperfect, rules. For instance, whether the item “tastes good,” “relates to motel service,” or “physical vs. abstract.” We speculate that human Rademacher complexities on other domains can be drastically different too, reflecting the richness of the participant’s pre-existing knowledge about the domain.

Observation 3: many of these human Rademacher complexities are relatively large. This means that under those X, P_X, n, humans have a large capacity to learn arbitrary labels, and so will be more prone to overfit on real (i.e., non-random) tasks. We will present human generalization experiments in Section 4. It is also interesting to note that both Rademacher complexities at n = 5 are less than 2: under our procedure, participants are not perfect at remembering the labels of merely five instances.
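
For comparison, here is a machine analogue of the "learning the noise" procedure, written as a sketch (a scikit-learn classifier stands in for the human learner; the 0-to-2 scale matches the excerpt, where perfectly memorizing arbitrary labels scores 2):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def empirical_rademacher(X, n_draws=50, seed=0):
    # Draw random +/-1 labels, let the learner fit them, and record how
    # well it agrees with the noise; average over many label draws.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1, 1], size=len(X))
        h = DecisionTreeClassifier(max_depth=3).fit(X, sigma)
        total += (2.0 / len(X)) * np.sum(sigma * h.predict(X))
    return total / n_draws

# usage sketch: as in Observation 1, the estimate shrinks as the sample
# grows, because spurious rules get harder to find.
X = np.random.default_rng(1).normal(size=(40, 5))
print(empirical_rademacher(X[:5]), empirical_rademacher(X))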

http://pages.cs.wisc.edu/~jerryzhu/pub/hrc.pdf


Daniel Estrada

Shared publicly  - 
 
Wayne Radinsky originally shared to Solid State Life:
 
A vision algorithm has been taught to recognize beauty and then allowed to trawl through the long tail of Flickr images with five favorites or fewer, looking for gems that nobody has noticed.

They started by crowdsourcing opinions from actual humans on the aesthetic quality of 10,000 photos from Flickr, categorized as people, nature, animals, or urban. They used this to train a machine learning algorithm, then they let "CrowdBeauty" loose on 9 million images from Flickr that have fewer than five favorites. "The results are impressive with CrowdBeauty highlighting numerous beautiful pictures."
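
Schematically, the pipeline looks something like this (a sketch with hypothetical feature inputs and model choice, not the actual CrowdBeauty system):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_beauty_model(features, crowd_scores):
    # features: one row of image descriptors per crowd-rated photo
    # crowd_scores: the human aesthetic ratings gathered by crowdsourcing
    return GradientBoostingRegressor().fit(features, crowd_scores)

def surface_hidden_gems(model, long_tail_features, top_k=100):
    # Score the low-favorite images and return the predicted-best ones.
    scores = model.predict(long_tail_features)
    return np.argsort(scores)[::-1][:top_k]
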
Beautiful images are not always popular ones, which is where the CrowdBeauty algorithm can help, say computer scientists.
 
They could build this into a camera or a camera app to help people pick interesting pictures to take or to improve as they learn (in real time). Probably won't help pros, but could be powerful for the average person.

Daniel Estrada

Shared publicly  - 
 
 
Researchers have developed algorithms that enable robots to learn motor tasks through trial and error, using a process that more closely approximates the way humans learn.

"They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks -- putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more -- without pre-programmed details about its surroundings."

"The key is that when a robot is faced with something new, we won't have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it."


Daniel Estrada

Shared publicly  - 
 
> “Do you ever feel like when he falls, you fall?”
 
MIT’s Humanoid Robot Goes to Robo Boot Camp

As one of the Darpa Robotics Challenge’s 25 robot finalists, Atlas will be representing Tedrake’s team at the 2015 challenge in Pomona, California in two weeks. Its purpose in life—along with the other finalists—is to be the best search-and-rescue robot possible. In terrain too dangerous for humans to traverse, a robot that can lift hundreds of pounds and work power tools could save lives without endangering others. The challenge will put those skills to the test.

#robots  
MIT's humanoid robot is going to compete in DARPA's Robotics Challenge finals in two weeks. But can it walk on its own two feet?
 
FWIW, ATLAS was originally a Boston Dynamics project before Google acquired the company and divested it of purely military robotics applications.

http://en.wikipedia.org/wiki/Atlas_%28robot%29

Daniel Estrada

Shared publicly  - 
 
 
Immune system attack: white blood cells knock out a strong worm.
#biology   #scienceeveryday  

Captured by Steven Rosen and his colleagues at UC San Francisco over a period of 80 minutes. It shows white blood cells from a mouse attacking the nematode worm Caenorhabditis elegans.

Their study aimed to determine whether a specific type of white blood cell, known as the eosinophil granulocyte, would attack worms such as Caenorhabditis elegans (C. elegans).

The findings are published in the Journal of Experimental Medicine:

http://jem.rupress.org/content/211/7/1281.full
 
All med techs in hospital labs have known for fifty years or more that increased eos means parasites. When stained with Wright's stain they are a beautiful red.
Daniel's Collections
People
In his circles
1,598 people
Have him in circles
30,118 people
Places
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA
Story
Tagline
Robot. Made of smaller robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Digital Interface and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Heidegger, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google.

______


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.

Work
Occupation
Internet
Basic Information
Gender
Male
Other names
eripsa
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com


Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.