Profile

Daniel Estrada
Lives in Internet
30,188 followers | 7,488,530 views

Stream

Daniel Estrada

Shared publicly  - 
 
> Interestingly, by calculating spiking history SNR (as is allowed by their generalized SNR definition), the scientists demonstrated that taking into account the neuron's biophysical processes – such as absolute and relative refractory periods (the periods after the action potential when the neuron cannot spike again or can spike only with low probability, respectively), bursting propensity (a tendency toward periods of rapid action potentials), local network dynamics, and, in this case, spiking history – is often a more informative predictor of spiking propensity than the signal or stimulus activating the neuron.
 
Decoding the brain: Scientists redefine and measure single-neuron signal-to-noise ratio http://ow.ly/314dNN
(Phys.org)—The signal-to-noise ratio, or SNR, is a well-known metric typically expressed in decibels and defined as a measure of signal strength relative to background noise – and in statistical terms as the ratio of the squared amplitude or variance of a signal relative to the variance of the noise. ...
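// As a toy illustration of that statistical definition — the invented sine-plus-noise example below is mine, not the paper's generalized SNR:

```python
import math
import random
import statistics

random.seed(0)

# A made-up test signal: five cycles of a sine wave plus Gaussian noise.
n = 1000
signal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
noise = [random.gauss(0, 0.3) for _ in range(n)]

# Statistical SNR: variance of the signal divided by variance of the
# noise; in decibels this is 10 * log10 of that ratio.
snr = statistics.pvariance(signal) / statistics.pvariance(noise)
snr_db = 10 * math.log10(snr)
print(f"SNR = {snr:.2f} ({snr_db:.1f} dB)")
```

A sine wave has variance 0.5, so against noise of standard deviation 0.3 (variance 0.09) the ratio comes out around 5.6, or roughly 7.5 dB.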
1 comment on original post
7
1

Daniel Estrada

Shared publicly  - 
 
 
Somebody fed clips from "2001: A Space Odyssey" into Google's "Deep Dreamer"
3 comments on original post
5
1

Daniel Estrada

Shared publicly  - 
 
 
Artist Installs Flocks of Surveillance Cameras and Satellite Dishes in Outdoor Settings

http://www.thisiscolossal.com/2015/07/surveillance-installations-jakub-geltner/
1 comment on original post
12
1
2 comments
 
+David Haun I hope all the other surveillance cameras follow these out to sea. 

Daniel Estrada

Shared publicly  - 
 
> Still, even the chemical periodic table has some fuzzy matching going on – isotopes still group together under a given element, despite variation. “In the same way, elements can have different isotopes,…a niche category could have phenotypic variants but still have ecological properties or functions that are essentially the same.” In particular, the authors argue that convergent evolution has recreated particular suites of traits (niches) in different habitats and distantly related taxa. This has some connection to the idea that, perhaps, much like complex systems, complex arrays of traits may reoccur because they provide stability (e.g. are selected for).

http://evol-eco.blogspot.com/2015/07/can-there-be-periodic-table-of-niches.html

via +David Bapst 
Are there a limited number of categories or groupings into which all niches can be classified?  I’ll  admit that my first reaction is skepticism. For those ecologists who think of the similarities and generalities across sys...
14
2
2 comments
 

Daniel Estrada

Shared publicly  - 
10
10
2 comments
 
It's something Hunter Thompson would have loved.

Daniel Estrada

Shared publicly  - 
 
 
Perceptions are 90 percent expectations

The sensory centers of our brain are largely activated by anticipatory computations

Have you ever noticed how the regular drip of a faucet completely fades out after a while when you are resting comfortably? It is, however, the sudden absence of this acoustic stimulus that unmistakably pops up in your mind. This everyday experience bears witness to a foresighted mechanism that, according to new research, governs the workings of perception: "Predictive Coding".

How does the world get into our head? The question seems pretty unproductive at first glance, as we obviously must only open our eyes to obtain a picture of reality. According to common sense, perception functions a bit like a submarine commander who peeps through a periscope and scans the horizon for suspicious activity. Another popular metaphor is the camera that films the course of events and forwards the input to a monitor in our head, in front of which sits an imaginary "homunculus", satisfying its curiosity. In reality, however, all comparisons with optical devices do our sensory apparatus a disservice. When we consciously perceive, we are more like viewers in a cinema who catch sight of a well-styled work of art, assembled behind our backs by script, direction, editing, censorship and other invisible helpers.

Between the first draft on the retina (or in the other sense organs) and perception sits a hidden recognition service that constantly sets up and discards hypotheses, guesses at missing parts, whitewashes inconsistencies and applies complicated mathematical formulas that would have overwhelmed us at school. The work of this "ratiomorphic system" remains hidden from our consciousness, which only marvels at the finished results and takes them for granted.

The most controversial issue that has long plagued cognitive psychology is this: Is that which we perceive only a passive and mindless registration of stimulus information impinging on our sense organs? Or is the picture in front of us decisively and from the outset shaped by stored experiences, expectations and knowledge? In the first case, that of "direct perception", information processing is "bottom up" or "data driven", that is, from below, from the smallest sensory bits of information, up to the higher cognitive centers. Our intellect can then only interpret the final image, like an intelligence official quibbling over a satellite photo.

On the other hand, in the case of "indirect perception", expectations and previous experiences serve as the foundation, controlling the act of perception. The process here is "top down" or "conceptually driven", that is, knowledge constantly slips into perceptions and generates hypotheses about the expected stimulus material. With every modification of the hypothesis, incoming sensory data obtain a new structure.

According to the current paradigm, perception of the outside world is not a passive process in which the "receiver" simply soaks up sensory impressions. Rather, the organism at every moment produces a "concurrent world model", which includes hypotheses about the expected stimuli. These expected values are stored in long-term memory as a comprehensive simulation of external reality. During an ongoing act of perception, the retrieved hypotheses are checked against the incoming sensory data; perception is therefore an interactive process, which takes shape through a gradual testing and refinement of predictions.

This new perspective turns the whole picture around: Our expectations control what we perceive; memory and perception are inextricably linked. The world outside answers questions which our brain poses. The best evidence comes primarily from studies of the architecture and the activity patterns of the brain. The different areas of the brain are never connected to each other in only one direction; there are always feedback connections leading from the higher centers back to the lower centers. Even more importantly: in the sensory systems these feedback connections actually form the majority.

A few years ago, a seminal meta-analysis evaluated numerous studies looking at the primary visual cortex - the drop-in center for optical input - using functional magnetic resonance imaging. The inspection showed that this area is far busier processing feedback signals from higher-level brain regions than analyzing information from the eyes. In other words: the activity of the primary visual cortex is surprisingly independent of external stimuli. More than 90 percent of the impulses that arrive there do not originate from the visual pathway but from "higher" areas of the cerebral cortex.

Already two decades earlier, researchers had begun to wonder what these feedback loops in the brain are all about. These considerations led to the theory of the so-called "non-classical" effects of neurons in the visual cortex. Until then, it was believed that nerve cells in this area are all responsible for the representation of visual information. However, the functioning of some neurons can be explained more elegantly by assuming that they compare incoming signals with expectations. The researchers therefore called these neurons "error finders" and described their activity as "predictive coding".

The concept traces back to telephone technology and data processing. Instead of transmitting the entire signal, it is often sufficient to consider only the deviation from the previous signal. When copying an image file, it doesn't make sense to indicate the color of each individual pixel separately. Only when the color changes from one point to the next does this information need to be transmitted. By coding merely the deviations from the expected, this method (which led to mp3 and the demise of the music industry) reduces transmission overhead and increases processing speed. The theory of the hypothesis-testing brain assumes that a similar principle governs most brain functions.
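The transmission idea in this paragraph can be sketched as a toy delta encoder (an illustration of the principle only; real codecs are far more elaborate):

```python
from itertools import accumulate

def delta_encode(samples):
    """Keep the first sample, then transmit only the change from
    each sample to the next -- the predictive-coding idea."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """A running sum over the deltas reconstructs the original."""
    return list(accumulate(deltas))

# A slowly varying signal compresses well: most deltas are zero or
# tiny, so they need far fewer bits than the raw values.
pixels = [10, 10, 10, 11, 12, 12, 12, 11]
encoded = delta_encode(pixels)
print(encoded)  # [10, 0, 0, 1, 1, 0, 0, -1]
assert delta_decode(encoded) == pixels
```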

The fact that knowledge determines vision can easily be demonstrated with so-called "degraded images" (see the pictures below). At first glance, most people have great difficulty recognizing anything meaningful in them. Having read the annotations, the identification runs smoothly. Even more: with some, if not all, pictures it now becomes nearly impossible to return one's mind to the naïve state and forget the identities of the images.

The interplay between base and superstructure in vision has also become increasingly plausible since computer science took up the problem of perception. When you are developing artificial intelligence, you inevitably face the question of the architecture of perception. The advantage of a program lies in the fact that it forces researchers to formulate all assumptions explicitly. And here there is a clear trend: all programs that have been developed for machine perception are based on a form of "cognitive mediation". The computer is always fed with assumptions about the nature of the expected segment of the outside world, and this knowledge paves the way for the processing of the visual patterns captured by the camera.

Ask yourself how any artificial intelligence uninstructed by assumptions and expectations could navigate the physical world. The problem would be that such a program could not know which visual data are important in the current situation. The software would be constantly on the verge of a crash if even a tiny bit of information went missing. It could not fill in gaps from context or foreknowledge, as humans do in the case of degraded images. We have no idea how often our perception completes degraded images in everyday life. It is even thought that most visual stimuli are underdetermined, meaning not sufficient by themselves to uniquely identify the object. For a computer that has to work its way up from the bottom of the sensory facts without "higher" support, degraded images would remain eternally meaningless doodles.

Literature: Jakob Hohwy, The Predictive Mind

http://www.amazon.com/Predictive-Mind-Jakob-Hohwy/dp/0199686734/ref=sr_1_1?s=books&ie=UTF8&qid=1435935038&sr=1-1&keywords=The+Predictive+Mind
17
7
11 comments
 
Thanks for the supplement +Rolf Degen 

Daniel Estrada

Shared publicly  - 
 
 
Structure of Human Brain has an Almost Ideal Network of Connections

Full article at http://neurosciencenews.com/neural-networks-evolution-brain-2203/.

A new study by Northeastern physicist Dmitri Krioukov and his colleagues suggests an answer: to expedite the transfer of information from one brain region to another, enabling us to operate at peak capacity.

The research is in Nature Communications. (full open access)

Research: "Navigable networks as Nash equilibria of navigation games" by András Gulyás, József J. Bíró, Attila Kőrösi, Gábor Rétvári and Dmitri Krioukov in Nature Communications doi:10.1038/ncomms8651

Image: Krioukov and his colleagues discovered that the structure of the human brain has an almost ideal network of connections (magenta), enabling optimal transmission of information from one part of the brain to another. Image credit: Krioukov.

#neuroscience   #evolution   #neuralnetworks  
3 comments on original post
10
2

Daniel Estrada

Shared publicly  - 
 
> Hertzfeldt’s signature sense of humor is in full effect, but deeper themes are also in play. The result is a lovely, frightening, and painfully funny look at a future in which the technology we’ve eagerly developed to enrich our lives has dreadfully backfired.

https://vimeo.com/ondemand/worldoftomorrow
http://www.pixable.com/article/hertzfeldts-latest-world-tomorrow-playful-dark-look-future-80781
via +Jon Lawhead 

// omg yessss
7
3
3 comments
 
Things are becoming out of date in hours.

Daniel Estrada

Shared publicly  - 
 
 
Chaos made simple

This shows a lot of tiny particles moving around.   If you were one of these particles, it would be hard to predict where you'd go.  See why?  It's because each time you approach the crossing, it's hard to tell whether you'll go into the left loop or the right one. 

You can predict which way you'll go: it's not random.  But to predict it, you need to know your position quite accurately.  And each time you go around, it gets worse.  You'd need to know your position extremely accurately to predict which way you go — left or right — after a dozen round trips. 

This effect is called deterministic chaos.  Deterministic chaos happens when something is so sensitive to small changes in conditions that its motion is very hard to predict in practice, even though it's not actually random.

This particular example of deterministic chaos is one of the first and most famous.  It's the Lorenz attractor, invented by Edward Lorenz in 1963 as a very simplified model of the weather.

The equations for the Lorenz attractor are not very complicated if you know calculus.  They say how the x, y and z coordinates of a point change with time:

dx/dt = 10(y-x)
dy/dt = x(28-z) - y
dz/dt = xy - 8z/3

You are not supposed to be able to look at these equations and say "Ah yes!  I see why these give chaos!"   Don't worry: if you get nothing out of these equations, it doesn't mean you're "not a math person"  — just as not being able to easily paint the Mona Lisa after you see it doesn't mean you're "not an art person".  Lorenz had to solve them using a computer to discover chaos.  I personally have no intuition as to why these equations work... though I could get such intuition if I spent a week reading about it.
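If you want to poke at the equations yourself, a minimal sketch (my code, crude Euler steps, the classic parameter choices) is enough to trace out the butterfly:

```python
def lorenz_step(x, y, z, dt=0.001):
    # The classic Lorenz system: sigma=10, rho=28, beta=8/3.
    dx = 10 * (y - x)
    dy = x * (28 - z) - y
    dz = x * y - 8 * z / 3
    return x + dx * dt, y + dy * dt, z + dz * dt

# Follow one particle; its path traces the two-lobed attractor.
x, y, z = 1.0, 1.0, 1.0
trajectory = []
for _ in range(50_000):  # 50 time units at dt = 0.001
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))

# Despite the chaos the orbit stays bounded, looping around both
# lobes (x swings between positive and negative values).
print(min(p[0] for p in trajectory), max(p[0] for p in trajectory))
```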

The weird numbers here are adjustable, but these choices are the ones Lorenz originally used.  I don't know what choices David Szakaly used in his animation.  Can you find out?

If you imagine a tiny drop of water flowing around as shown in this picture, each time it goes around it will get stretched in one direction.  It will get squashed in another direction, and be neither squashed nor stretched in a third direction. 

The stretching is what causes the unpredictability: small changes in the initial position will get amplified.  I believe the squashing is what keeps the two loops in this picture quite flat.  Particles moving around these loops are strongly attracted to move along a flat 'conveyor belt'.  That's why it's called the Lorenz attractor.

With the particular equations I wrote down, the drop will get stretched in one direction by a factor of about 2.47... but squashed in another direction by a factor of about 2 million!    At least that's what this physicist at the University of Wisconsin says:

• J. C. Sprott, Lyapunov exponent and dimension of the Lorenz attractor, http://sprott.physics.wisc.edu/chaos/lorenzle.htm

He has software for calculating these numbers - or more precisely their logarithms, which are called Lyapunov exponents.  He gets 0.906, 0, and -14.572 for the Lyapunov exponents.
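Sprott's largest exponent can be roughly reproduced with the standard two-trajectory (Benettin) trick: follow two particles a tiny distance apart, log how fast the gap grows, and keep rescaling the gap back down.  A sketch (my code, crude Euler integration, so only approximate agreement with 0.906 is expected):

```python
import math

DT = 0.001

def step(p, dt=DT):
    # One Euler step of the Lorenz system (sigma=10, rho=28, beta=8/3).
    x, y, z = p
    return (x + 10 * (y - x) * dt,
            y + (x * (28 - z) - y) * dt,
            z + (x * y - 8 * z / 3) * dt)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d0 = 1e-8
p = (1.0, 1.0, 1.0)
for _ in range(5000):           # let the transient die out
    p = step(p)
q = (p[0] + d0, p[1], p[2])     # a second particle a hair away

log_growth = 0.0
steps_per_cycle, cycles = 100, 1000
for _ in range(cycles):
    for _ in range(steps_per_cycle):
        p, q = step(p), step(q)
    d = dist(p, q)
    log_growth += math.log(d / d0)
    # pull q back to distance d0 from p along the current direction
    q = tuple(pi + (qi - pi) * d0 / d for pi, qi in zip(p, q))

lyap = log_growth / (cycles * steps_per_cycle * DT)
print(f"largest Lyapunov exponent ~ {lyap:.2f}")  # should land near 0.9
```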

The region that attracts particles — roughly the glowing region in this picture — is a kind of fractal.  Its dimension is slightly more than 2, which means it's very flat but slightly 'fuzzed out'.  Actually there are different ways to define the dimension, and Sprott computes a few of them.  If you want to understand what's going on, try this:

• Edward Ott, Attractor dimensions, http://www.scholarpedia.org/article/Attractor_dimensions

For more nice animations of the Lorenz attractor, see:

http://visualizingmath.tumblr.com/post/121710431091/a-sample-solution-in-the-lorenz-attractor-when

David Szakaly has a blog called dvdp full of astounding images:

http://dvdp.tumblr.com/

and presumably this one of the Lorenz attractor is buried in there somewhere, though I'm feeling too lazy to do an image search and find it.
24 comments on original post
12
4

Daniel Estrada

Shared publicly  - 
 
 
Give Google’s DeepStereo algorithm two images of a scene and it will synthesize a third image from a different point of view.
8
2

Daniel Estrada

Shared publicly  - 
 
> “The odds that an open access journal is referenced on the English Wikipedia are 47% higher compared to closed access journals,” say Teplitskiy and co.

via +Lally Gartel​
The way scientific information diffuses through the knowledge economy is changing, and the first evidence from Wikipedia shows how.
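// A note on reading that statistic: "47% higher odds" means an odds ratio of 1.47, not a 47% higher probability. With invented counts (not Teplitskiy's data), the calculation looks like this:

```python
# Hypothetical 2x2 table: rows are journal access type, columns are
# whether the journal is referenced on English Wikipedia.
oa_cited, oa_uncited = 300, 700      # open access (invented counts)
cl_cited, cl_uncited = 230, 770      # closed access (invented counts)

odds_oa = oa_cited / oa_uncited      # odds an OA journal is referenced
odds_cl = cl_cited / cl_uncited      # odds a closed journal is referenced
odds_ratio = odds_oa / odds_cl
print(f"odds ratio = {odds_ratio:.2f}")  # 1.43 here; 1.47 in the study
```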
20
5
2 comments
 

Daniel Estrada

Shared publicly  - 
 
// Over-generalization is a common and natural step in learning and development. My mother tells a story of my first encounter with a black man at the age of 2; he was a refrigerator repairman who came for a house visit. A short time later at a supermarket I encountered another black man, causing me to point and claim "refrigerator!" This behavior wasn't motivated by racism. It was simply a poor inference drawn from a limited data set.

With sufficient experience comes maturity, where one's data set has become robust against such errors. Racism in adults is offensive in small part because it reveals a mind that has not matured beyond such rudimentary ways of carving the world. An immature adult mind is almost impossible to change. 

But that's not what's going on with Google. The result is a mistake to be sure, a sign of immaturity, underdevelopment, and a poor training set. But there's no reason whatsoever to believe that these dispositions have congealed in Google to the point of no return. Quite the contrary, these small stumbles offer just a peek at the vast possibilities artificial intelligence is starting to make available. It shows every indication of continued, rapid improvement; its handlers show every recognition that this is a mistake that requires correction. No one would yell at a baby for stumbling after its first few steps; Google deserves the same support from us here. 
 
Racial Stereotypes, Machine Learning, and Facial Recognition

Ouch!  It appears that, for the second time in far too short a time, a photo app designed to use AI to label people and things has systematically mistaken black people for gorillas.  I wish I could say this was satire, or even a cruel joke, but it isn't.

While +Yonatan Zunger is right to say this is absolutely NOT OK, I believe that, to leave it at that, as an 'oopsie', would be to overlook a frightening possibility: that such mistakes are not coincidences that just happen to sound like racial stereotypes.

In other words, these AIs may be able to teach us about the same sorts of perceptual errors and biases in humans.  Note the eerie correspondences: black people mistaken for gorillas, a blink feature that thinks all Asian people are blinking, etc.  These don't just sound like our stereotypes: they are the same stereotypes.

Zunger's idea for a permanent solution, as well, points to the best remedy for these sorts of mistakes: to focus, not on differences (bigger or smaller lips, hips, eyes, etc...), but rather to focus on shared human features.  In no way, in reality, are blacks mistakable for gorillas or Asian eyes for blinking eyes...  unless one focuses entirely on the differences between people.  This, in reality, is how bigotry operates: by getting you to notice and exaggerate the differences, and ignore the commonalities.

Let us therefore not dismiss these things as 'bugs', but rather as potentially valuable insights into both the origin of stereotypes, and hopefully for a solution to these issues as well.
Flickr sparked some controversy back in May after it was discovered that the service's new autotagging feature was prone to mislabeling black people as "ap
150 comments on original post
15
3
16 comments
 
http://www.lovethispic.com/uploaded_images/99255-Fruit-Face.jpg
The problem is they're making it more complicated than it actually is
Daniel's Collections
People
In his circles
1,608 people
Have him in circles
30,188 people
John Bubb's profile photo
John Lewis's profile photo
Robert Temple's profile photo
Michael Terrell's profile photo
Janelle Fortelny's profile photo
emo burgos's profile photo
Jim Lion's profile photo
Colin Cammarano's profile photo
Jon Houser's profile photo
Places
Map of the places this user has lived
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA
Story
Tagline
Robot. Made of smaller robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Digital Interface and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Heidegger, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google. 

______


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.

Work
Occupation
Internet
Basic Information
Gender
Male
Other names
eripsa
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com

Center Camp hasn't shared anything on this page with you.

Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.