Profile

Daniel Estrada
Lives in Internet
30,624 followers | 10,851,327 views

Stream

Daniel Estrada

Shared publicly  - 
 
 
"Self Racing Cars is a new race series started by technology entrepreneur Joshua Schachter as a way for companies and hobbyists to test their autonomous vehicles and learn from each other. There are no rules, and there is no qualifying -- anyone with a autonomous car or autonomous vehicle technology can apply to participate at the events currently being held at Thunderhill Raceway in Willows, Calif. That means that even if it's just a Go-Kart, as long as it doesn't have a driver you can race it on the track."
If Roborace will be the Formula 1 of autonomous electric car racing, then "Self Racing Cars" is the Sports Car Club of America (SCCA). At least, that's the plan for the new driverless car series holding its first "track days" this weekend.
1 comment on original post

Daniel Estrada

Shared publicly  - 
 
 
Motion AI lets anyone easily build a bot

Motion AI, a Chicago company that lets anyone easily build a bot without touching a line of code, has announced it is open for business after several months of private testing. What, another bot-builder? Dozens of other startups have launched to build bots for developers, especially after Facebook kicked off a bot craze last month that now has tens of thousands of developers building bots on Facebook Messenger alone. But Motion AI stands out because it hand-holds you through building every aspect of the bot’s flow, including deployment across most of the bot platforms (Facebook, Slack, SMS, email, Web and so on). Moreover, it has created what it calls bot “modules,” which package up the logic required for building particular bot features. This saves novices — and even experienced developers — multiple steps. These modules will soon be featured in a store (to be launched in about a month) where customers can take whatever module they need as they put together their bots, according to founder and chief executive David Nelson.
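// To make the "modules" idea concrete, here is a minimal sketch in Python of how packaged conversational logic might compose into a bot. Everything here (names, behavior) is invented for illustration; it is not Motion AI's actual API.

```python
# Hypothetical sketch: each "module" packages one piece of conversational
# logic, and a bot is just a chain of modules tried in order.
from typing import Callable, List, Optional

Module = Callable[[str], Optional[str]]

def greeting_module(text: str) -> Optional[str]:
    """Handles greetings; returns None if the message isn't one."""
    if text.lower().strip(" !?") in {"hi", "hello", "hey"}:
        return "Hello! How can I help you today?"
    return None

def order_status_module(text: str) -> Optional[str]:
    """Packages the logic for one common customer-service feature."""
    if "order" in text.lower():
        return "Can you give me your order number?"
    return None

def run_bot(modules: List[Module], text: str) -> str:
    """Try each module in turn; fall back to a default reply."""
    for module in modules:
        reply = module(text)
        if reply is not None:
            return reply
    return "Sorry, I didn't understand that."

print(run_bot([greeting_module, order_status_module], "Hi!"))
print(run_bot([greeting_module, order_status_module], "Where is my order?"))
```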
1 comment on original post
2 comments
 
So just as we have an IVR system responding to us for voice calls, we will now have a chat-bot responding to a chat session! Companies think machines are better than humans at responding to other humans. Mmm... what are the other ways we can disrupt human-to-human interaction? 

Daniel Estrada

Shared publicly  - 
 
> Practopoiesis is a theory on how life organizes, including the organization of a mind. It proposes the principles by which adaptive systems function. One and the same theory covers life and the mind. It is a general theory of what it takes to be biologically intelligent. Being general, the theory is applicable to the brain as much as it is applicable to artificial intelligence (AI) technologies (see AI-Kindergarten). What makes the theory so general is that it is grounded in the principles of cybernetics, rather than describing the physiological implementations of those mechanisms (inhibition/excitation, plasticity, etc.).

The most important presumption about the brain that practopoietic theory challenges is the generally accepted idea that the dynamics of a neural network (with its excitatory and inhibitory mechanisms) are sufficient to implement a mind. Practopoiesis tells us that this is not enough. Something is missing. Practopoiesis also offers answers about what is missing, both theoretically and in the form of a concrete implementation. The theoretical answer lies in T3-systems and the processes of anapoiesis. The concrete implementation in the brain is based on neural adaptation mechanisms. These mechanisms enable us to adaptively deal with the context within which we have to operate and thus to be intelligent.

The main contribution to the mind-body problem: Practopoiesis suggests that we should think about mind differently from how we are used to. According to T3-theory, the mind cannot be implemented by a classical computation, which consists of symbol manipulation within a closed system (a “boxed” computation machine). Rather, a mind, i.e., a thought, is a process of adaptation to an organism’s environment. This requires the “computation” system to be open and to interact with the environment while a thought or a percept is evolving. The reason why we are conscious and machines are not is that our minds are interacting with the surrounding world while undergoing the process of thought, and machines are not: machines recode inputs into symbols and then juggle the symbols internally, devoid of any further influences from the outside.
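// A crude way to see the contrast the quote is drawing, sketched in Python. This is illustrative only, not the T3/anapoiesis formalism: a "boxed" computation recodes its input once and then runs closed, while an adaptive process keeps sampling the environment at every step of its evolution.

```python
import random

def boxed_computation(snapshot: float) -> float:
    # Input is recoded into internal symbols once; from here on,
    # nothing outside the box can influence the result.
    x = snapshot
    for _ in range(100):
        x = 0.5 * x + 1.0        # pure internal symbol-juggling
    return x

def adaptive_process(read_environment, steps: int = 100) -> float:
    # The evolving state is re-coupled to the world at every step,
    # so the "thought" and the environment interact as it unfolds.
    x = 0.0
    for _ in range(steps):
        x = 0.5 * x + read_environment()
    return x

print(boxed_computation(random.random()))
print(adaptive_process(lambda: random.random()))
```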

More: http://www.danko-nikolic.com/practopoiesis/
via +Jon Lawhead

// This view seems promising. I think it leans on the anti-representational arguments too strongly, perhaps because I've been defending representationalism recently. I think the right view will be one that describes a cybernetic control system with robust representational resources at its disposal. I think it's correct to say that the brain isn't fundamentally a representational system. The nervous system is fundamentally a system for coordinating action. But in so coordinating, it really can juggle "internal" representations around and inspect them. In Kinds of Minds, Dennett calls these the Popperian creatures, after Karl Popper, who said that such thinking "permits our hypotheses to die in our stead."

My instructor from grad school, Dr. Brewer, once gave the following argument for internal representations, which I still find completely convincing. Here's the challenge: close your eyes and tell me how many windows are in your parents' living room. It's unlikely that you've thought about this question explicitly before; if you can generate an answer, it's likely that you're conjuring an internal model of the room and counting the windows on that model. That's exactly a case of "juggling symbols internally". Our capacity to reason about such mental models was the subject of my advisor's 2006 book, Models and Cognition:

https://drive.google.com/file/d/0B4me4PbBMBmOVXM3NjJ3aGljX1U/view?usp=sharing
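// For what it's worth, Brewer's challenge is easy to mock up in code: the answer comes from inspecting a stored model rather than the room itself. A toy sketch (the room contents are of course invented):

```python
# Answer "how many windows are in the living room?" by consulting an
# internal model, eyes closed, rather than perceiving the room.
living_room = {
    "couch": 1,
    "window": 3,     # a mental stand-in for the real windows
    "door": 2,
    "lamp": 2,
}

def count_in_model(model: dict, kind: str) -> int:
    """'Inspect' the internal representation and count matching parts."""
    return model.get(kind, 0)

print(count_in_model(living_room, "window"))  # -> 3
```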

But my sense is that practopoiesis can deal with such cases fairly comfortably. Still, I want to deal with the last claim in the quote above, about the differences between us and machines. He's right, in some sense, that neurons are sensitive to much more of the nearby activity than electrical circuits are. But it's worth saying that quite a lot of the technical challenge in building microchips is in keeping the signals clear and distinct. The reason microchips can process at such high speeds is that we can keep these signals reliably clear at very small scales. In other words, this isn't an inevitable feature of our machines; it has been an explicit design goal.

From this perspective, it's worth mentioning that brains also do quite a lot of work insulating the signals from surrounding neurons. There are lots of neurons passing through any given space (see: http://goo.gl/jr2dHA), but only a few neurons are actually talking to each other. The rest are insulated from the signal by other types of neural cells. In other words, neural signals aren't entirely open to influence from the outside. But certainly they are tolerant of more influence than a microchip.

The other place to object, though, is the extent to which computation happens independent of outside influence. When I'm playing a video game, for instance, there's certainly a lot of processing happening "under the hood" of the machine, but there's also a lot of sensitivity to my behavior and interaction with that system, and so there's a lot of interdependence and interaction between the player and the computer. At a very low level the computation isn't interactive, but at the level of the game itself, the machine is interactive nearly to the point of immersion. And even that's not strictly true! Modern video cards will only render from the perspective of the player, which means that in a very direct way, the computations performed are tightly linked to what the player is doing, with very rapid response times. To describe such machines as "devoid of influence from the outside" seems strange from this perspective.
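// A skeletal version of that interaction loop, with all details invented for illustration: each frame folds player input into the camera state and renders only what the player can see, so the computation tracks the player frame by frame.

```python
world = [(x, 0.0) for x in range(100)]            # objects along a line

def visible(obj, camera_x, half_width=5.0):
    # Crude stand-in for frustum culling: draw only what's near the camera.
    return abs(obj[0] - camera_x) <= half_width

def frame(camera_x, player_input):
    camera_x += player_input                      # the player steers the view
    drawn = sum(1 for o in world if visible(o, camera_x))
    return camera_x, drawn

camera = 0.0
for key in [1, 1, 0, -1, 1]:                      # a short burst of input
    camera, drawn = frame(camera, key)
    print(f"camera at {camera:+.1f}, drew {drawn} objects")
```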

What a lot of the criticisms of AI I've been ranting against fail to appreciate is the basic issue of multiple realizability: that the same high-level process might in fact be constituted by many different kinds of low-level processes, each of which produces a functional analog at the higher level by different means. There's nothing about silicon that fundamentally prevents it from being interactive in the appropriate ways. The myth that biology is fundamentally "alive" while electronics are "inert" must be resisted wherever it appears.

But I'm ranting, and none of these issues seem like big problems for practopoiesis; in fact, I'd expect the author to agree with most of what I've said. Some of these articles seem rather new, within the last year. I'll be interested to see what the scholarly response is. 
8 comments
 
+Danko Nikolic Thank you for stimulating our thoughts!

Daniel Estrada

Shared publicly  - 
 
> Our humanoid robot, the iCub (I as in “I robot”, Cub as in the man-cub from Kipling’s Jungle Book), has been specifically designed to support research in embodied artificial intelligence (AI). At 104 cm tall, the iCub is the size of a five-year-old child. It can crawl on all fours, walk and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source following the GPL/LGPL licenses and can now count on a worldwide community of enthusiastic developers. More than 30 robots have been built so far, which are available in laboratories in Europe, the US, Korea and Japan (see http://www.iCub.org). It is one of the few platforms in the world with a sensitive full-body skin to deal with safe physical interaction with the environment.

https://www.youtube.com/watch?v=pNIvdmJUlVE
via +Boing Boing http://boingboing.net/2016/05/23/gaze-controller-for-humanoid-r.html

Daniel Estrada

Shared publicly  - 
 
> Q: Do you dare predict a timeline for that?
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years.
...

Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed?
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.

Google was then the first to use their engineering to get it into their products, and in 2012 it came out in Android and made the speech recognition in Android work much better than before: it reduced the word-error rate to about 26 per cent. Then, in 2012, students in my lab took that technology that had been developed by other people and developed it even further; while the existing technology was getting 26 per cent errors, we got 16 per cent errors. In the years after we did that, people said, ‘Wow, this really works.’ They were very skeptical for many, many years; they published papers dismissing it. Over the next years, they all switched to it.


// For anyone confused by +Luciano Floridi's article from last week (https://goo.gl/Q3OU7Y):

There's an important distinction between the proponents of AI and the Singularitarians, and +Geoffrey Hinton does an excellent job here of representing this space. For Hinton, there's no question of a machine's thinking (or believing, deciding, imagining, etc.; see his classic 2007 Google talk, esp. ~24:00 https://goo.gl/qdZviJ). We really can build computers that do all those things to demonstrable effect. And yet Hinton's optimism about AI doesn't preclude realism about the gap between computers and humans: he says brains operate at roughly a million times the capacity of our best artificial neural nets today, with a hundred thousand times less power consumption.

Those are big numbers, for sure, but the underlying point is crystal clear: that these are differences of scale which pose an engineering challenge, not ones of essence that pose an affront to logical necessity. And given that we've experienced a trillion-fold increase in computing power over the last 60 years (http://goo.gl/mLMZ4p), it's hard to interpret these numbers as impossible to overcome. It probably won't happen in the next five years, and as Hinton rightly notes, who knows what will happen beyond that. The future success of AI is not fated to happen. But we can still heap criticism on those who claim, as Floridi does, that "No conscious, intelligent entity is going to emerge from a Turing Machine." Such views have no place in contemporary discussions of AI.
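// A quick sanity check on that trillion-fold figure (the 60-year window is from the linked piece; the doubling-time framing is mine):

```python
import math

gain = 1e12                  # "a trillion-fold increase in computing power"
years = 60
doublings = math.log2(gain)  # ~39.9 doublings
print(years / doublings)     # ~1.5 years per doubling, Moore's-law territory
```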

I quoted a passage from the interview crediting the success of commercial AI applications as part of what brought AI out of its winter. It is interesting to think about the socioeconomic viability of AI as critical to its development. It suggests that if AI does not close these large gaps in this iteration, there will be not just technical but also socioeconomic reasons for the failure. The first AI winter was also due to a lack of funding, but that was funding for academic research, not for smartphone apps that everyone uses.

https://en.wikipedia.org/wiki/AI_winter
 
The meaning of AlphaGo, the AI program that beat a Go champ

Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI

Q: Beyond games, then—what might come next for AI?
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller, the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.

Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding?
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
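// Hinton's orders of magnitude are easy to check against each other. A small sketch; the ~300 kW figure is my reading of "hundreds of kilowatts", not a number from the interview:

```python
brain_synapses = 1e15        # "about 1,000-trillion synapses -- 10 to the 15"
biggest_ann    = 1e9         # "about a billion synapses"
print(brain_synapses / biggest_ann)   # 1e6: "about a million times smaller"

alphago_watts = 3e5          # assuming ~300 kW for "hundreds of kilowatts"
brain_watts   = 30           # "about 30 watts ... comparable to a light bulb"
print(alphago_watts / brain_watts)    # 1e4: a ten-thousand-fold energy gap
```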

7 comments on original post
11 comments
 
+Deen Abiola Agreed. However one would think, erroneously, that a person could dedicate more than one millionth of their own neurons to the study and analysis of a single subject without obstruction.

Daniel Estrada

Shared publicly  - 
 
> The faulty logic of the IP [information processing] metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
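// The syllogism in the quote is indeed formally invalid, and that's easy to verify mechanically: build a tiny world where both premises hold and the conclusion fails. A sketch in Python (the entities and their properties are stipulated purely for the countermodel):

```python
entities = ["laptop", "honeybee"]
is_computer    = {"laptop": True, "honeybee": False}
is_intelligent = {"laptop": True, "honeybee": True}   # bees behave intelligently
processes_info = {"laptop": True, "honeybee": False}  # stipulated for the model

# Premise 1: all computers are capable of behaving intelligently.
p1 = all(is_intelligent[e] for e in entities if is_computer[e])
# Premise 2: all computers are information processors.
p2 = all(processes_info[e] for e in entities if is_computer[e])
# Conclusion: all intelligent entities are information processors.
c  = all(processes_info[e] for e in entities if is_intelligent[e])

print(p1, p2, c)   # True, True, False: the conclusion doesn't follow
```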

// Here's another shockingly bad popular article from a distinguished scholar who ought to know better. Like the Floridi article I criticized a few days ago (https://goo.gl/ZSGGrt), these criticisms aren't just differences of opinion. Both authors make elementary mistakes explaining the theory they purport to critique, mistakes that would be embarrassing in an undergraduate cognitive science course. The above quoted argument is a complete strawman of the view he rejects, and the implications he draws from it are utterly ridiculous. These mistakes do nothing more than reveal their authors to be either shameless partisans or astonishingly ignorant of the subjects about which they are supposed to be experts.

Epstein's view is that the information processing metaphor, where we talk about how brains "store" "representations" in "memory" for "processing" and the like, is simply a bad metaphor taken from computer science (the fanciest machines of our day) and grafted awkwardly onto explanations of how the brain works. Epstein insists that this metaphor does not fit the brain at all, and that the whole IP framework should be abandoned, as it has become an impediment to better theories in psychology. He says:

> Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

> But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

// To be clear, he's not just criticizing popular discussions of these topics on blogs and in magazines for being fast and loose with the IP metaphor. No, he's criticizing long established theories in psychology and cognitive science describing the information processing architecture of the brain. In other words, he's critiquing the use of these terms and theories in their full technical sense. He's arguing that information processing as a research paradigm in cognitive science is mistaken, and that decades of theoretical work should be abandoned.

But it is clear from this article that Epstein doesn't understand the first thing about information processing, computer science, or cognitive science. His arguments here are shockingly bad, completely misrepresenting these fields. Fortunately, I don't have to make these arguments myself; the internet has generated some decent backlash against the article, and several authors have come forward with point-by-point rebuttals. I'll document some below:



> Dr. Epstein made it very clear that he doesn’t understand computers.
https://sergiograziosi.wordpress.com/2016/05/22/robert-epsteins-empty-essay/

> Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn't) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).
http://recursed.blogspot.com/2016/05/yes-your-brain-certainly-is-computer.html?m=0

> Nowhere does the IP thesis assert that we represent the world in specific, byte-like patterns. ‘Information’ doesn’t just mean ‘things that are coded into bytes.’
http://lukependergrass.work/blog/the-information-processing-brain
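// The "storage and rewriting" point in the second rebuttal can be made concrete in a few lines: a minimal Turing-style machine (a tape, a head, a rewrite rule) that increments a binary number. Nothing silicon-specific is involved; the same table could be run with pencil and paper.

```python
def run_tm(tape):
    # Tape as a sparse dict of cells; the head starts at the rightmost bit.
    cells = dict(enumerate(tape))
    head, state = max(cells), "carry"
    while state != "halt":
        if cells.get(head, "0") == "1":
            cells[head] = "0"      # rewrite 1 -> 0 and carry leftward
            head -= 1
        else:
            cells[head] = "1"      # rewrite 0 -> 1 and stop
            state = "halt"
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "0") for i in range(lo, hi + 1))

print(run_tm("1011"))   # -> 1100 (11 + 1 = 12 in binary)
```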

// In fact, the comments section on Aeon for this article is quite good, with representatives of both the representationalist and anti-representationalist (embodied) camps offering more cogent arguments for their positions. Both sides of this argument seem to agree that Epstein's article is disappointing and completely misrepresents the field and the debate.

https://aeon.co/conversations/does-the-idea-that-your-brain-is-an-organ-responding-to-stimuli-change-your-sense-of-self

// If anyone finds more good responses to this article, I'd love to collect them here. I've seen this article shared many times, and it would be nice to point people somewhere for a comprehensive response.


https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
via +Massimo Pigliucci, who says "I think the article is right on target." lol
https://plus.google.com/111907992359490335188/posts/4F9SEpLTBNZ
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
31 comments
 
what is hypercomputation

Key phrases: physical value, hypercomputation

Top sentence: Quantum Hypercomputation—Hype or Computation?

A real computer (a sort of idealized analog computer) can perform hypercomputation [4] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation.

Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. "Three counterexamples refuting Kieu's plan for 'quantum adiabatic hypercomputation'; and some uncomputable quantum mechanical tasks".

Mike Stannett, The case for hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 8–24, Special Issue on Hypercomputation

Quantum Hypercomputation—Hype or Computation?

Sentence ratio: 0.16; word ratio: 0.35. Summary count: 97.0; original count: 278.0

Bigrams: Applied Mathematics, higher level, infinite computation
Sentiment: more negative than positive but mostly neutral
Style: mostly opinionated

(19) It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so, since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. [14] According to a 1992 paper, [15] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole [16] could theoretically perform non-Turing computations. [17][18] Access to a CTC may allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable. [19][20]

Daniel Estrada

Shared publicly  - 
 
 
Ray Kurzweil is building a chatbot for Google

It's based on a novel he wrote, and will be released later this year. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."
Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The...
View original post
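// The "ingest your blog" idea has a long lineage; at its crudest it's just a Markov chain over someone's text. A toy sketch of that crude version (certainly not what Kurzweil's team is building):

```python
import random

def train(text):
    """Learn word-to-word transitions from a writing sample."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, length=12):
    """Generate text in roughly the sample's style."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = "the mind is a process the mind is a pattern the pattern persists"
print(babble(train(sample), "the"))
```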

Daniel Estrada

Shared publicly  - 
18 comments
 
+Daniel Estrada "I didn't start off being an ass."

And so another autobiography chapter title is born. 

Daniel Estrada

Shared publicly  - 
 
 
Robotic systems that adapt and learn, and robots with knives, what could possibly go wrong?

Imagine for a moment the following commands.
1) take this knife.
2) chop all these "5" cookies into little pieces.
3) complete this task in as short a period of time as you can.

Enter the lab assistant, who takes one of the cookies and eats it ... those commands and that action are a combo waiting to go seriously wrong.
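// The hazard here is essentially a stale world model: a plan compiled against "these 5 cookies" that keeps executing after the world has changed. In toy Python form (re-checking the world before each step is the point):

```python
cookies = ["c1", "c2", "c3", "c4", "c5"]
plan = [("chop", i) for i in range(5)]   # compiled from "these 5 cookies"

cookies.pop()                            # the lab assistant eats one

for action, i in plan:
    if i >= len(cookies):                # re-check reality before acting
        print(f"step {i}: target missing -- a safe robot should replan, "
              "a literal-minded one may improvise badly")
        break
    print(f"step {i}: chop {cookies[i]}")
```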

To misquote someone else, "I am not afraid of smart AI, I am afraid of the really stupid ones".

Herb Mugface - YouTube Channel (Herb is the robot below)
https://www.youtube.com/channel/UCv0BqZMqV5xNa5JOkibxOpw
"We never taught it to do that," says one researcher.
7 comments on original post

Daniel Estrada

Shared publicly  - 
 
> The appeal of risk scores is obvious: The United States locks up far more people than any other country, a disproportionate number of them black. For more than two centuries, the key decisions in the legal process, from pretrial release to sentencing to parole, have been in the hands of human beings guided by their instincts and personal biases.

If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it’s wrong in one direction, a dangerous criminal could go free. If it’s wrong in another direction, it could result in someone unfairly receiving a harsher sentence or waiting longer for parole than is appropriate.
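// The two error directions the passage describes are just the two off-diagonal cells of a confusion matrix. A minimal sketch with invented labels, purely to show the bookkeeping:

```python
# 1 = scored high risk / actually reoffended; data invented for illustration.
predicted_high_risk = [1, 1, 0, 0, 1, 0, 1, 0]
actually_reoffended = [1, 0, 0, 0, 0, 0, 1, 1]

fp = sum(p and not a for p, a in zip(predicted_high_risk, actually_reoffended))
fn = sum(a and not p for p, a in zip(predicted_high_risk, actually_reoffended))

print(f"false positives (harsher treatment than warranted): {fp}")
print(f"false negatives (dangerous defendant scored low):   {fn}")
```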

The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.

But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”

Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
via Randall Villarreal



There’s software used across the country to predict future criminals. And it’s biased against blacks.
2 comments
 
That's because blacks commit a disproportionate amount of crime.

Daniel Estrada

Shared publicly  - 
 
 
Google is working on a project that will test the artistic ability of AI

Google is working on a new project to determine if artificial intelligence can ever be truly creative. The project, called Magenta, was unveiled this weekend at Moogfest in Durham, North Carolina, Quartz reports. During a presentation at Moogfest, Google Brain researcher Douglas Eck said the goal of the new group is to determine if AI is capable of creating original music and visual art somewhat independently of humans. The Magenta team will use Google's open-source machine learning software TensorFlow, and try to "train" AI to make art. They'll create tools (and make them available to the public), like one that helps researchers import data from MIDI files into TensorFlow, Quartz reports.
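// Magenta's own tooling isn't shown here, but the data-prep step the article gestures at can be sketched with the mido library: pull (pitch, duration) pairs out of a MIDI file so a model can train on them as sequences. The file path is hypothetical.

```python
import mido

def midi_to_notes(path):
    """Extract (pitch, duration-in-seconds) events from a MIDI file."""
    notes, now, started = [], 0.0, {}
    for msg in mido.MidiFile(path):
        now += msg.time                      # delta time in seconds here
        if msg.type == "note_on" and msg.velocity > 0:
            started[msg.note] = now          # note starts sounding
        elif msg.type in ("note_off", "note_on") and msg.note in started:
            notes.append((msg.note, now - started.pop(msg.note)))
    return notes

# e.g. print(midi_to_notes("melody.mid")[:10])   # "melody.mid" is hypothetical
```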
3 comments on original post

Daniel Estrada

Shared publicly  - 
 
// It's like softcore HowToBasic.
 
Meet Stephanie Sarley, an artist who fingers fruit on Instagram. In this interview, she talks about how she came up with the concept, people's response to her work, social network censorship, copyright infringement, and the tragedy of becoming a meme.
5 comments on original post
Daniel's Collections
People
Have him in circles
30,624 people
Yuniors Motaas, Jackey Singh, Wayou Liu, Rabid Monkey, Adrienne Stapleton, Steven Blocker, Madhusudana S (Mr MS), Adam Saleh (Adamaccelerated), Dave Mongar
Places
Map of the places this user has lived
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA
Story
Tagline
Robot. Made of smaller robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Fractional Actors and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Norbert Wiener, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google. 

______


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.

Work
Occupation
Internet
Basic Information
Gender
Male
Other names
eripsa
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com


Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.