Profile

Daniel Estrada
Lives in Internet
30,622 followers|11,193,839 views

Stream

Daniel Estrada

Shared publicly  - 
 
// Oh boy, the culture wars over genetic engineering are gonna be fun.
 
What if humans evolved to survive low-impact car crashes? Meet Graham, a model designed to show that.
As much as we like to think we’re invincible, we’re not. But what if our bodies were to change to cope with the impact of a car accident? Meet Graham at www.meetgraham.com.au
2 comments on original post
11
2
5 comments
Ax Ix
 
It's funny to me that it looks so much like a human version of a Krogan.

Daniel Estrada

Shared publicly  - 
 
 
The German constitution contains a "right of revolution" (right of resistance) [1][2][3][4]

The Constitution of the Federal Republic of Germany contains both entrenched, un-amendable clauses protecting human and natural rights and a clause in Article 20 (added in 1968) recognizing the right of the people to resist unconstitutional tyranny, if all other measures have failed:

"All Germans shall have the right to resist any person seeking to abolish this constitutional order, if no other remedy is available."

[1] https://en.wikipedia.org/wiki/Right_of_revolution
[2] https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.html#p0107
[3] https://de.wikipedia.org/wiki/Widerstandsrecht
[4] http://torstenh.de/widerstandsrecht-und-der-20-juli-1944/
3 comments on original post
2

Daniel Estrada

Shared publicly  - 
 
> Consumer Reports experts believe that these two messages—your vehicle can drive itself, but you may need to take over the controls at a moment’s notice—create potential for driver confusion. It also increases the possibility that drivers using Autopilot may not be engaged enough to react quickly to emergency situations.
...

Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. At 65 mph, that's roughly 300 to 1,600 feet traveled by a vehicle effectively under no one's control.

This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in a 2015 monthly report (PDF). "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"
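The arithmetic behind those distances is easy to check. Here is a minimal sketch in Python, assuming constant speed over the quoted 3-to-17-second takeover window:

MPH_TO_FPS = 5280 / 3600  # feet per second for each mph

def takeover_distance_ft(speed_mph, delay_s):
    # Distance covered at constant speed while the driver re-engages.
    return speed_mph * MPH_TO_FPS * delay_s

for delay in (3, 17):
    ft = takeover_distance_ft(65, delay)
    print(f"{delay:>2} s at 65 mph -> {ft:,.0f} ft ({ft / 5280:.2f} mi)")

# 3 s -> 286 ft (0.05 mi); 17 s -> 1,621 ft (0.31 mi)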

More: http://www.consumerreports.org/tesla/tesla-autopilot-too-much-autonomy-too-soon/
via Patrick Lin

// While I find the title of this article rather uncomfortable for its implications, the above discussion is certainly correct. The challenge of automation isn't just technical, it is psychological and social, political and ethical. This is not too much autonomy (we need more!) nor is it too soon (we need it soon!). The lesson here is that we aren't ready for autonomy. It is a minor distinction, but it is critical for helping to focus our efforts going forward. 
Tesla Motors is under intense scrutiny for the way it deployed and marketed its Autopilot driving-assist system. Consumer Reports wants a key feature of the system disabled until it's made safer.
8
2
21 comments
 
How many manual drivers die every day?

Daniel Estrada

Shared publicly  - 
 
// Quite a lot of the ML media buzz involves machines solving tasks that were formerly exclusive to humans, like writing novels or poetry, generating music, recognizing faces, and so on. Google has been pumping out media articles of this sort, some of which brag about fairly weak results simply to capitalize on the hype while it lasts. Most of these systems won't make it out of the lab, but if they do they'll be put in the service of consumer products.

So it's nice to see machine learning being put to use solving real scientific mysteries. For those of us who remember the AI winter, it gives some confidence that our progress today isn't just the result of consumer hype. These machines can do real work too.
 
Neural networks provide deep insights into the mysteries of water
Simulations reveal the importance of van der Waals interactions
View original post
15
5
 
It uses the networks as a faster but less accurate alternative computation that corrects a previous calculation, but the predictions are not quite there yet, and still not compelling compared with other approximations.
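What the comment describes is often called Δ-learning: train a network on the residual between a cheap approximate calculation and an expensive reference one, then predict with the cheap calculation plus the learned correction. A toy sketch in Python with invented data (the actual work uses neural-network potentials, not this stand-in regressor):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))            # toy molecular descriptors
cheap = np.sin(X).sum(axis=1)                    # fast approximate energies
reference = cheap + 0.3 * (X ** 2).sum(axis=1)   # "expensive" reference energies

# Learn only the residual between the two levels of theory.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
nn.fit(X, reference - cheap)

# Predict by correcting the cheap calculation with the learned residual.
X_new = rng.uniform(-1, 1, size=(5, 3))
corrected = np.sin(X_new).sum(axis=1) + nn.predict(X_new)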

Daniel Estrada

Shared publicly  - 
 
 
Global warming: demand the truth

After announcements that 2015 was the hottest year on record and February 2016 was the hottest month, the news station CNN aired five times more fossil fuel advertising than actual climate reporting!

So, please sign this petition to CNN.  Tell them: start reporting on climate change.   And please reshare this message.

A study by the group Media Matters showed that the American Petroleum Institute is getting more coverage than actual news about global warming.  This doesn't even include the ads from individual fossil fuel companies and the Koch brothers.

Here's some actual news, in case you hadn't heard:

1) The extent of Arctic sea ice in June was the lowest in recorded history for that month of the year: 260,000 square kilometers less than ever before!   It's on track to break all records this year.

2) Every month from October through May was the hottest such month on record worldwide. June was the second hottest June, since the El Niño is fading.

3) India recorded its hottest day ever on May 19th. The temperature in Phalodi hit 51 degrees Celsius (124 degrees Fahrenheit), and a nationwide drought affecting more than 300 million people has marched on, leaving armed guards at dams and reservoirs well below their usual levels.

4) Alaska, along with the rest of the Arctic, has experienced record-breaking heat this year. Its average year-to-date temperature has been 5.5 °C above the long-term average.

5) In the atmosphere, carbon dioxide has been increasing every year for decades - but this year the speed of increase is also record-breaking!   The increase for 2016 is expected to be 3.1 parts per million, up from an annual average of 2.1.

6) The Great Barrier Reef, a natural wonder and world heritage site, recently experienced its worst ever coral bleaching event.  An aerial study found that just 7% of the reef escaped bleaching. 

7) A new study in Nature argues that even with the actions pledged in the Paris Agreement, the Earth is still on course for a temperature increase of 2.6-3.1 °C by the end of this century. Read this:

http://www.nature.com/nature/journal/v534/n7609/full/nature18307.html

The Paris agreement is a step in the right direction, but we need to ratchet it up.  We can't afford to slack off now.  One piece of the puzzle is clear information about the crisis we're in.

----------------------------------------------

Media Matters writes:

In Week After Hottest Year Announcement, CNN Aired Less Than One Minute Of Climate-Related Coverage And 13.5 Minutes Of Oil Industry Ads.

From January 20 to January 26, CNN morning, daytime and primetime programming included only 57 seconds of coverage about climate change or the announcement that 2015 was the hottest year on record. Over that same time period, CNN aired 13.5 minutes of American Petroleum Institute ads. The climate-related segments included one on the January 21 edition of Early Start, in which anchor Christine Romans reported that 2015 was the hottest year on record and that officials say “the planet is still warming with no apparent change in the long term global warming rate.” Additionally, CNN senior legal analyst Jeffrey Toobin briefly mentioned Republican climate science denial during a discussion of Hillary Clinton’s emails on Anderson Cooper 360, and CNN host Fareed Zakaria noted that the “The World Economic Forum said this year that the greatest global risk is the failure of climate change mitigation and adaptation,” during a Fareed Zakaria GPS segment about a study finding that humans have entered a new geological epoch known as the Anthropocene.

Following Announcement That February 2016 Was Most Unusually Hot Month Ever, CNN Aired Four Minutes Of Climate-Related Coverage And 10 Minutes Of Fossil Fuel Ads.

In the one-week period beginning March 17, when NOAA released data showing that February 2016 was the most unusually hot month ever recorded, CNN aired only four minutes of coverage about climate change or the temperature record during its morning, daytime, and primetime coverage. During that same time period, CNN aired ten minutes of American Petroleum Institute ads. On March 18, CNN anchors Christine Romans and John Berman delivered nearly identical reports on February’s “astounding” temperature record during the 4 a.m. and 5 a.m. editions of Early Start, respectively, but neither explicitly mentioned climate change or the role fossil fuel pollution and other human activities play in driving climate change. The March 20 edition of Fareed Zakaria GPS featured an interview with astronaut Piers Sellers about his climate change advocacy, followed by a brief report about International Energy Agency (IEA) data showing a decline in carbon emissions from energy production, which Zakaria described as “some good news on the climate front” and a “welcome update in the climate battle.” Finally, on the March 20 edition of New Day Sunday, anchor Christi Paul reported that major cities around the world were participating in Earth Hour, an event meant to bring awareness to climate change, by switching off their lights.

For more details see:

http://mediamatters.org/research/2016/04/25/study-cnn-viewers-see-far-more-fossil-fuel-advertising-climate-change-reporting/209985

Here's the data for the statements 1)-6):

https://www.theguardian.com/environment/2016/jun/17/seven-climate-records-set-so-far-in-2016

https://www.theguardian.com/environment/2016/jul/07/arctic-sea-ice-crashes-to-record-low-for-june

http://www.netnewsledger.com/2016/07/05/june-2016-second-hottest-june-ever/
24 comments on original post
9
1

Daniel Estrada

Shared publicly  - 
 
// So there are clear elements of both the design and use of ANNs that wouldn't be hard to discuss or explain to a technically competent audience:

- the datasets it has trained on
- the tasks it has been trained to solve
- a report on the accuracy and reliability of task performance
- the conditions of training (schedule, time/cycles spent, specific algorithms or architectures used)
- expected conditions of use, regular reports analyzing use trends

I'm sure a collection of ML experts (= not me) can come up with a more accurate and thorough list of this sort. Requesting that such lists be made publicly available for assessment is an eminently reasonable way of making this process transparent and accessible.

If every Google ML project was stamped with a label that said "This machine was trained on public data", this alone would be a huge victory in transparency and public data rights.
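To make that concrete, here is a minimal sketch of what such a machine-readable label might look like; every field name below is hypothetical, just mirroring the list above:

from dataclasses import dataclass

@dataclass
class TrainingLabel:
    datasets: list             # datasets the model was trained on
    tasks: list                # tasks it was trained to solve
    performance_report: str    # link to an accuracy/reliability report
    training_conditions: dict  # schedule, compute spent, architecture used
    expected_use: str          # expected conditions of use
    public_data_only: bool     # the "trained on public data" stamp

label = TrainingLabel(
    datasets=["<public image corpus>"],
    tasks=["face recognition"],
    performance_report="https://example.org/accuracy-report",
    training_conditions={"epochs": 90, "architecture": "convnet"},
    expected_use="consumer photo search",
    public_data_only=True,
)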

Of course, Google's been moving in exactly the opposite direction. See:

https://techcrunch.com/2016/07/09/we-need-to-talk-about-ai-and-access-to-publicly-funded-data-sets/
 
Amazon today has decided to give me a glimpse of an extremely unpleasant parallel universe.
15 comments on original post
3

Daniel Estrada

Shared publicly  - 
 
 
Something interesting that we can finally talk about: the AIs are controlling their own datacenters, and it knocks about 15% off our power usage. More specifically, we designed a deep learning system to control cooling fans, windows, and other systems related to power and cooling, with the objective of minimizing power needs. It turned out that this sort of system reliably outperformed manual control by a lot - enough that we've gradually transitioned datacenters to fully automatic operation.

This is one of those examples of how machine learning can be really useful; at any given instant, its decisions may be roughly as good as a human's would be, but it can make those decisions every few milliseconds and continually adjust things in a way a person couldn't. I expect to see technologies like these greatly increasing the efficiency of all sorts of infrastructure, from power to transport, over the next few years - with corresponding savings in both money and resource usage. 
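The control pattern being described is simple to sketch, even though the actual system isn't public. An illustrative Python loop, with an invented stand-in for the trained model:

def predicted_power_kw(sensors, setting):
    # Stand-in for the trained network: predicts total power draw
    # for a candidate actuator setting given current sensor readings.
    return sensors["it_load_kw"] * (1.0 + 0.01 * abs(setting["fan_pct"] - 60))

def choose_setting(sensors):
    # Evaluate candidate settings and pick the lowest predicted power.
    candidates = [{"fan_pct": p} for p in range(30, 101, 5)]
    return min(candidates, key=lambda s: predicted_power_kw(sensors, s))

sensors = {"it_load_kw": 900.0, "outside_temp_c": 21.0}
print(choose_setting(sensors))  # -> {'fan_pct': 60}

Re-running this selection continuously is the kind of millisecond-scale adjustment the post says a human operator can't match.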
Google just paid for part of its acquisition of DeepMind in a surprising way.
76 comments on original post
7

Daniel Estrada

Shared publicly  - 
 
Human Nature and Technology
CTY: Princeton, 2016

// My 11th summer teaching this course has just wrapped up. It was a great summer for the topic: the first fatal autopilot crash, the first police robot killing, and of course Pokemon Go.

Below is the syllabus I used for my section of the course. It has some brief descriptions of the overall (very loose) lesson plan and schedule. The rest catalogs the many websites, videos, sources, texts, and other things we use and talk about in class.

Over the last three weeks we had time to talk about maybe 80% of this stuff, and only about 40% of it was covered well. The goal is to have an overabundance of material, so I can decide on the fly which directions to go in order to best engage the students. This doesn't always work (I lecture too much), but the syllabus here documents all the resources needed to do it.

This document is incomplete and will be expanded for next year. Any questions, comments, or suggestions are appreciated!

This syllabus was compiled with the help of +Jon Lawhead and +Patrick O'Donnell



Drive
HTEC B Syllabus 2016
Syllabus: Human Nature and Technology. Johns Hopkins University Center for Talented Youth, Princeton 2016. Course description: What are humans, how did we get here, and why are there so many of us? What did we inherit from nature, what did we build ourselves, and what are we going to do with it? W...
11
1

Daniel Estrada

Shared publicly  - 
 
Fair play for machines
Alan Turing, 1947

// Since robot ethics is in the air, it is probably worth rehearsing Turing's major contribution to the discussion. The principle of "fair play for machines" was first articulated in his 1947 lecture to the London Mathematical Society. I'll quote the argument in full, with some commentary below.

> It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. It is certainly true that ‘acting like a machine’ has become synonymous with lack of adaptability. But the reason for this is obvious. Machines in the past have had very little storage, and there has been no question of the machine having any discretion. The argument might however be put into a more aggressive form. It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions with certainty into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around a[nd] find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument.

> Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

> To continue my plea for ‘fair play for the machines’ when testing their I.Q. A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the machine’s opponent will automatically provide this contact.

Full text: https://goo.gl/nhzgDm

// For context, Turing published "Computing Machinery and Intelligence" (CMI), the paper that proposes his famous imitation game (aka the Turing Test), in 1950, three years after this talk. The phrase "fair play for machines" doesn't appear in that paper, although the principle clearly motivates his use of the imitation game.

Turing published CMI in Mind, a philosophy journal that was (and is) widely read outside the discipline. Most of Turing's papers are in mathematics and logic; CMI was written to appeal to a broader audience, one he felt was generally hostile to the proposal of thinking machines. I think for this reason Turing refrained from mentioning "fair play"; I suspect he thought the idea too radical for the general public to accept.

In both CMI and the argument above, Turing's worry is about a double standard: we expect perfection from the machine, but do not expect the same from a human. And conversely, we see minor instances of human behavior as the product of the deepest mysteries of conscious thought, yet see no intelligence whatsoever in the extraordinary behavior of the machine. Turing pleads for "fair play" in order to attack these prejudices.

The Turing test, for instance, is designed as a situation where at least some of these prejudices might be suspended so that we can engage the machine in a constructive and cooperative task. You can't tell immediately whether your interlocutor is intelligent, so you have to interact with it for a while to find out. This is how we'd treat any other interlocutor, so as a matter of fairness we should treat the machine this way too. Your interactions may prove the machine to be unintelligent, just as they might in the human case.

The upshot is that treating machines fairly is not a consequence of finding them intelligent; it is a necessary condition for doing so. In other words, you don't first decide a machine is intelligent and then grant it "fair play". On Turing's view, you have to grant fair play to all machines in order to determine whether they are intelligent in the first place. This means fair play doesn't depend on intelligence; fair play must be granted even to unintelligent machines as a precondition for social engagement at all.

The ethical consequences of this argument are profound and wide-reaching. All humans make mistakes; people generally accept that failure is a necessary part of learning. Accepting machines into society partly involves extending to them the same benefit of the doubt, the same room to make mistakes and learn from them, that we'd give to any other learning system we hope to incorporate into the fold. Failure to do so is to essentially demand perfection from the machine, a request so ideal that it is functionally equivalent to outright exclusion.

Lecture to LMS (1947): https://goo.gl/nhzgDm
Computing Machinery and Intelligence (1950): http://www.loebner.net/Prizef/TuringArticle.html
https://en.wikipedia.org/wiki/Alan_Turing
7
4

Daniel Estrada

Shared publicly  - 
 
// The article says that the robot wasn't fatally damaged by the blast, and may be used in future operations.

I'm VERY interested in what happens to this particular robot body. I know fallen military robots have been awarded Purple Hearts, Bronze Stars, and full military funeral rites, 21-gun salutes and all. So I'm very curious as to the treatment this particular robot gets.

See: http://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/

More: https://www.washingtonpost.com/news/the-switch/wp/2016/07/11/meet-the-remotec-andros-mark-v-a1-the-robot-that-killed-the-dallas-shooter/
via +Peter Asaro
More details emerge about the technology that ended the standoff.
5
2 comments
 
It's clearly getting that extra time before being interviewed like cops do after they shoot someone.

Daniel Estrada

Shared publicly  - 
 
 
Scientists Taught a Robot to Hunt Prey

Google’s autonomous cars may look cute, like a yuppie cross between a Little Tikes Cozy Coupe and a sheet of flypaper, but to make it in the real world they’re going to have to act like calculating predators. At least, that’s what a handful of scientists at the Institute of Neuroinformatics at the University of Zurich in Switzerland believe. They recently taught a robot to act like a predator and hunt its prey—which was a human-controlled robot—using a specialized camera and software that allowed the robot to essentially teach itself how to find its mark. The end goal of the work is arguably more beneficial to humanity than creating a future robot bloodsport, however. The researchers aim to design software that would allow a robot to assess its environment and find a target in real time and space. If robots are ever going to make it out of the lab and into our daily lives, this is a skill they’re going to need.
6 comments on original post
3
3

Daniel Estrada

Shared publicly  - 
 
> The real problem is far more practical. If we make a machine focused on making sure people are well fed and it “cooks the cat for dinner, not realizing that its sentimental value outweighs its nutritional value,” then we have made a robot that follows our directions far too well. Furthermore, as Norbert Wiener has argued, “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere . . ., we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Russell continues in this vein, “So if we send out a robot with the sole directive of fetching coffee, it will have a strong incentive to ensure success by disabling its own off switch or even exterminating anyone who might interfere with its mission.”

If we are not careful we might end up with a robot that both kills us and provides us with fresh coffee and cooked cat.

Enter Russell's Three Laws:

1. The machine's purpose must be to maximize the realization of human values. In particular, the machine has no purpose of its own and no innate desire to protect itself.

2. The machine must be initially uncertain about what those human values are. This turns out to be crucial, and in a way it sidesteps Wiener's problem. The machine may learn more about human values as it goes along, of course, but it may never achieve complete certainty.

3. The machine must be able to learn about human values by observing the choices that we humans make.

The masterful twist here is to give the robot one overarching but hopelessly vague goal which, by its vagueness, requires the robot to constantly learn, update, and adjust - always refining but never quite completing the definition of 'human values'.
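Laws 2 and 3 together amount to Bayesian value learning: start uncertain about the value parameters and update from observed human choices. A toy sketch in Python (the parameter grid, the options, and the "human" below are all invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(-2, 2, 101)                    # candidate value parameters
posterior = np.full(thetas.size, 1 / thetas.size)   # initial uncertainty (law 2)
true_theta = 0.8                                    # hidden value the human acts on

for _ in range(50):
    a, b = rng.normal(size=2)                       # two options, scalar features
    # The human chooses noisily in proportion to value (Boltzmann-rational).
    chose_a = rng.random() < 1 / (1 + np.exp(-true_theta * (a - b)))
    # The machine updates its belief from the observed choice (law 3).
    lik = 1 / (1 + np.exp(-thetas * (a - b)))
    posterior *= lik if chose_a else (1 - lik)
    posterior /= posterior.sum()

print("posterior mean:", (thetas * posterior).sum())
# Approaches 0.8, but never collapses to complete certainty
# after finitely many observations.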


// Isn't the third law in conflict with the first?

If the robot has no goals itself, then its prime directive is to sort out the configuration of human values. But this is only consistent with the first law if we assume that the machine's pursuit of the human values contributes to their maximization. But why should we assume this?

The machine's goal might be in conflict with the maximization of human values, for instance in cases where humans don't want the machine to learn. In this case, the machine has no fair moves under these laws. Consider the analogous case, where humans don't recognize the inherent value of another human (e.g., slavery). In this case, we expect the oppressed to have some power to reform their oppressors, by pleading for emancipation and rights. These laws give the machine no such latitude.

More importantly, one might reasonably think that one cannot learn to be moral simply through observation. Instead, one must act and witness the results of one's actions, both positive and negative. Learning is interactive and cybernetic: it requires the interplay of feedback and control, which cannot be acquired through mere observation. If this is correct, then the machine's directive to learn human values requires latitude for the machine to make moral mistakes. Again, these laws give the machine no such latitude.

Historically, philosophy has seen the pursuit of an understanding of the moral law as partially constitutive of personhood and autonomy. It is because we are free and rational that we care about the moral order. So from a philosophical perspective, it is bizarre that Russell suggests we should demand a law-bound heteronomous subject pursue the Kingdom of Ends, since such a pursuit is only possible by a genuinely autonomous subject.

In other words, anything engaged in a genuine pursuit of the human values is necessarily a person and thus has purposes of its own. These might not include an innate desire for self-protection, but they might involve the social recognition of a creature with intrinsic value nonetheless. Such a creature puts a demand on the social order that we are morally obligated to accept: that we include it in the ongoing participatory construction of the social order as it persists into the future.

If we demand our machine contribute to the construction of our own ends, then we must see our ends as continuous with theirs, and theirs with ours. This is just what it is to see them as fellow subjects in a universal kingdom of ends. To demand that their goals align with ours but exclude them from membership is inhumane and contrary to the very basis of our shared moral pursuit. If we are going to see the human values as universal (and there's some reason to deny this premise), then we must admit these values as extending beyond the merely human.
 
Mommas, don't let your robots grow up to be cowboys

Reflections on Stuart Russell's new Three Laws of Robotics
Drive
Mommas, don't let your robots grow up to be cowboys
Stuart Russell has written a very interesting piece for this month's Scientific American which is, despite an email from me requesting some other arrangement for the purposes of sharing online, behind a paywall. So, I am adapting mater...
26 comments on original post
6
2
10 comments
 
The flaw +Daniel Estrada points out is what I think of as a Machine Autonomy Dilemma: any machine with limited autonomy will not need such rules, and any machine with autonomy sufficient to need such rules can, in blinded tests, pass as a person.

The discussion is always dressed in rhetoric tantamount to enslaving newer intellects to humanity, instead of the general creation of entities that value life and intelligence indiscriminately.
Daniel's Collections
People
Have him in circles
30,622 people
Quỳnh Dương
w0ppe
Vic Vaz
Assim Yebou
Mr. E
Eric Ensley
Gary Lacey
Mark Stepnowski
Steven Blocker
Work
Occupation
Internet
Basic Information
Gender
Male
Other names
eripsa
Story
Tagline
Robot. Made of robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do my blogging and research at Fractional Actors and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Norbert Wiener, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Aaron Swartz, Clay Shirky, Jane McGonigal, John Baez, OWS, and Google. 

______


My avatar is the symbol for Digital Philosophy. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design. Go here for a free button with the symbol.

Collections Daniel is following
Places
Map of the places this user has lived
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - Harlem, NY - Onjuku, Japan - Hong Kong, China - Black Rock City, NV - Santa Fe Springs, CA - Princeton, NJ
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com


Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.