Profile

Daniel Estrada
Attends University of Illinois at Urbana-Champaign
Lives in Internet
28,906 followers | 4,066,174 views

Stream

 
 
This is a pretty broad overview, but I thought the ideas were quite coherent given the proposed definitions, and interesting in their potential link with ToM and (infant) development. Also, Michael Graziano states that this model of awareness is feasible in computer systems. It reminds me a lot of early work on giving computational systems representations of their own internal state that allow inference, and of work in robotics on self-modelling.

Daniel Estrada

Shared publicly  - 
 
// A thread from a few weeks back (http://goo.gl/I6OXRN) developed into an interesting discussion on technology and autonomy with +Ben Bogart, who knows what he's talking about and has basically the opposite view from mine. Consolidating the relevant comments below to include a broader audience, hopefully continuing the discussion here.

Ben writes: 

> +Daniel Estrada The reason I find it important to distinguish between human artifacts and life is the apparent ongoing re-framing of technology as "other".

Technology is increasingly seen (especially by the general public and lay-people) as an independent and autonomous pseudo-natural force, thanks to books like "What Technology Wants". I think this notion cuts against human agency in relation to our technological future; if we see technology as a natural force, then we have little impact on its ethics and shape. The idea of it accelerating and taking us over is an obvious extension of that mind-set.

Technology is not "other"; it is an extension of us, it's embedded in human culture and the result of human intention. To see it as anything else is to abdicate responsibility for what it does to us and our social systems. (Did you see my TEDx talk?)

So, given this position, I'm wary of any erosion of the language that marks the distinction between life and technology.

// I respond:

> +Ben Bogart I'm very sympathetic to the idea that technology is an extension of human activity. I'm especially partial to Andy Clark's extended mind thesis (http://goo.gl/rcnnJx) and the relationship between our tools and our selves. So I agree, in general, that technology is an extension of human activity. 

But I think it is vitally important that this is not understood as a limitation on technological autonomy. I think artifacts can be autonomous in the relevant sense while simultaneously being tools of human activity. In fact, defending the possibility of machine autonomy in the face of anthropological considerations about technology is the focal subject of my dissertation (which I'll be defending in about 2 months' time =)

The big-picture outline of my argument can be found here: http://goo.gl/KvXQqY

But for a tl;dr version of the argument, it's enough to consider that I am an autonomous agent, and I am also an extension of my parents and family, and of the broader community in which I developed. I was, in a very important sense, "created" and "designed" by these communities, and there's an important sense in which my behavior reflects on these communities. So they are justified, for instance, in being proud of my successes and disappointed in my failures and otherwise taking some degree of responsibility in evaluating the things I do. 

At the same time, however, I'm an autonomous agent, responsible for my own successes and failures, and capable of acting in the world as an independent and self-constituting being. My deep causal and ethical ties to my communities of origin don't determine my behavior and don't deprive me of autonomy. I am simultaneously an extension of these communities and an autonomous agent, and reconciling these apparently conflicting aspects of my identity is, more or less, what self-consciousness is built to do. 

For the very same reason, the machines we create and deploy into the world are capable of autonomous activity and self-constituting action, and therefore might deserve serious treatment as autonomous agents, all despite the fact that their creation and design is also a product of human activity. This is not a contradiction in terms, as you seem to think it is. It is part of the condition of organisms embedded in complex ecological and social environments. And it is a feature that applies to organisms of all sorts, including the nonbiological ones.

I think this is a pretty deep and important disagreement: you're insisting that we view technological change as nonnatural and under our control, while I agree with Rathenau: “Mechanization is not the result of free, conscious deliberation, expressing mankind’s ethical will. Rather it grew without being intended, or indeed even noticed. In spite of its rational and casuistic structure, it is a dumb process of nature, not one originating from choice.” http://goo.gl/x5yKu5

This is one of my favorite issues in the philosophy of technology, and one that I've done a lot of research on. So I'm happy to keep engaging someone who knows the issue well and falls on the other side =) 

// Ben:

> +Daniel Estrada Good luck on the dissertation; perhaps I will wait to respond seriously when it's finished. :) I, myself, am aiming to defend at the end of the summer.

1. I agree that human agency is not "free" (i.e., it is constrained), but all (living) agency is constrained by its environment.

2. I agree that autonomous technologies could exist; I just think they would not be composed of electromechanical parts.

3. I think there is a problem with the notion of autonomy in a technology, because a technology is developed to serve a particular human utility. There is little value (beyond the philosophical enquiry into autonomy) in a technology whose autonomy does anything beyond satisfying that utility.

The utility of life is to propagate itself. There is no over-arching context of intention that specifies the utility of life. That does not mean that life is not exploited for the utility of another organism, but the difference is in the process of specification and design vs search and exploitation. I'll end with one thought:

Which approach (in the following simplistic dichotomy) better encourages individual and social responsibility in making the world a better place? (a) Emphasize our power as citizens in technological development through technical literacy, buying habits, usage, and political processes to intentionally push technologies in the directions that are best for us, or (b) emphasize the autonomy and power of technology and technologists to solve our problems so we don't have to make the hard choices and be responsible for the effects of our consumption and expansionist-oriented behaviour?

For me, the answer is a strong (a).

// My most recent response: 

> +Ben Bogart There are a lot of interesting things going on here that are worth picking up and running with. I want to emphasize again how sympathetic I am to the pragmatism of your approach.

To keep things focused, though, I'm going to take issue with this claim in particular:

> because a technology is developed to serve particular human utility

I think this is ambiguous between a few different claims, not all of which are true. I think this ambiguity is central to much of the confusion over technology, so I want to be careful in treating the claim.

We can first distinguish between the utility anticipated in the design of an object and the utility actually realized in specific instances of use, and note that the two might diverge arbitrarily. I can use objects in ways utterly foreign to their intended functions, and I can use objects I find lying around that are entirely natural, with no "intended" functions at all. Moreover, I can design objects to be used in ways that they never are (the chair in a design museum that has never been used for sitting, for instance).

History is riddled with instances of technologies whose surprising growth was unexpected even by their designers ("I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943). More interestingly, I think, cognitive science is full of studies that show the spontaneous discovery of new tools in the environment through play. (see: http://goo.gl/YP4ln9)

This research seems to undermine the apparently clear thesis driving your views: "The utility of life is to propagate itself." The claim isn't so much wrong as ambiguous: what "self" is being propagated? If Andy Clark is right, that I am the collection of tools under my control, then propagating my "self" means propagating those environmental resources I exploit as tools that aid in my coordinated persistence. Propagating myself means propagating my environments and communities, because those are parts of who I am. 

So technology, too, propagates itself through our activity, often without us knowing or realizing what we are doing. I don't think technology is out of our control, or that we're merely subject to the forces of nature. But we are natural systems participating in a densely connected community of complex systems, each of which has influence and puts constraints on the development of the other parts. Technology puts constraints on the development of these systems too, and not because we control it or intend it but simply because it is there as an object in the world. Whether we want it there or not.

Global climate change is a particularly striking example of how little foresight we have into the implications of our technological changes, and how little power we have to control these changes once they get going. It might give me hope to believe otherwise, but I don't find much practical benefit in false hopes. 

In any case, the point of the above is to describe how little either our use or our design of technology has to do with its intended purposes. My own argument (which I draw from Turing) is that there are alternative ways of understanding the role machines play in our systems that do not reduce their activity to mere tools of human use. Instead, I think some machines can be understood as genuine participants in shared social activities, and in this context we can treat them as autonomous agents contributing independently to shared social projects.

I think Adam the robot scientist is a pretty clear example: http://www.aejournal.net/content/2/1/1

But more generally I think it's clear that we're psychologically disposed to treat animated machines as autonomous participants distinct from tools. (http://goo.gl/vAvICk)

From this perspective, it's a strange kind of hubris to think that we have control over our machines in any metaphysically meaningful way, or to demand some kind of "special" as-yet-undeveloped technology to arise before we take the contributions of our machines seriously. You don't need to do anything special to be a participant in a community, you just need to integrate with the support and contributions of everyone else. I think this is demonstrated in a non-scary, non-threatening way in the Tweenbot videos:

http://www.tweenbots.com/

I think I'm going to reshare these last few comments for a new post, since this has gotten interesting and might benefit from the input of others. 
 
Where Is This Video?: http://youtu.be/VztSdwYPFCE
The original is whichever version sells for the most money.

Daniel Estrada

Shared publicly  - 
 
 
A Block is the smallest area unit used by the U.S. Census Bureau for tabulating statistics. As of the 2010 census, the United States consists of 11,078,300 Census Blocks. Of them, 4,871,270 blocks totaling 4.61 million square kilometers were reported to have no population living inside them.
...
Commercial and industrial areas are also likely to be green on this map. The local shopping mall, an office park, a warehouse district or a factory may have their own Census Blocks. But if people don’t live there, they will be considered “uninhabited”. So it should be noted that just because a block is unoccupied, that does not mean it is undeveloped.
...
Northern Maine is conspicuously uninhabited. Despite being one of the earliest regions in North America to be settled by Europeans, the population there remains so low that large portions of the state’s interior have yet to be politically organized.

http://mapsbynik.tumblr.com/post/82791188950/nobody-lives-here-the-nearly-5-million-census
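A quick sanity check on those figures. The block counts and the 4.61 million km² are from the quoted post; the ~9.83 million km² total U.S. area is a figure I'm supplying, not from the post:

```python
# Share of 2010 Census Blocks with zero reported population, using the
# numbers quoted above. Total U.S. area is an outside figure, supplied here.
total_blocks = 11_078_300
empty_blocks = 4_871_270
empty_area_km2 = 4.61e6
us_total_area_km2 = 9.83e6  # approximate total U.S. area (land + water)

print(f"{empty_blocks / total_blocks:.0%} of blocks are uninhabited")  # ~44%
print(f"{empty_area_km2 / us_total_area_km2:.0%} of U.S. area")        # ~47%
```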
 
> In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interests they pertain to, in a) alternative information sources (diffusing topics that are neglected by science and main stream media); b) online political activism; and c) main stream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we categorize users according to their interaction patterns among the different topics and measure how a sample of this social ecosystem (1279 users) responded to the injection of 2788 false information posts. Our analysis reveals that users which are prominently interacting with alternative information sources (i.e. more exposed to unsubstantiated claims) are more prone to interact with false claims.

#attentioneconomy #digitalpolitics
Danial Hallock:
I only read your abstract (I'm so far behind on my white paper reading), but I find the conclusion interesting.

So mainstream media, even with their "did the airplane fly into a black hole!" coverage, still have fewer false claims than the alternative sources?
This study, by Gilens and Page, has been cited as proof that the US is an oligarchy, not a democracy. This is not precisely the case. 

Here's the actual conclusion: between the years of 1982 and 2002, policy models incorporating popular opinion have only slightly more predictive power than models incorporating only elite opinion. Every part of that conclusion is subtly different from "America is an oligarchy."

First, look at the time range. Politically, the years 1982 to 2002 are an anomaly: due to the permanent realignment of the Democratic South, the only years of undivided government were 1993 and 1994 -- and even during those years, the Democratic majority was constrained by its reliance on conservative Senators. During the entire period of the study, it was unusually difficult to get anything passed at all.

Second, look at the power actually ascribed to elites. The authors note an asymmetry between elites' ability to pass their agenda and their ability to block initiatives with mass support. Vetoes, unsurprisingly, are easier than ramrodding unpopular initiatives through Congress. But this is exactly what we ought to predict from the division of government! Considering the conditions that existed through the entire period, discussed above, the power to sway just a few officials becomes a de facto, but not de jure, veto.

With that in mind, is it possible that the divided government was engineered by elites to block popular initiatives?

No. Not really. Elites differ from popular opinion on a number of economic issues, but their partisan distribution (and their opinion distribution on most issues) closely tracks that of the 40th-60th percentile. They are slightly more likely to be Republican, and slightly more likely to be liberal or very liberal, but their opinion spread does not differ deeply from that of other Americans.

The right conclusion from this survey (if there is a right conclusion at all -- check the scary-low model fit!) is not that we have drifted into oligarchy, but rather that (a) there is a strong status quo bias in the United States, (b) that the status quo benefits the powerful (and that those whom the status quo benefits will become the powerful), and (c) that divided government can inadvertently turn influence into a veto.
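To make "slightly more predictive power" and "scary-low model fit" concrete, here is a toy illustration with synthetic data. This is my own sketch, not Gilens and Page's method or data (they fit logistic models to 1,779 policy cases); the point is just that when two predictors are strongly correlated and outcomes are noisy, both models fit poorly and nearly equally well:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1779  # Gilens & Page analyze 1,779 policy cases; this data is synthetic

# Hypothetical setup: median and elite preferences share a common component,
# so they are strongly correlated; policy outcomes are mostly noise.
shared = rng.normal(size=n)
median_pref = shared + rng.normal(size=n)
elite_pref = shared + 0.8 * rng.normal(size=n)
outcome = 0.5 * shared + 2.0 * rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

print("median only:", r_squared(median_pref, outcome))  # ~0.03: "scary low"
print("elite only: ", r_squared(elite_pref, outcome))   # nearly identical
```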
iPan Baal:
> The right conclusion from this survey (if there is a right conclusion at all -- check the scary-low model fit!) is not that we have drifted into oligarchy, but rather that (a) there is a strong status quo bias in the United States, (b) that the status quo benefits the powerful (and that those whom the status quo benefits will become the powerful)

You are more or less describing an Oligarchy.

http://en.wikipedia.org/wiki/Oligarchy

Oligarchy (from Greek ὀλιγαρχία (oligarkhía); from ὀλίγος (olígos), meaning "few", and ἄρχω (arkho), meaning "to rule or to command") is a form of power structure in which power effectively rests with a small number of people. These people could be distinguished by royalty, wealth, family ties, education, corporate, or military control. Such states are often controlled by a few prominent families who typically pass their influence from one generation to the next, but inheritance is not a necessary condition for the application of this term.

It seems to me like you're trying too hard to split hairs. Oligarchies control the whole world.

Daniel Estrada

Shared publicly  - 
 
 
Slides and References for my recent lecture series on Social Networks.

Daniel Estrada

Shared publicly  - 
 
 
59% of Americans are optimistic that coming technological and scientific changes will make life in the future better, while 30% think these changes will lead to a future in which people are worse off than they are today. 81% expect that within the next 50 years people will have new organs custom grown in a lab. 51% expect that computers will be able to create art that is indistinguishable from that produced by humans. 39% expect that scientists will have developed the technology to teleport objects. 33% expect that humans will have colonized planets other than Earth. 19% expect that humans will be able to control the weather. 

66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring. 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health. 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most US airspace. 53% think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them. Women are especially wary of a future in which these devices are widespread.

48% would like to ride in a driverless car, while 50% would not. 26% would like to get a brain implant to improve their memory or mental capacity, while 72% would not. Just 20% would like to eat meat that was grown in a lab.

Daniel Estrada

Shared publicly  - 
 
 
A CS question that I don't know the answer to

A conversation on another thread raised an interesting question about computers that I can't figure out the answer to: is judging a Turing Test easier than, harder than, or exactly as hard as passing one?

I figured I would throw this question out to the various computer scientists in the audience, since the answer isn't at all clear to me -- a Turing Test-passer doesn't seem to automatically be convertible into a Turing Test-judger or vice-versa -- and for the rest of you, I'll give some of the backstory of what this question means.

So, what's a Turing Test?

The Turing Test was a method proposed by Alan Turing (one of the founders of computer science) to determine if something had a human-equivalent intelligence or not. In this test, a judge tries to engage both a human and a computer in conversation. The human and computer are hidden from the judge, and the conversation is over some medium which doesn't make it obvious which is which -- say, IM -- and the judge's job is simple: to figure out which is which. Turing's idea was that to reliably pass such a test would be evidence that the computer is of human-equivalent intelligence.
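For concreteness, here is a minimal sketch of that protocol in Python. The Judge interface and the Responder callables are stand-ins I'm inventing for illustration, not any real API:

```python
import random
from typing import Callable, Protocol

Responder = Callable[[str], str]  # takes a message, returns a reply

class Judge(Protocol):
    def ask(self, channel: int) -> str: ...        # question for channel 0 or 1
    def observe(self, channel: int, reply: str) -> None: ...
    def guess(self) -> int: ...                    # which channel is the bot?

def imitation_game(judge: Judge, human: Responder, bot: Responder,
                   rounds: int = 10) -> bool:
    """Run one Turing Test; return True if the judge spots the bot."""
    channels: list[Responder] = [human, bot]
    random.shuffle(channels)                       # hide which is which
    for _ in range(rounds):
        for ch, respondent in enumerate(channels):
            judge.observe(ch, respondent(judge.ask(ch)))
    return channels[judge.guess()] is bot
```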

Today in CS, we refer to problems which require human-equivalent intelligence to solve as "AI-complete" problems; so Turing hypothesized that this test is AI-complete, and for several decades it was considered the prototypical AI-complete problem, even the definition of AI-completeness. In recent years, this has been cast into doubt as chatbots have gotten better and better at fooling people, doing everything from customer service to cybersex. However, this doubt might be real and it might not: another long-standing principle of AI research is that, whenever computers start to get good at a task that was historically considered AI, people redefine AI to be "oh, well, not that, even a computer can do it."

The reason a Turing Test is complicated is that to carry on a conversation requires a surprisingly complex understanding of the world. For example, consider the "wug test," which human children can pass starting from an extremely early age. You make up a new word, "wug," and explain what it means, then have conversations about it. In one classic example, the experimenter shows the kids a whiteboard, and rubs a sponge which he calls a "wug" across it, which (thanks to some dye) marks the board purple. Human children will spontaneously talk about "wugging" the board; but they will never say that they are "wugging" the sponge. (It turns out that this has to do with how, when we put together sentence structures, the grammar we use depends a lot on which object is being changed by the action. This is why you can "pour water into a glass" and "fill a glass with water," but never "pour a glass with water" or "fill water into a glass.") 
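The pour/fill contrast can be captured with a small lexicon recording which argument frames each verb licenses. A toy sketch, with verbs and frame names chosen for illustration, following Pinker's examples:

```python
# Which locative frames each verb licenses. "pour" takes the content as
# direct object, "fill" takes the container, and "load" famously allows both
# ("load hay onto the wagon" / "load the wagon with hay").
ALLOWED_FRAMES = {
    "pour": {"content-as-object"},     # pour water into the glass
    "fill": {"container-as-object"},   # fill the glass with water
    "load": {"content-as-object", "container-as-object"},
}

def grammatical(verb: str, frame: str) -> bool:
    return frame in ALLOWED_FRAMES.get(verb, set())

assert grammatical("pour", "content-as-object")        # pour water into a glass
assert not grammatical("pour", "container-as-object")  # *pour a glass with water
assert grammatical("fill", "container-as-object")      # fill a glass with water
assert not grammatical("fill", "content-as-object")    # *fill water into a glass
```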

It turns out that even resolving what pronouns refer to is AI-complete. Consider the following dialogue:

Woman: I'm leaving you.
Man: ... Who is he?

If you're a fluent English speaker, you probably had no difficulty understanding this dialogue. So tell me: who does "he" refer to in the second sentence? And what knowledge did you need in order to answer that?

(If you want to learn more about this kind of cognitive linguistics, I highly recommend Steven Pinker's The Stuff of Thought [http://www.amazon.com/The-Stuff-Thought-Language-Window/dp/0143114247] as a good layman's introduction.)

In Turing's proposal, the test was always administered by a human: the challenge, after all, was to see if a computer could be good enough to fool a human into accepting it as one as well. But given that we're getting computers which are doing a not-bad job at these tests, I'm starting to wonder: how good would a computer be at identifying other computers?

It might be easier than passing a Turing Test. It could be that a computer could do a reasonable job of driving "ordinary" conversation off the rails (that being a common way of finding weaknesses in a Turing-bot) and, once a conversation had gone far enough away from what the computer attempting to pass the test could handle, its failures would become so obvious that it would be easy to identify.

It might be harder than passing a Turing Test. It's possible that we could prove that any working Turing Test administrator could use that skill to also pass such a test -- but not every Turing Test-passing bot could be an administrator. Such a proof isn't obvious to me, but I wouldn't rule it out.

Or it might be equivalently hard: either equivalent in the practical sense, that both would require AI-completeness, or equivalent in the deeper mathematical sense, that if you had a Turing Test-passing bot you could use it to build a Turing Test-administering bot and vice-versa. 
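Here is a sketch of what one direction of that reduction might look like: a judge built from a hypothetical Test-passing bot. The assumption (and it is an assumption) is that a passer exposes some measure of how humanlike a reply is, written here as passer_likelihood; this is not a real API:

```python
# Sketch: turning a hypothetical Turing Test-passer into a judge.
# `passer_likelihood(question, reply)` is an assumed interface returning the
# probability the passer's model of human conversation assigns to a reply.

def judge_with_passer(transcript_a, transcript_b, passer_likelihood,
                      threshold: float = 0.01) -> int:
    """Guess which of two (question, reply) transcripts came from the bot:
    the one with more replies the passer's model finds un-humanlike."""
    def oddities(transcript):
        return sum(1 for q, r in transcript
                   if passer_likelihood(q, r) < threshold)
    return 0 if oddities(transcript_a) > oddities(transcript_b) else 1
```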

If there is a difference between the two, then this might prove useful: for example, if it's easier to build a judge than a test passer, then Turing Tests could be the new CAPTCHA. (Which was +Chris Stehlik's original suggestion that sparked this whole conversation) 

And either way, this might tell us something deep about the nature of intelligence.
 
Turing's test is not as reliable as it is commonly considered to be. From my point of view, the question of guessing (recognizing) who is who is an infinite function, so a short enough conversation cannot provide meaningful confidence in the result. On the other hand, a well-trained AI system in some narrow area, tested against an inexperienced human, could produce false results in this test. I have heard about a very similar competition in the computer-games area: developers of the winning AI system even came to the point that, in order to mimic a human in full, some small amount of "incompetent" or "irrelevant" behaviour had to be provided, because humans are not so perfect.
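The point about conversation length can be made concrete with a toy model (my own simplification, assuming each exchange independently exposes the bot with probability p):

```python
# Toy model: if each exchange independently exposes the bot with
# probability p, the judge's chance of catching it within n exchanges is
# 1 - (1 - p)^n, so short conversations give little confidence.
def detection_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(detection_probability(0.05, 5))   # ~0.23 after 5 exchanges
print(detection_probability(0.05, 60))  # ~0.95 after 60 exchanges
```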
 
> Alone, a single cell of Pseudomonas aeruginosa—the bacteria blamed for many hospital-acquired infections—can’t cause much damage to the human body. In fact, the bacteria won’t even produce virulence factors, the compounds that make it pathogenic to humans, if it doesn’t sense neighbors. But add a few thousand other cells of P. aeruginosa, and suddenly the bacteria aren’t lone warriors; they’re a team. When they sense the presence of unique signaling molecules produced by their allies, the cells start making those virulence factors, ramping up to cause an infection.

In a new PNAS paper, Bassler and her colleagues report the first ever molecule that stops P. aeruginosa from quorum sensing, that ability for cells to detect their neighbors and coordinate behavior as a group. By blocking quorum sensing, the researchers found, they can decrease the virulence of P. aeruginosa and its ability to form films of bacteria on surfaces, such as those inside the body.

http://firstlook.pnas.org/future-antibiotics-keep-bacteria-from-sensing-each-other/

Paper here: http://www.pnas.org/content/110/44/17981

#consensus   #organization  

via +kyle broom 
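A toy model of the mechanism described in the quote above (all parameter values are invented, not from the paper): virulence switches on only when the steady-state signal concentration, which scales with cell density, crosses a threshold, and a quorum-sensing blocker can be modeled as extra signal degradation.

```python
# Toy quorum-sensing model: cells secrete an autoinducer; at steady state
# its concentration balances production against first-order decay.
# Virulence genes switch on only above a threshold concentration.
def autoinducer_level(n_cells: int, secretion: float = 1.0,
                      decay: float = 0.1, volume: float = 1e3) -> float:
    return n_cells * secretion / (decay * volume)

def is_virulent(n_cells: int, threshold: float = 5.0,
                blocker_strength: float = 0.0) -> bool:
    # A quorum-sensing blocker is modeled here as extra signal degradation.
    level = autoinducer_level(n_cells, decay=0.1 * (1 + blocker_strength))
    return level > threshold

print(is_virulent(1))                            # False: a lone cell stays quiet
print(is_virulent(10_000))                       # True: the "team" turns virulent
print(is_virulent(10_000, blocker_strength=50))  # False: blocked despite density
```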
It works on humans, right? Paradoxically, it's the strategy of (or is put to bad use by) supremacist 'grouuups' (growup fails)
People
In his circles
1,507 people
Have him in circles
28,906 people
Work
Occupation
Internet
Places
Currently
Internet
Previously
Wildomar, CA - Riverside, CA - Urbana, IL - Normal, IL - New York, NY - Onjuku, Japan - Hong Kong, China
Story
Tagline
Robot. Made of smaller robots.
Introduction
I've written under the handle Eripsa for over a decade on various blogs and forums. Today I do most of my blogging and research at Digital Interface and on my G+ stream.

I'm interested in issues at the intersection of the mind and technology. I write and post on topics ranging from AI and robotics to the politics of digital culture.

Specific posting interests are described in more detail here and here.

_____

It is somewhat unfashionable to talk about thinkers that inform your work, as opposed to issues. But philosophers tend to map out the problem space by reference to each position's strongest defenders. So I'm going to list a series of names, not just to cite their influence on my work, but really to triangulate on what the hell it is I think I'm doing. 

Turing, Quine, Heidegger, Dan Dennett, Andy Clark, Bruce Sterling, Bruno Latour, Larry Lessig, Clay Shirky, OWS, and Google. 

______


My avatar is the symbol for Digital Culture. You can think of it as a digital twist on Anarchism, but I prefer to think of it as the @ symbol all grown up. +Kyle Broom helped with the design.

Education
  • University of Illinois at Urbana-Champaign
    Philosophy, present
  • University of California, Riverside
    Computer Science and Philosophy, 2003
Basic Information
Gender
Male
Other names
eripsa
Daniel Estrada's +1's are the things they like, agree with, or want to recommend.
Santa Fe Institute
plus.google.com

Complexity research expanding the boundaries of science

Center Camp
plus.google.com


Augmata Hive
plus.google.com

experimenting with synthetic networks

Ars Technica
plus.google.com

Serving the technologist for over 1.3141592 x 10⁻¹ centuries

Burn, media, burn! Why we destroy comics, disco records, and TVs
feeds.arstechnica.com

Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights, c

American Museum of Natural History
plus.google.com

From dinosaurs to deep space: science news from the Museum

Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
feedproxy.google.com

Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks

Honeybees may have personality
feeds.arstechnica.com

Thrill-seeking isn't limited to humans, or even to vertebrates. Honeybees also show personality traits, with some loving adventure more than

DVICE: The Internet weighs as much as a largish strawberry
dvice.com

Dvice, Powered by Syfy. The Syfy Online Network. Top Stories • Nov 02 2011. Trending topics: cold fusion • halloween • microsoft. Japan want

DVICE: Depression leads to different web surfing
dvice.com

While a lot of folks try to self-diagnose using the Internet (Web MD comes to mind), it turns out that the simple way someone uses the Inter

Greatest Speeches of the 20th Century
market.android.com

Shop Google Play on the web. Purchase and enjoy instantly on your Android phone or tablet without the hassle of syncing.

The Most Realistic Robotic Ass Ever Made
gizmodo.com

In the never-ending quest to bridge the uncanny valley, Japanese scientists have turned to one area of research that has, so far, gone ignor

Rejecting the Skeptic Identity
insecular.com

Do you identify yourself as a skeptic? Sarah Moglia, event specialist for the SSA and blogger at RantaSarah Rex prefers to describe herself

philosophy bites: Adina Roskies on Neuroscience and Free Will
philosophybites.com

Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that

Stanford Researchers Crack Captcha Code
feedproxy.google.com

A research team at Stanford University has introduced Decaptcha, a tool that decodes captchas.

Kickstarter Expects To Provide More Funding To The Arts Than NEA
idealab.talkingpointsmemo.com

NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come. O

How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
arstechnica.com

IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o

NYT: Google to sell Android-based heads-up display glasses this year
www.engadget.com

It's not the first time that rumors have surfaced of Google working on some heads-up display glasses (9 to 5 Google first raised the

A Swarm of Nano Quadrotors
www.youtube.com

Experiments performed with a team of nano quadrotors at the GRASP Lab, University of Pennsylvania. Vehicles developed by KMel Robotics.

Anarchist symbolism - Wikipedia, the free encyclopedia
en.wikipedia.org

Part of the Politics series on. Anarchism. "Circle-A" anarchy symbol. Schools of thought. Buddhist · Capitalist · Christian · Coll