> +Daniel Estrada The reason I find it important to distinguish between human artifacts and life is the apparent ongoing re-framing of technology as "other".
Technology is increasingly seen (especially by the general public and by lay-people) as an independent and autonomous pseudo-natural force, thanks to books like "What Technology Wants". I think this notion cuts against human agency in relation to our technological future: if we see technology as a natural force, then we have little impact on its ethics and shape. Its accelerating and taking us over is an obvious extension of that mindset.
Technology is not "other"; it is an extension of us, embedded in human culture and the result of human intention. To see it as anything else is to abdicate responsibility for what it does to us and our social systems. (Did you see my TEDx talk?)
So under this position, I'm wary of any erosion of the language that makes the distinction between life and technology.
// I respond:
> +Ben Bogart I'm very sympathetic to the idea that technology is an extension of human activity. I'm especially partial to Andy Clark's extended mind thesis (http://goo.gl/rcnnJx) and the relationship between our tools and our selves. So I agree, in general, that technology is an extension of human activity.
But I think it is vitally important that this is not understood as a limitation on technological autonomy. I think artifacts can be autonomous in the relevant sense while simultaneously being a tool of human activity. In fact, defending the possibility of machine autonomy in the face of anthropological considerations about technology is the focal subject of my dissertation (which I'll be defending in about 2 months time =)
The big-picture outline of my argument can be found here: http://goo.gl/KvXQqY
But for a tl;dr version of the argument, it's enough to consider that I am an autonomous agent, and I am also an extension of my parents and family, and of the broader community in which I developed. I was, in a very important sense, "created" and "designed" by these communities, and there's an important sense in which my behavior reflects on these communities. So they are justified, for instance, in being proud of my successes and disappointed in my failures and otherwise taking some degree of responsibility in evaluating the things I do.
At the same time, however, I'm an autonomous agent, responsible for my own successes and failures, and capable of acting in the world as an independent and self-constituting being. My deep causal and ethical ties to my communities of origin don't determine my behavior and don't deprive me of autonomy. I am simultaneously an extension of these communities and an autonomous agent, and reconciling these apparently conflicting aspects of my identity is, more or less, what self-consciousness is built to do.
For the very same reason, the machines we create and deploy into the world are capable of autonomous activity and self-constituting action, and therefore might deserve serious treatment as autonomous agents, all despite the fact that their creation and design is also a product of human activity. This is not a contradiction in terms, as you seem to think it is. It is part of the condition of organisms embedded in complex ecological and social environments. And it is a feature that applies to organisms of all sorts, including the nonbiological ones.
I think this is a pretty deep and important disagreement: you're insisting that we view technological change as nonnatural and under our control, while I agree with Rathenau: “Mechanization is not the result of free, conscious deliberation, expressing mankind’s ethical will. Rather it grew without being intended, or indeed even noticed. In spite of its rational and casuistic structure, it is a dumb process of nature, not one originating from choice.” http://goo.gl/x5yKu5
This is one of my favorite issues in the philosophy of technology, and one that I've done a lot of research on. So I'm happy to keep engaging someone who knows the issue well and falls on the other side =)
+Daniel Estrada Good luck on the dissertation; perhaps I will wait to respond seriously until it's finished. :) I, myself, am aiming to defend at the end of the summer.
1. I agree that human agency is not "free" (ie is constrained), but all (living) agency is constrained by the environment.
2. I agree that autonomous technologies could exist, I just think they would not be composed of electromechanical parts.
3. I think there is a problem with the notion of autonomy in a technology because a technology is developed to serve particular human utility. There is little value (beyond the philosophical enquiry of autonomy) in a technology whose autonomy does anything beyond satisfying that utility.
The utility of life is to propagate itself. There is no over-arching context of intention that specifies the utility of life. That does not mean that life is not exploited for the utility of another organism, but the difference is in the process of specification and design vs search and exploitation. I'll end with one thought:
What approach (in the following simplistic dichotomy) is better for encouraging individual and social responsibility in making the world a better place? (a) Emphasize our power as citizens in technological development through technical literacy, buying habits, usage, and political processes to intentionally push technologies in the directions that are best for us, or (b) emphasize the autonomy and power of technology and technologists to solve our problems so we don't have to make the hard choices and be responsible for the effects of our consumption and expansion-oriented behaviour?
For me, the answer is a strong (a).
// My most recent response:
> +Ben Bogart There are a lot of interesting things going on here that are worth picking up and running with. I want to emphasize again how sympathetic I am to the pragmatism of your approach.
To keep things focused, though, I'm going to take issue with this claim in particular:
> because a technology is developed to serve particular human utility
I think this is ambiguous between a few different claims, not all of which are true. I think this ambiguity is central to much of the confusion over technology, so I want to treat the claim carefully.
We can first distinguish the utility anticipated in the design of an object from the utility actually realized in specific instances of use, and note that the two might diverge arbitrarily. I can use objects in ways utterly foreign to their intended functions, and I can use objects I find lying around that are entirely natural and without any "intended" functions at all. Moreover, I can design objects to be used in ways that they never are (the chair in a design museum that has never been used for sitting, for instance).
History is riddled with technologies whose surprising growth was unexpected even by their designers ("I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943). More interestingly, I think, cognitive science is full of studies showing the spontaneous discovery of new tools in the environment through play. (see: http://goo.gl/YP4ln9)
This research seems to undermine the apparently clear thesis driving your views: "The utility of life is to propagate itself." The claim isn't so much wrong as ambiguous: what "self" is being propagated? If Andy Clark is right, that I am the collection of tools under my control, then propagating my "self" means propagating those environmental resources I exploit as tools that aid in my coordinated persistence. Propagating myself means propagating my environments and communities, because those are parts of who I am.
So technology, too, propagates itself through our activity, often without us knowing or realizing what we are doing. I don't think technology is out of our control, or that we're merely subject to the forces of nature. But we are natural systems participating in a densely connected community of complex systems, each of which has influence and puts constraints on the development of the other parts. Technology puts constraints on the development of these systems too, and not because we control it or intend it but simply because it is there as an object in the world. Whether we want it there or not.
Global climate change is a particularly striking example of how little foresight we have into the implications of our technological changes, and how little power we have to control these changes once they get going. It might give me hope to believe otherwise, but I don't find much practical benefit in false hopes.
In any case, the point of the above is to describe how little either our use or our design of technology is bound by intended purposes. My own argument (which I draw from Turing) is that there are alternative ways of understanding the role machines play in our systems that do not reduce their activity to mere tools of human use. Instead, I think some machines can be understood as genuine participants in shared social activities, and in this context we can treat them as autonomous agents contributing independently to shared social projects.
I think Adam the robot scientist is a pretty clear example: http://www.aejournal.net/content/2/1/1
But more generally I think it's clear that we're psychologically disposed to treat animated machines as autonomous participants distinct from tools. (http://goo.gl/vAvICk)
From this perspective, it's a strange kind of hubris to think that we have control over our machines in any metaphysically meaningful way, or to demand that some kind of "special" as-yet-undeveloped technology arise before we take the contributions of our machines seriously. You don't need to do anything special to be a participant in a community; you just need to integrate with the support and contributions of everyone else. I think this is demonstrated in a non-scary, non-threatening way in the Tweenbot videos:
I think I'm going to reshare these last few comments for a new post, since this has gotten interesting and might benefit from the input of others.
Commercial and industrial areas are also likely to be green on this map. The local shopping mall, an office park, a warehouse district or a factory may have their own Census Blocks. But if people don’t live there, they will be considered “uninhabited”. So it should be noted that just because a block is unoccupied does not mean it is undeveloped.
Northern Maine is conspicuously uninhabited. Despite being one of the earliest regions in North America to be settled by Europeans, the population there remains so low that large portions of the state’s interior have yet to be politically organized.
So mainstream media, even with their "did the airplane fly into a black hole!" still has fewer false claims than alternative methods?
Here's the actual conclusion: between the years of 1982 and 2002, policy models incorporating popular opinion have only slightly more predictive power than models incorporating only elite opinion. Every part of that conclusion is subtly different than "America is an oligarchy."
First, look at the time range. Politically, the years 1982 to 2002 are an anomaly: due to the permanent realignment of the Democratic South, the only years of undivided government were 1993 and 1994 -- and even during those years, the Democratic majority was constrained by its reliance on conservative Senators. During the entire period of the survey, it has been unusually difficult to get anything passed at all.
Second, look at the power actually ascribed to elites. The authors note an asymmetry between elites' ability to pass their agenda and their ability to block initiatives with mass support. Vetoes, unsurprisingly, are easier than ramrodding unpopular initiatives through Congress. But this is exactly what we ought to predict from the division of government! Considering the conditions that existed through the entire period, discussed here, the power to sway just a few officials becomes a de facto, but not de jure, veto.
With that in mind, is it possible that the divided government was engineered by elites to block popular initiatives?
No. Not really. Elites differ from popular opinion on a number of economic issues, but their partisan distribution (and their opinion distribution on most issues) closely tracks that of the middle (40th-60th percentile) quintile. They are slightly more likely to be Republican, and slightly more likely to be liberal or very liberal, but their opinion spread does not differ deeply from that of other Americans.
The right conclusion from this survey (if there is a right conclusion at all -- check the scary-low model fit!) is not that we have drifted into oligarchy, but rather that (a) there is a strong status quo bias in the United States, (b) the status quo benefits the powerful (and those whom the status quo benefits will become the powerful), and (c) divided government can inadvertently turn influence into a veto.
You are more or less describing an Oligarchy.
Oligarchy (from Greek ὀλιγαρχία (oligarkhía); from ὀλίγος (olígos), meaning "few", and ἄρχω (arkho), meaning "to rule or to command") is a form of power structure in which power effectively rests with a small number of people. These people could be distinguished by royalty, wealth, family ties, education, corporate, or military control. Such states are often controlled by a few prominent families who typically pass their influence from one generation to the next, but inheritance is not a necessary condition for the application of this term.
It seems to me like you're trying too hard to split hairs. Oligarchies control the whole world.
66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring. 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health. 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most US airspace. 53% think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them. Women are especially wary of a future in which these devices are widespread.
48% would like to ride in a driverless car, while 50% would not. 26% would like getting a brain implant to improve their memory or mental capacity, 72% would not. Just 20% would like to eat meat that was grown in a lab.
A conversation on another thread raised an interesting question about computers that I can't figure out the answer to: Is judging a Turing Test easier than, harder than, or equivalently hard to passing a Turing Test?
I figured I would throw this question out to the various computer scientists in the audience, since the answer isn't at all clear to me -- a Turing Test-passer doesn't seem to automatically be convertible into a Turing Test-judger or vice-versa -- and for the rest of you, I'll give some of the backstory of what this question means.
So, what's a Turing Test?
The Turing Test was a method proposed by Alan Turing (one of the founders of computer science) to determine if something had a human-equivalent intelligence or not. In this test, a judge tries to engage both a human and a computer in conversation. The human and computer are hidden from the judge, and the conversation is over some medium which doesn't make it obvious which is which -- say, IM -- and the judge's job is simple: to figure out which is which. Turing's idea was that to reliably pass such a test would be evidence that the computer is of human-equivalent intelligence.
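To make the setup concrete, here is a toy sketch of the blinded protocol in Python. Everything in it (the reply functions, the canned text, the naive judge) is a hypothetical illustration, not a real system; the point is just the structure: hidden parties, a shared text channel, and a forced guess.

```python
import random

# Toy sketch of the blinded Turing Test protocol: a judge converses
# with two hidden parties over text and must label which is the machine.

def human_reply(prompt):
    # Hypothetical stand-in for the hidden human participant.
    return "honestly, I'd have to think about that one."

def bot_reply(prompt):
    # Hypothetical stand-in for the machine participant.
    return "INPUT RECEIVED: " + prompt.upper()

def run_test(judge, rounds=3):
    """Hide the two parties behind anonymous labels and ask the
    judge to pick out the machine. Returns True if the judge wins."""
    parties = [("A", human_reply), ("B", bot_reply)]
    random.shuffle(parties)  # the judge can't rely on ordering
    transcripts = {label: [] for label, _ in parties}
    for _ in range(rounds):
        for label, respond in parties:
            prompt = "What did you do today?"
            transcripts[label].append((prompt, respond(prompt)))
    guess = judge(transcripts)  # judge returns "A" or "B"
    truth = next(label for label, fn in parties if fn is bot_reply)
    return guess == truth

def naive_judge(transcripts):
    """A deliberately crude judge: flag the party whose replies
    are all-caps shouting."""
    for label, turns in transcripts.items():
        if any(reply.isupper() for _, reply in turns):
            return label
    return "A"

print(run_test(naive_judge))  # prints: True
```

A real judge obviously can't rely on a giveaway like all-caps replies; the hard part, as the rest of this post argues, is that telling the parties apart in general requires exactly the kind of world knowledge the test was designed to probe.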
Today in CS, we refer to problems which require human-equivalent intelligence to solve as "AI-complete" problems; so Turing hypothesized that this test is AI-complete, and for several decades it was considered the prototypical AI-complete problem, even the definition of AI-completeness. In recent years, this has been cast into doubt as chatbots have gotten better and better at fooling people, doing everything from customer service to cybersex. However, this doubt might be real and it might not: another long-standing principle of AI research is that, whenever computers start to get good at a task that was historically considered AI, people redefine AI to be "oh, well, not that, even a computer can do it."
The reason a Turing Test is complicated is that to carry on a conversation requires a surprisingly complex understanding of the world. For example, consider the "wug test," which human children can pass starting from an extremely early age. You make up a new word, "wug," and explain what it means, then have conversations about it. In one classic example, the experimenter shows the kids a whiteboard, and rubs a sponge which he calls a "wug" across it, which (thanks to some dye) marks the board purple. Human children will spontaneously talk about "wugging" the board; but they will never say that they are "wugging" the sponge. (It turns out that this has to do with how, when we put together sentence structures, the grammar we use depends a lot on which object is being changed by the action. This is why you can "pour water into a glass" and "fill a glass with water," but never "pour a glass with water" or "fill water into a glass.")
It turns out that even resolving what pronouns refer to is AI-complete. Consider the following dialogue:
Woman: I'm leaving you.
Man: ... Who is he?
If you're a fluent English speaker, you probably had no difficulty understanding this dialogue. So tell me: who does "he" refer to in the second sentence? And what knowledge did you need in order to answer that?
(If you want to learn more about this kind of cognitive linguistics, I highly recommend Steven Pinker's The Stuff of Thought [http://www.amazon.com/The-Stuff-Thought-Language-Window/dp/0143114247] as a good layman's introduction.)
In Turing's proposal, the test was always administered by a human: the challenge, after all, was to see if a computer could be good enough to fool a human into accepting it as one as well. But given that we're getting computers which are doing a not-bad job at these tests, I'm starting to wonder: how good would a computer be at identifying other computers?
It might be easier than passing a Turing Test. It could be that a computer could do a reasonable job of driving "ordinary" conversation off the rails (that being a common way of finding weaknesses in a Turing-bot) and, once a conversation had gone far enough away from what the computer attempting to pass the test could handle, its failures would become so obvious that it would be easy to identify.
It might be harder than passing a Turing Test. It's possible that we could prove that any working Turing Test administrator could use that skill to also pass such a test -- but not every Turing Test-passing bot could be an administrator. Such a proof isn't obvious to me, but I wouldn't rule it out.
Or it might be equivalently hard: either equivalent in the practical sense, that both would require AI-completeness, or equivalent in the deeper mathematical sense, that if you had a Turing Test-passing bot you could use it to build a Turing Test-administering bot and vice-versa.
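One direction of that last reduction -- using a passer as a reference model of human conversation to score how "human" a candidate reply looks -- can be sketched in a few lines. Everything here (ToyPasser, judge_score, the word-overlap metric) is a hypothetical illustration, not a serious proposal:

```python
import re

def word_overlap(a, b):
    """Crude similarity: Jaccard overlap of the word sets."""
    wa = set(re.findall(r"[a-z']+", a.lower()))
    wb = set(re.findall(r"[a-z']+", b.lower()))
    return len(wa & wb) / max(len(wa | wb), 1)

class ToyPasser:
    """Stand-in for a Turing Test-passing bot: maps prompts to the
    kind of reply a human might plausibly give. A real passer would
    be a full conversational model, not a lookup table."""
    def reply(self, prompt):
        canned = {
            "how are you?": "pretty good, thanks, how about you?",
            "what's 2+2?": "four, why do you ask?",
        }
        return canned.get(prompt.lower(), "hmm, tell me more.")

def judge_score(passer, prompt, candidate_reply):
    """Use the passer as a reference model: a reply that diverges
    wildly from what the passer itself would say gets a low score."""
    expected = passer.reply(prompt)
    return word_overlap(expected, candidate_reply)

passer = ToyPasser()
humanlike = judge_score(passer, "How are you?", "good thanks! and you?")
robotic = judge_score(passer, "How are you?", "QUERY NOT UNDERSTOOD")
print(humanlike > robotic)  # prints: True
```

Note that this only shows the easy direction is at least conceivable; whether the construction works in general, and whether the reverse direction (judge to passer) goes through at all, is exactly the open question above.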
If there is a difference between the two, then this might prove useful: for example, if it's easier to build a judge than a test-passer, then Turing Tests could be the new CAPTCHA. (Which was the suggestion that originally sparked this whole conversation.)
And either way, this might tell us something deep about the nature of intelligence.
In a new PNAS paper, Bassler and her colleagues report the first ever molecule that stops P. aeruginosa from quorum sensing, that ability for cells to detect their neighbors and coordinate behavior as a group. By blocking quorum sensing, the researchers found, they can decrease the virulence of P. aeruginosa and its ability to form films of bacteria on surfaces, such as those inside the body.
Paper here: http://www.pnas.org/content/110/44/17981
- University of Illinois at Urbana-Champaign, Philosophy, present
- University of California, Riverside, Computer Science and Philosophy, 2003
Burn, media, burn! Why we destroy comics, disco records, and TVs
Americans love their media, but they also love to bash it, and not just figuratively. Inside the modern history of disco demolition nights.
Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks
DVICE: The Internet weighs as much as a largish strawberry
philosophy bites: Adina Roskies on Neuroscience and Free Will
Recent research in neuroscience, following on from the pioneering work of Benjamin Libet, seems to point to a disconcerting conclusion about free will.
Kickstarter Expects To Provide More Funding To The Arts Than NEA
NEW YORK — Kickstarter is having an amazing year, even by the standards of other white-hot Web startup companies, and more is yet to come.
How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future.
NYT: Google to sell Android-based heads-up display glasses this year
It's not the first time that rumors have surfaced of Google working on heads-up display glasses.
Anarchist symbolism - Wikipedia, the free encyclopedia