// This line of thinking traces to Dreyfus' What Computers Can't Do, and specifically to his reading of Heidegger's care structure in Being and Time. Dreyfus' views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus' critiques are partly responsible for the terminological shift toward "machine learning" over AI, and also for the shift in focus to robotics and embodied cognition throughout the 90s.


But Dreyfus' critiques don't really have purchase anymore, and I'm surprised to see Sterling dusting them off. It's hard to say that a driverless car doesn't "care" about the conditions on the road; literally all its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert.

I think +Bruce Sterling underappreciates just how well we do understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in technological systems.

But this isn't a rule for all technology. Software in particular isn't subject to such design constraints. This is why we see neural nets making huge advances not just in vision and object recognition, but also in interpolation, natural language processing, and a host of other real AI puzzles that had gone unsolved for decades. We're living in a second golden age of AI, releasing charming bots of all shapes and sizes into the circus of social media. And in this zoo they are already passing for human (http://goo.gl/fSr1Qy) and having a measurable influence on social trends and events.

Twitter bots care about the same things we do. They flock to Bieber and Gaga, they have partisan allegiances in all the hot-button political debates, and they curate audience engagement with all the gusto of a teen taking a selfie. When these bots pass for human, it's because their memetic flocking is indistinguishable from our own. 

If these bots don't care, none of us do. 
Bruce Sterling in http://uxpamagazine.org/interview-with-bruce-sterling/: "Robots just don’t want to live. They’re inventions, not creatures; they don’t have any appetites or enthusiasms. I don’t think they’d maintain themselves very long without our relentlessly pushing them uphill against their own lifeless entropy. They’re just not entities in the same sense that we are entities; they don’t have much skin in our game. They don’t care and they can’t be bothered. We don’t yet understand how and why we ourselves care and bother, so we’d be hard put to install that capacity inside our robot vacuum cleaners."

Spot on. People who think intelligence will lead to motivation are confused. Genes have motivation without intelligence. The two things are almost orthogonal. Humans have very complex motivation.