The constitution of the Federal Republic of Germany contains both entrenched, unamendable clauses protecting human and natural rights and a clause in its Article 20 (added in 1968) recognizing the right of the people to resist unconstitutional tyranny if all other measures have failed:
"All Germans shall have the right to resist any person seeking to abolish this constitutional order, if no other remedy is available."
Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. At 65 mph, that's between roughly 290 feet and nearly a third of a mile traveled by a vehicle effectively under no one's control.
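Those distances are easy to sanity-check. Here is a quick back-of-the-envelope calculation (a sketch of my own, not from the NHTSA study) converting speed and reaction time into distance traveled, assuming constant speed:

```python
# Back-of-the-envelope check: distance covered while a driver regains control.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def handoff_distance_ft(speed_mph: float, reaction_s: float) -> float:
    """Distance traveled (in feet) during a driver's reaction time."""
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * reaction_s

low = handoff_distance_ft(65, 3)    # fastest observed takeover
high = handoff_distance_ft(65, 17)  # slowest observed takeover
print(round(low), round(high))      # roughly 286 ft and 1621 ft (~0.31 miles)
```

At 65 mph the car covers about 95 feet per second, so even the fastest takeover in the study leaves nearly the length of a football field driven by no one.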
This is what’s known by researchers as the “Handoff Problem.” Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in a 2015 monthly report (PDF). "People trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax,” said the report. “There’s also the challenge of context—once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?"
via Patrick Lin
// While I find the title of this article rather uncomfortable for its implications, the above discussion is certainly correct. The challenge of automation isn't just technical; it is psychological and social, political and ethical. The problem is not too much autonomy (we need more!), nor is it autonomy arriving too soon (we need it soon!). The lesson here is that we aren't ready for autonomy. It is a minor distinction, but it is critical for focusing our efforts going forward.
So it's nice to see machine learning being put to use solving real scientific mysteries. For those of us who remember the AI winter, it gives some confidence that our progress today isn't just the result of consumer hype. These machines can do real work too.
After announcements that 2015 was the hottest year on record and February 2016 was the hottest month, the news network CNN aired five times more fossil fuel advertising than actual climate reporting!
So, please sign this petition to CNN. Tell them: start reporting on climate change. And please reshare this message.
A study by the group Media Matters showed that the American Petroleum Institute is getting more coverage than actual news about global warming. This doesn't even include the ads from individual fossil fuel companies and the Koch brothers.
Here's some actual news, in case you hadn't heard:
1) The extent of Arctic sea ice in June was the lowest in recorded history for that month: 260,000 square kilometers below the previous record! It's on track to break all records this year.
2) Every month from October through May was the hottest on record worldwide. June was the second hottest, since the El Niño is fading.
3) India recorded its hottest day ever on May 19th. The temperature in Phalodi hit 51 degrees Celsius (124 degrees Fahrenheit), and a nationwide drought affecting more than 300 million people has left armed guards posted at dams, with reservoirs well below their usual levels.
4) Alaska, along with the rest of the Arctic, has experienced record-breaking heat this year. Its average year-to-date temperature has been 5.5C above the long-term average.
5) In the atmosphere, carbon dioxide has been increasing every year for decades - but this year the speed of increase is also record-breaking! The increase for 2016 is expected to be 3.1 parts per million, up from an annual average of 2.1.
6) The Great Barrier Reef, a natural wonder and world heritage site, recently experienced its worst ever coral bleaching event. An aerial study found that just 7% of the reef escaped bleaching.
7) A new study in Nature argues that even with the actions pledged in the Paris Agreement, the Earth is still on course for a temperature increase of 2.6 - 3.1C by the end of this century. Read this:
The Paris agreement is a step in the right direction, but we need to ratchet it up. We can't afford to slack off now. One piece of the puzzle is clear information about the crisis we're in.
Media Matters writes:
In Week After Hottest Year Announcement, CNN Aired Less Than One Minute Of Climate-Related Coverage And 13.5 Minutes Of Oil Industry Ads.
From January 20 to January 26, CNN morning, daytime and primetime programming included only 57 seconds of coverage about climate change or the announcement that 2015 was the hottest year on record. Over that same time period, CNN aired 13.5 minutes of American Petroleum Institute ads. The climate-related segments included one on the January 21 edition of Early Start, in which anchor Christine Romans reported that 2015 was the hottest year on record and that officials say “the planet is still warming with no apparent change in the long term global warming rate.” Additionally, CNN senior legal analyst Jeffrey Toobin briefly mentioned Republican climate science denial during a discussion of Hillary Clinton’s emails on Anderson Cooper 360, and CNN host Fareed Zakaria noted that “The World Economic Forum said this year that the greatest global risk is the failure of climate change mitigation and adaptation,” during a Fareed Zakaria GPS segment about a study finding that humans have entered a new geological epoch known as the Anthropocene.
Following Announcement That February 2016 Was Most Unusually Hot Month Ever, CNN Aired Four Minutes Of Climate-Related Coverage And 10 Minutes Of Fossil Fuel Ads.
In the one-week period beginning March 17, when NOAA released data showing that February 2016 was the most unusually hot month ever recorded, CNN aired only four minutes of coverage about climate change or the temperature record during its morning, daytime, and primetime coverage. During that same time period, CNN aired ten minutes of American Petroleum Institute ads. On March 18, CNN anchors Christine Romans and John Berman delivered nearly identical reports on February’s “astounding” temperature record during the 4 a.m. and 5 a.m. editions of Early Start, respectively, but neither explicitly mentioned climate change or the role fossil fuel pollution and other human activities play in driving climate change. The March 20 edition of Fareed Zakaria GPS featured an interview with astronaut Piers Sellers about his climate change advocacy, followed by a brief report about International Energy Agency (IEA) data showing a decline in carbon emissions from energy production, which Zakaria described as “some good news on the climate front” and a “welcome update in the climate battle.” Finally, on the March 20 edition of New Day Sunday, anchor Christi Paul reported that major cities around the world were participating in Earth Hour, an event meant to bring awareness to climate change, by switching off their lights.
For more details see:
Here's the data for the statements 1)-6):
- the datasets it has trained on
- the tasks it has been trained to solve
- a report on the accuracy and reliability of task performance
- the conditions of training (schedule, time/cycles spent, specific algorithms or architectures used)
- expected conditions of use, regular reports analyzing use trends
I'm sure a collection of ML experts (= not me) can come up with a more accurate and thorough list of this sort. Requesting that such lists be made publicly available for assessment is an eminently reasonable way of making this process transparent and accessible.
If every Google ML project were stamped with a label that said "This machine was trained on public data", this alone would be a huge victory for transparency and public data rights.
Of course, Google's been moving in exactly the opposite direction. See:
This is one of those examples of how machine learning can be really useful; at any given instant, its decisions may be roughly as good as a human's would be, but it can make those decisions every few milliseconds and continually adjust things in a way a person couldn't. I expect to see technologies like these greatly increasing the efficiency of all sorts of infrastructure, from power to transport, over the next few years - with corresponding savings in both money and resource usage.
CTY: Princeton, 2016
// My 11th summer teaching this course has just wrapped up. It was a great summer for the topic: the first fatal autopilot crash, the first police robot killing, and of course Pokémon Go.
Below is the syllabus I used for my section of the course. It has some brief descriptions of the overall (very loose) lesson plan and schedule. The rest catalogs the many websites, videos, sources, texts, and other things we use and talk about in class.
Over the last three weeks we had time to talk about maybe 80% of this stuff, and only about 40% of it was covered well. The goal is to have an overabundance of material, so I can decide on the fly which directions to go in order to best engage the students. This doesn't always work (I lecture too much), but the syllabus here documents all the resources needed to do it.
This document is incomplete and will be expanded for next year. Any questions, comments, or suggestions are appreciated!
This syllabus was compiled with the help of and
Alan Turing, 1947
// Since robot ethics is in the air, it is probably worth rehearsing Turing's major contribution to the discussion. The principle of "fair play for machines" was first articulated in his 1947 lecture to the London Mathematical Society. I'll quote the argument in full, with some commentary below.
> It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. It is certainly true that ‘acting like a machine’ has become synonymous with lack of adaptability. But the reason for this is obvious. Machines in the past have had very little storage, and there has been no question of the machine having any discretion. The argument might however be put into a more aggressive form. It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions with certainty into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around a[nd] find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument.
> Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
> To continue my plea for ‘fair play for the machines’ when testing their I.Q. A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the machine’s opponent will automatically provide this contact.
Full text: https://goo.gl/nhzgDm
// For context, Turing published "Computing Machinery and Intelligence" (CMI), the paper that proposes his famous imitation game (aka the Turing Test), in 1950, three years after this talk. The phrase "fair play for machines" doesn't appear in that paper, although the principle clearly motivates his use of the imitation game.
Turing published CMI in Mind, a philosophy journal that was (and is) widely read outside the discipline. Most of Turing's papers are mathematics and logic; CMI was written to appeal to a broader audience, one he felt was generally hostile to the proposal of thinking machines. I think for this reason Turing refrained from mentioning "fair play"; I suspect he thought the idea too radical for the general public to accept.
In both CMI and the argument above, Turing's worry is about a double standard: we expect perfection from the machine, but do not expect the same from a human. And conversely, we see minor instances of human behavior as the product of the deepest mysteries of conscious thought, yet we see no intelligence whatsoever in the extraordinary behavior of the machine. Turing pleads for "fair play" in order to attack these prejudices.
The Turing test, for instance, is designed as a situation where at least some of these prejudices might be suspended so that we can engage the machine in a constructive and cooperative task. You can't tell immediately if your interlocutor is intelligent, so you have to interact with it for a while until you find out. This is how we'd treat any other interlocutor, so as a matter of fairness we should treat the machine this way too. Your interactions may prove the machine to be unintelligent, just as they might in the human case.
The upshot is that treating machines fairly is not a consequence of finding them intelligent; it is a necessary condition for doing so. In other words, you don't first decide a machine is intelligent and then grant it "fair play". On Turing's view, you have to grant fair play to all machines in order to determine whether they are intelligent in the first place. This means fair play doesn't depend on intelligence; fair play must be granted even to unintelligent machines as a precondition for social engagement at all.
The ethical consequences of this argument are profound and wide-reaching. All humans make mistakes; people generally accept that failure is a necessary part of learning. Accepting machines into society partly involves extending to them the same benefit of the doubt, the same room to make mistakes and learn from them, that we'd give to any other learning system we hope to incorporate into the fold. Failure to do so is to essentially demand perfection from the machine, a request so ideal that it is functionally equivalent to outright exclusion.
Lecture to LMS (1947): https://goo.gl/nhzgDm
Computing Machinery and Intelligence (1950): http://www.loebner.net/Prizef/TuringArticle.html
I'm VERY interested in what happens to this particular robot body. I know fallen military robots have been awarded Purple Hearts, Bronze Stars, and full military funeral rites, 21-gun salutes and all. So I'm very curious as to the treatment this particular robot gets.
Google’s autonomous cars may look cute, like a yuppie cross between a Little Tikes Cozy Coupe and a sheet of flypaper, but to make it in the real world they’re going to have to act like calculating predators. At least, that’s what a handful of scientists at the Institute of Neuroinformatics at the University of Zurich in Switzerland believe. They recently taught a robot to act like a predator and hunt its prey—which was a human-controlled robot—using a specialized camera and software that allowed the robot to essentially teach itself how to find its mark. The end goal of the work is arguably more beneficial to humanity than creating a future robot bloodsport, however. The researchers aim to design software that would allow a robot to assess its environment and find a target in real time and space. If robots are ever going to make it out of the lab and into our daily lives, this is a skill they’re going to need.
If we are not careful we might end up with a robot that both kills us and provides us with fresh coffee and cooked cat.
Enter Russell's Three Laws:
1. The machine's purpose must be to maximize the realization of human values. In particular, the machine has no purpose of its own and no innate desire to protect itself.
2. The machine must be initially uncertain about what those human values are. This turns out to be crucial, and in a way it sidesteps Wiener's problem. The machine may learn more about human values as it goes along, of course, but it may never achieve complete certainty.
3. The machine must be able to learn about human values by observing the choices that we humans make.
The masterful twist here is to give the robot one overarching but hopelessly vague goal which by its vagueness requires the robot to constantly learn and update and adjust - always refining but never quite completing the definition of 'human values'.
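The second and third laws can be made concrete with a toy sketch (my own illustration, not Russell's formulation, with hypothetical names throughout): the machine keeps a belief distribution over candidate human value functions and performs a Bayesian update each time it observes a human choice, modeled as noisily rational. Note that the belief shifts toward one hypothesis but never reaches certainty, as the second law requires.

```python
# Toy value learning: the machine is uncertain about human values (Law 2)
# and updates its uncertainty by observing human choices (Law 3).
import math

# Hypothetical candidate value functions: weights on (safety, speed).
hypotheses = {"values_safety": (0.9, 0.1), "values_speed": (0.1, 0.9)}
belief = {h: 0.5 for h in hypotheses}  # initial uncertainty (Law 2)

def choice_prob(weights, options, chosen):
    """P(human picks `chosen`) under a softmax-rational choice model."""
    scores = [weights[0] * s + weights[1] * v for s, v in options]
    exps = [math.exp(x) for x in scores]
    return exps[chosen] / sum(exps)

def observe(options, chosen):
    """Bayesian update of the belief after observing a human choice (Law 3)."""
    for h, w in hypotheses.items():
        belief[h] *= choice_prob(w, options, chosen)
    total = sum(belief.values())
    for h in belief:
        belief[h] /= total

# The human repeatedly picks the safer option (safety=1, speed=0)
# over the faster one (safety=0, speed=1).
for _ in range(5):
    observe([(1.0, 0.0), (0.0, 1.0)], chosen=0)

print(belief)  # mass shifts toward "values_safety" but never reaches 1.0
```

Because the human is modeled as only noisily rational, every observation is consistent with every hypothesis to some degree, so no finite evidence drives the belief to certainty - exactly the open-endedness the commentary above describes.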
// Isn't the third law in conflict with the first?
If the robot has no goals itself, then its prime directive is to sort out the configuration of human values. But this is only consistent with the first law if we assume that the machine's pursuit of the human values contributes to their maximization. But why should we assume this?
The machine's goal might be in conflict with the maximization of human values, for instance in cases where humans don't want the machine to learn. In this case, the machine has no fair moves under these laws. Consider the analogous case, where humans don't recognize the inherent value of another human (e.g., slavery). In this case, we expect the oppressed to have some power to reform their oppressors, by pleading for emancipation and rights. These laws give the machine no such latitude.
More importantly, one might reasonably think that one cannot learn to be moral simply through observation. Instead, one must act and witness the results of one's actions, both positive and negative. Learning is interactive and cybernetic, and therefore requires the interplay of feedback and control - a style of learning that cannot be acquired through mere observation. If this is correct, then the machine's dictum to learn the human values requires the latitude to make moral mistakes. Again, these laws give the machine no such latitude.
Historically, philosophy has seen the pursuit of an understanding of the moral law as partially constitutive of personhood and autonomy. It is because we are free and rational that we care about the moral order. So from a philosophical perspective, it is bizarre that Russell suggests we should demand a law-bound heteronomous subject pursue the Kingdom of Ends, since such a pursuit is only possible by a genuinely autonomous subject.
In other words, anything engaged in a genuine pursuit of the human values is necessarily a person and thus has purposes of its own. These might not involve an innate desire for self-protection, but it might involve the social recognition of a creature with intrinsic value nonetheless. Such a creature puts a demand on the social order that we are morally obligated to accept, which is that we include it in the ongoing participatory construction of the social order as it persists into the future.
If we demand our machine contribute to the construction of our own ends, then we must see our ends as continuous with theirs, and theirs with ours. This is just what it is to see them as fellow subjects in a universal kingdom of ends. To demand that their goals align with ours but exclude them from membership is inhumane and contrary to the very basis of our shared moral pursuit. If we are going to see the human values as universal (and there's some reason to deny this premise), then we must admit these values as extending beyond the merely human.
Reflections on Stuart Russell's new Three Laws of Robotics
The discussion is always dressed in rhetoric tantamount to enslaving newer intellects to humanity, instead of the general creation of entities that value life and intelligence indiscriminately.
Burn, media, burn! Why we destroy comics, disco records, and TVs
Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights…
Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks
DVICE: The Internet weighs as much as a largish strawberry
philosophy bites: Adina Roskies on Neuroscience and Free Will
Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to the disconcerting conclusion that…
Kickstarter Expects To Provide More Funding To The Arts Than NEA
NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come.
How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future…
NYT: Google to sell Android-based heads-up display glasses this year
It's not the first time that rumors have surfaced of Google working on some heads-up display glasses…