Definitely agree with Eric Horvitz's take in this Q&A on AI. Three highlights in particular:

"We’ve made little progress on some core capabilities that we take for granted in people. For example, researchers are still baffled about how babies and toddlers learn so much with such ease, simply by observing and engaging in the open world. And we don’t understand yet how to endow AI systems with the kind of commonsense reasoning that we take for granted and depend on in our daily lives."

"Machine learning and reasoning to help doctors to understand patient outcomes—in advance of poor outcomes. There’s a great deal of low-hanging fruit where even today’s AI technologies are well positioned to help. Sticking with healthcare for a bit, a recent study showed that nearly 1,000 people per day are dying in the US because of preventable errors being made in hospitals. I believe that AI technologies could be employed to provide new kinds of safety nets, via error detection, alerting, and decision support, that could save hundreds of thousands of lives per year."

"A couple of specific concerns ... AI technologies in military applications ... It’s not hard to envision how errors and misjudgments in AI systems—relied upon for fast-paced assessments and actions—might lead to new kinds of instabilities, and to imagine how undesired hostilities might be sparked. Beyond accidents, one can imagine how knowledge of systems can lead to deliberate attempts by parties to spoof systems on one or more sides to spark hostilities. In another area of challenge, I’ve been concerned about attempts to leverage AI in new ways to influence the beliefs and actions of people via the use of AI in new, powerful technologies aimed at persuasion. The concern is that people will harness AI methods to generate personalized sequences of information over time to shift people’s beliefs. Used on a wide scale, such systems could be used to influence voting and elections."