Profile

Matthias Blume
Works at Google
Attended Princeton University
Lives in Chicago, IL
738 followers · 486,674 views

Stream

Matthias Blume

Shared publicly

Europe, quo vadis?
 
In a disappointing decision yesterday, the European Court of Human Rights has ruled that an online platform can be held liable for the content of user-submitted comments, even before a complaint has been made. The ramifications for European users' freedom of expression online are serious.
The future for online discussion platforms in Europe is looking cloudy following yesterday's ruling of the European Court of Human Rights in the case of Delfi AS v. Estonia. In a disappointing decision, the court affirmed that Estonian courts were entitled to hold an online news portal liable in defamation for comments submitted anonymously by readers.

Matthias Blume

Shared publicly

Belated discovery, but still amusing:

http://web.volkswagen.co.jp/april1/2015/
April 1, 2015: From Volkswagen, a new model for a new "bus" lifestyle? | Volkswagen 2015 April Fools' site

That was a great marketing gag from VW.

Matthias Blume

Shared publicly

If we had their snow-clearing equipment, things would have gone much better in Boston this winter.

Heck, we could've used whatever they have to keep the Donner Pass clear.  Lord knows they didn't need it this year.

Matthias Blume

Shared publicly

Congratulations to Tokyo on being named the world's safest city. Also kudos to Osaka for placing third on the list! http://bit.ly/16mmZgB
The world’s most populated city is also the safest, according to a study.
Toronto better city to live in than Montreal?  Bah, humbug.

Matthias Blume

Shared publicly

This article has been bugging me ever since I saw it a few days ago. We know that self-driving cars will have to solve real-life "trolley problems": those favorite hypotheticals of Philosophy 101 classes wherein you have to choose between saving, say, one person's life or five, or saving five people's lives by pushing another person off a bridge, and so on. And ethicists (and even more so, the media) have spent a lot of time talking about how impossible it will be to ever trust computers with such decisions, and why, therefore, autonomous machines are frightening.

What bugs me about this is that we make these kinds of decisions all the time. There are plenty of concrete, real-world cases that actually happen: do you swerve into a tree rather than hit a pedestrian? (That greatly increases the risk to your own life -- and your passengers' -- to save another person.)

I think that part of the reason that we're so nervous about computerizing these ethical decisions is not so much that they're hard, as that doing this would require us to be very explicit about how we want these decisions made -- and people tend to talk around that very explicit decision, because when they do, it tends to reveal that their actual preferences aren't the same as the ones they want their neighbors to think they have.

For example: I suspect that most people, if driving alone in a vehicle, will go to fairly significant lengths to avoid hitting a pedestrian, including putting themselves at risk by hitting a tree or running into a ditch. I suspect that if the pedestrian is pushing a stroller with a baby, they'll feel even more strongly this way. But as soon as you have passengers in the car, things change: what if it's your spouse? Your children? What if you don't particularly like your spouse?

Or we can phrase it the way the headline below does: "Will your self-driving car be programmed to kill you if it means saving more strangers?" This phrasing is deliberately chosen to trigger revulsion, and if I phrase it instead the way I did above -- in terms of running into a tree to avoid a pedestrian -- your answer might be different. The phrasing in the headline, on the other hand, seems to tap into a fear of loss of autonomy, which I often hear in other discussions of the future of cars. Here's a place where a decision which you normally make -- based on secret factors which only you, in your heart, know, and which nobody else will ever know for sure -- is instead going to be made by someone else, and not necessarily to your advantage. We all suspect that it would sometimes make that decision in a way that we, deciding in secret (and with the plausible deniability that comes from how hard it is to control a car during an emergency), might have made quite differently.

Oddly, people's reactions seem different when such decisions are made by a human taxi driver, even though there's the same loss of autonomy -- and now, instead of a rule you can understand, you're subject to the driver's secret decisions.

I suspect that the truth is this:

Most people would go to greater lengths than they expect to save a life that they in some way cared about.

Most people would go to greater lengths than they are willing to admit to save their own lives: their actual balance, in the clinch, between protecting themselves and protecting others isn't the one they say it is. And most people secretly suspect that this is true, which is why the notion of the car "being programmed to kill you" in order to save other people's lives -- taking away that last chance to change your mind -- is frightening.

Most people's calculus about the lives in question is actually fairly complex, and may vary from day to day. But people's immediate conscious thoughts -- who they're happy with, who they're mad at -- may not accurately reflect what they would end up doing.

And so what's frightening about this isn't that the decision would be made by a third party, but that even if we ourselves individually made the decision, setting the knobs and dials of our car's Ethics-O-Meter every morning, we would be forcing ourselves to explicitly state what we really wanted to happen, and commit ourselves, staking our own lives and those of others on it. The opportunity to have a private calculus of life and death would go away.

As a side note, for cars this is actually less relevant, because there are very few cases in which you would have to choose between hitting a pedestrian and crashing into a tree that didn't arise from driver inattention or other unsafe behaviors leading to loss of vehicle control -- exactly the sorts of things that self-driving cars eliminate. So these mortal cases would be vanishingly rare compared with how often they arise in our daily lives, which is precisely where the advantage of self-driving cars comes from.

For robotic weapons such as armed drones, of course, these questions happen all the time. But in that case, we have a simple ethical answer as well: if you program a drone to kill everyone matching a certain pattern in a certain area, and it does so, then the moral fault lies with the person who launched it; the device may be more complex (and trigger our subconscious identification of it as being a "sort-of animate entity," as our minds tend to do), but ultimately it's no more a moral or ethical decision agent than a spear that we've thrown at someone, once it's left our hand and is on its mortal flight.

With the cars, the choice of the programming of ethics is the point at which these decisions are made. This programming may be erroneous, or it may fail in circumstances beyond those which were originally foreseen (and what planning for life and death doesn't?), but ultimately, ethical programming is just like any other kind of programming: you tell it you want X, and it will deliver X for you. If X was not what you really wanted, that's because you were dishonest with the computer.

The real challenge is this: if we agree on a standard ethical programming for cars, we have to face the fact that we don't all want the same thing. If we each program our own car's ethical bounds, then we each bear that individual responsibility. And in either case, these cars impose the practical requirement to be completely explicit and precise about what we do, and don't, want to happen when faced with a real-life trolley problem.
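
To make that demand for explicitness concrete, here is a deliberately toy sketch, in Python, of what "setting the knobs and dials of the Ethics-O-Meter" might literally look like. Every name and number below is invented for illustration; no real vehicle software exposes anything like this.

    # Hypothetical sketch: invented names and weights, showing only that
    # programming a car's ethics means writing the trade-offs down.
    from dataclasses import dataclass

    @dataclass
    class EthicsPolicy:
        occupant_weight: float    # value placed on people inside the car
        pedestrian_weight: float  # value placed on the person outside it
        max_self_risk: float      # extra occupant risk the car may accept

    def choose_maneuver(p: EthicsPolicy, swerve_risk: float, stay_risk: float) -> str:
        """Return 'swerve' (into the tree) or 'stay' (toward the pedestrian),
        whichever carries the lower weighted expected harm under the policy."""
        harm_if_swerve = p.occupant_weight * swerve_risk
        harm_if_stay = p.pedestrian_weight * stay_risk
        if harm_if_swerve < harm_if_stay and swerve_risk <= p.max_self_risk:
            return "swerve"
        return "stay"

    # The uncomfortable part is not the code but the constants: whoever
    # sets them has stated, explicitly and in advance, how much of their
    # own risk a stranger's life is worth.
    print(choose_maneuver(EthicsPolicy(1.0, 1.0, 0.5),
                          swerve_risk=0.3, stay_risk=0.9))  # prints "swerve"

Nothing about the arithmetic is difficult; the discomfort lies entirely in having to commit to the three numbers.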
The computer brains inside autonomous vehicles will be fast enough to make life-or-death decisions. But should they? A bioethicist weighs in on a thorny problem of the dawning robot age.

Matthias Blume

Shared publicly

After all the unbelievable highs and lows in his life, what an unexpected, sad, and unnecessary way to go...
http://www.nytimes.com/2015/05/25/science/john-f-nash-jr-mathematician-whose-life-story-inspired-a-beautiful-mind-dies-at-86.html?_r=0
The narrative of Mr. Nash’s brilliant rise, the lost years of severe mental illness, and the eventual awarding of a Nobel Prize captured the public mind.

Matthias Blume

Shared publicly

"In a revelation that defies all reason, the New York attorney general’s office found that four out of five [1] of the most popular herbal supplements sold at those major retailers contained precisely zero of the ingredients listed on their labels. Instead, the products, which are typically used as alternative medicine treatments, were found to contain cheap fillers such as rice, a common house plant called dracaena and (weirdly) asparagus."

+Vocativ 

Source: http://www.vocativ.com/culture/health-culture/supplements/
[1] http://www.nytimes.com/interactive/2015/02/02/health/herbal_supplement_letters.html?_r=0
#health   

It's homeopathic supplements!

Matthias Blume

Shared publicly

I still had a suitcase in Berlin...

This photo confuses me. I spot 6 pairs of twins and 4 sets of quadruplets. That can't be a coincidence. What was going on there?
(There's only one Annegret -- I suppose she wasn't allowed to move?)

Matthias Blume

Shared publicly

Hey!  Fellow lemmings!  Did you know that there is a Moral Case for walking towards that cliff?  Because ... there really is no cliff.  You know...
People
Have him in circles
738 people
Edward Kmett, George Giorgidze, Bernd Rubel, Greg Frascadore, Benjamin Pierce, Sila Fernandus, Yuika Natsume, chin yit tay, Panya Yangsri
Education
  • Princeton University
    Computer Science, 1992 - 1997
  • Humboldt University of Berlin
    Mathematics/CS, 1986 - 1990
Basic Information
Gender
Male
Work
Occupation
Software Engineer at Google
Employment
  • Google
    Software Engineer, 2009 - present
  • Toyota Technological Institute at Chicago
    Assistant Professor, 2003 - 2009
  • Bell Labs
    Member of Technical Staff, 2001 - 2003
  • Kyoto University, RIMS
    Special Foreign Researcher, 1998 - 2000
  • Princeton University
    Post-doctoral Researcher and Lecturer, 1997 - 1998
Places
Currently
Chicago, IL
Previously
Bridgewater, NJ - Kyoto, Japan - Princeton, NJ - Berlin, Germany - Chemnitz, Germany - Dresden, Germany - Tokyo, Japan