Today, we learned that we will have a new administration in Washington that promises a great deal of change. [...] As I saw this afternoon, students have wrapped the six great columns in Lobby 7 with huge sheets of paper. Three ask that you "Share Your Hopes," three to "Share Your Fears." They are covered with handwritten responses. People are lingering to read and add their own. Many say they fear for the future of the country, some for their personal safety, for their civil rights or that "my values no longer matter." Others fear that their peers will never take the time to understand why they voted for the winner. One hope struck me in particular: "I hope to understand the 48% of Americans who disagree with me." Nearly all the writers express some kind of pain. Yet together they have created a wonderful example of mutual respect and civil dialogue. Whatever may change in Washington, I believe there is great power in remembering that it will not change the values and the mission that unite us. [...] we do some of our best work when we turn outward to the world. Let's continue to do that now. And, following our students' lead, let us find ways to listen to one another – with sympathy, humility, decency, respect and kindness.
If a decision maker considers action a1, and rejects action a2, then, in this context, if outcome j happens, they not only experience the "choiceless" utility associated with state[a1,j] (i.e. the utility they'd experience if this outcome were forced upon them); they also experience an emotion of regret or rejoicing that depends on how this compares to the choiceless utility they would have gotten from state[a2,j]. In one formulation, if delta is the difference between the two choiceless utilities, they additionally experience regret/rejoicing R(delta), for some increasing function R.
The contextually appropriate utility, then, is the sum of the choiceless utility and R(delta). For making expected-value-type decisions with this contextually modified utility, what ends up mattering is Q(delta) = delta + R(delta) - R(-delta), which captures the net comparison between the two actions in a given state, and is antisymmetric around 0.
Apparently this can explain many of Kahneman and Tversky's observations of behavior inconsistent with ordinary expected-utility maximization, provided that Q is convex on the positive real line. The cost is that the resulting preference is not generally transitive (in particular, it need not yield a total order on actions), because it's a contextual evaluation. Interesting stuff.
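A quick numeric sketch of the intransitivity point, not from the source: the payoff matrix and the choice Q(x) = x^3 (one antisymmetric function that is convex on the positive reals) are my own illustrative assumptions.

```python
# Regret-theory pairwise comparison: action a is preferred to action b
# when the probability-weighted sum of Q(delta) over outcomes is positive,
# where delta is the difference in choiceless utilities in each outcome.

def q(delta):
    # One antisymmetric choice of Q that is convex on the positive reals.
    return delta ** 3

def score(a, b, probs):
    # Positive => a preferred to b; negative => b preferred to a.
    return sum(p * q(x - y) for p, x, y in zip(probs, a, b))

# Three equiprobable outcomes; payoffs chosen so the comparisons cycle.
probs = [1/3, 1/3, 1/3]
A = (6, 0, 3)
B = (3, 6, 0)
C = (0, 3, 6)

print(score(A, B, probs))  # -54.0 -> B preferred to A
print(score(B, C, probs))  # -54.0 -> C preferred to B
print(score(C, A, probs))  # -54.0 -> A preferred to C: a cycle
```

Each comparison works out to (2*Q(3) - Q(6))/3, which is negative precisely because convexity makes Q(6) > 2*Q(3) — so every action loses to the next one around the cycle, and no total order exists.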
- Senior Research Scientist; Tech Lead; Software Engineer, 2012 - 2014
- Harvard Medical School, Postdoctoral Fellow, 2011 - 2012
- Massachusetts Institute of Technology, Research Assistant, 2005 - 2011
- Software Engineering Intern, 2007
- Sorenson Molecular Genealogy Foundation, Director of Bioinformatics, 2004 - 2005
- Singularity University
- Harvard Beijing Academy
The unrecognised scientist behind the conquest of Mount Everest
Edmund Hillary got fame but the scientist who made it possible missed out.
Solar-Powered Camel Clinics Carry Medicine Across the Desert
Kenya's camels recently started sporting some unusual apparel: eco-friendly refrigerators! Some of the African country's camels are carrying
Tamlin Manor by Alicia J. Walker - on Kindle, Audible and Paperback
A book on Kindle, Paperback and Audible
Space Shuttle Endeavour Exclusive: A Timelapse of the Final Ride | Light...
350,000 photos shot over the course of 6 days reveal an exclusive look at the Shuttle Endeavour's final journey through Los Angeles.
Scientists catch boa constrictor eating a howler monkey (photos)
In a world first, scientists have captured images and video of a boa constrictor attacking and devouring whole a female howler monkey, one of