and I got about 25 trick-or-treaters, but we were provisioned for about 250. Oops.
I just read an essay that changed my mind about a few things, and since that happens rarely, I figure it must be pretty strong stuff, so I'll re-link it here: http://raikoth.net/libertarian.html
Particular points that changed my mind:
The example in 2.6. I'd never really thought about government intervention as a way of forcing people (in this case, the fish farmers) to internalize costs that had been externalized (in this case, the pollution). In general, I find this game-theoretic argument very persuasive.
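To make the internalization point concrete for myself, here's a toy sketch (the numbers and the specific setup are my own invention, not the essay's): each farm can filter its waste at a private cost, or pollute and impose a small cost on every farm on the lake. Polluting is individually cheaper, yet collectively much worse, and a tax equal to the damage done to *others* flips the individual incentive.

```python
# Toy model of the fish-farmer externality (all numbers are made up).
N = 10             # number of farms on the lake (assumption)
FILTER_COST = 300  # private cost of filtering your own waste (assumption)
DAMAGE_EACH = 100  # damage one polluting farm inflicts on EACH farm (assumption)

def private_cost(filters: bool, tax: int = 0) -> int:
    """Cost to one farmer of their own choice, ignoring others' pollution."""
    if filters:
        return FILTER_COST
    # Polluting: you bear only your own share of the damage, plus any tax.
    return DAMAGE_EACH + tax

def social_cost(filters: bool) -> int:
    """Total cost imposed on the whole lake by one farmer's choice."""
    return FILTER_COST if filters else DAMAGE_EACH * N

# Without intervention, polluting is individually cheaper...
assert private_cost(False) < private_cost(True)   # 100 < 300
# ...even though it is collectively far worse.
assert social_cost(False) > social_cost(True)     # 1000 > 300

# A tax equal to the damage you do to the OTHER farms internalizes
# the externality: now polluting is individually more expensive too.
pigovian_tax = DAMAGE_EACH * (N - 1)              # 900
assert private_cost(False, tax=pigovian_tax) > private_cost(True)
```

With the tax in place, the privately optimal choice and the socially optimal choice coincide, which is the whole trick.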
The example in 2.7. I'd also never thought about the effectiveness of boycotts in game-theoretic terms, and it seems a bit embarrassingly obvious in hindsight.
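The "obvious in hindsight" part, as I understand it, is that a boycott is a coordination game: joining costs you a little and only pays off if enough other people also join. A quick sketch with invented numbers (mine, not the essay's) shows that unless you happen to be exactly the pivotal participant, joining never improves your own payoff:

```python
# Toy coordination model of a boycott (all numbers are made up).
THRESHOLD = 50_000   # boycott only bites if at least this many join
COST_TO_JOIN = 5     # your inconvenience of buying the pricier alternative
BENEFIT_EACH = 100   # everyone's gain if the boycott succeeds

def payoff(i_join: bool, others_joining: int) -> int:
    """Your payoff given your choice and how many others join."""
    total = others_joining + (1 if i_join else 0)
    success = total >= THRESHOLD
    return (BENEFIT_EACH if success else 0) - (COST_TO_JOIN if i_join else 0)

# In almost every state of the world, joining leaves you no better off:
for others in (0, 49_998, 50_000, 99_999):
    assert payoff(True, others) <= payoff(False, others)

# The one exception: you are exactly the pivotal member.
assert payoff(True, 49_999) > payoff(False, 49_999)
```

Since each individual's chance of being pivotal is negligible, self-interested consumers free-ride and the boycott fizzles, even when everyone would prefer it to succeed.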
The example in 2.8. It's interesting to think about how, because of the interconnectedness of our economy, there's really no way to avoid systemic risk - you can't choose to deal only with people who aren't exposed to it, because literally everyone is.
Example 2.10 is something I'd been thinking about myself for a while, but not been able to formulate coherently; I'm glad the author did.
2.13.1. I did not know of that example. Interesting. Probably fixable with a more informed public, but to be honest I'm not sure people (at least if I use myself as a model for "people") have time to care about the carcinogenicity of each individual ingredient.
2.14.3: This didn't change my mind about anything, but I hadn't heard of the concept of "semantic stopsigns" (à la <http://lesswrong.com/lw/it/semantic_stopsigns/>).
3.1: Huh! I've been doing this myself without even really realizing it (i.e., conflating the definition of a word with the expected emotional attachment). I'll try to notice when I'm doing this in the future.
3.2: This is totally fascinating, at least for me, and probably relentlessly boring for everyone else. My entire self, in some sense, is made up of a set of inflexible rules; this is one of the defining things that makes me me. Contemplating not having them is pretty terrifying. The idea of this has been sort of lurking around the edge of my mind for a while, but I'm not quite sure how to even approach it.
3.4: Hm. I'm not sure how I'd reply to the noise-making-machine example without saying something like "the air is communal property", but... uh-oh.
3.5: This explanation of consequentialism convinced me to discard my previous disdain for it in one fell swoop. I had previously used an example I think of as the "dictator example": the idea that a dictator can force you to do any act, no matter how heinous, by threatening unlimited negative utility if you refuse. There's a flaw in this example that I hadn't seen before: the utility to everyone, for all time, of not having dictators is left out of the calculation! If you take a model that includes the utility to future people of not having dictators, nobody should cooperate with the dictator in this situation, and the moral hazard is avoided.
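Putting (entirely invented) numbers on my own argument makes the flip obvious: the naive accounting counts only the threatened penalty to you, while the fuller accounting counts the ongoing disutility, across the whole population and across time, of dictatorship being a viable strategy at all.

```python
# Made-up numbers for the "dictator example" accounting.
PENALTY = 1_000        # disutility the dictator threatens you with now
HARM_PER_PERSON = 10   # yearly disutility of living under a dictator (assumption)
POPULATION = 1_000_000 # people affected (assumption)
YEARS = 50             # how long the dictatorship persists if sustained (assumption)

# Naive accounting: only your immediate penalty enters the calculation.
naive_cost_of_refusing = PENALTY
naive_cost_of_complying = 0
# On this accounting, you comply: 0 < 1000.

# Fuller accounting: compliance is what keeps the threat credible, so the
# cost of complying includes the dictatorship's ongoing harm to everyone.
full_cost_of_complying = HARM_PER_PERSON * POPULATION * YEARS

# The sign flips decisively once future people are counted.
assert naive_cost_of_complying < naive_cost_of_refusing
assert full_cost_of_complying > naive_cost_of_refusing
```

The exact magnitudes don't matter; the point is that any nonzero ongoing harm, multiplied over enough people and enough time, dwarfs the one-time threatened penalty.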
The example in 3.10 is very compelling. In general, I find the pattern of using motivating examples to address salient points to be useful. 3.10.1 discusses some criticisms of free-market-based solutions that I've had floating around in my head for a while, namely that they often assume: a) zero friction, b) infinite competition, c) zero latency, d) perfect knowledge, and e) rational actors.
: This isn't strictly true. You can go and live in some of the vanishing forests and live off the land - but your air will still be polluted, and your water will still be poisoned, and you'll be undergoing enormous privations by avoiding modern health care. You can leave modern society alone, but it won't leave you alone.