"The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future unfriendly AI can't reconstruct a copy of them to torture."
I'm not a reader/follower of LessWrong, just took a peek at this and it seems curiously amusing. A purportedly "rationalist" community brings what's essentially religious (Christian and other) BS into their discussion: the vengeful God, and inconsequential punishment.
+Borislav Iordanov Maybe the following quote, by a LW member who deleted his account, clarifies the situation:

"I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane – instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism."
Muflax was pretty crazy long before deleting his account. Have you read his site? I love it to bits, but I don't take much of it seriously at all - no, not even his more recent Marcionite conspiracy theory stuff about the true Jesus and the writing of the New Testament. If you aren't going to believe his NT theology or his Chinese-inspired philosophy or his overviews of meditation, I don't see why you would take seriously his claims about what happens if you 'take seriously' the basic claims of rationality!
The funny thing is people who think Bayesian updates are the be-all and end-all of rationality and consistency. In fact there are other consistent non-Bayesian methods; Bayesian updating is a special case of a more general principle and fails catastrophically in a surprising number of trivial cases.
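(A minimal sketch of one such trivial case, with toy coin hypotheses I am making up purely for illustration: a prior that assigns zero probability to the true hypothesis is a trap that no amount of updating can escape.)

```python
# Toy example (hypothetical coin hypotheses, not from any LW post): a zero prior
# can never recover, however strongly the data point at the excluded hypothesis.
priors = {"fair": 0.5, "always_heads": 0.5, "always_tails": 0.0}

def likelihood(hypothesis, outcome):
    if hypothesis == "fair":
        return 0.5
    if hypothesis == "always_heads":
        return 1.0 if outcome == "H" else 0.0
    return 1.0 if outcome == "T" else 0.0   # always_tails

def update(beliefs, outcome):
    unnormalized = {h: p * likelihood(h, outcome) for h, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = priors
for outcome in "TTTTTTTTTT":      # ten tails in a row scream "always_tails"...
    beliefs = update(beliefs, outcome)
print(beliefs)                    # ...yet its posterior is still exactly 0.0
```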
+Alexander Kruel Yes, it does clarify a bit. Maybe this community member can direct his imagination towards some sort of narrative fiction, movies or whatever. I get the sense that this movement is kind of trying to hijack the idea of rationality a bit like organized religion claimed ownership of the idea of morality (in the sense of being the ultimate authority about what's moral). 
+gwern branwen I know that he is pretty crazy. But I wonder how exactly you would disagree with the quote? Of course it is not completely accurate, but I believe it captures very well some of the problems I have with taking ideas too seriously and with cooking up giant frameworks of concepts that are sensible as stand-alone ideas but can be grouped together to justify lots of craziness.
I would disagree with: his claim about what kind of universe it leads to; his rhetorical use of 'pantheon of gods' rather than 'group of powerful beings' (which is more obviously neutral, and highlights that this worst-case possibility is a very common feature of many views of progress well outside transhumanist circles, since as technology gets more advanced we gain more capabilities previously ascribed to supernatural beings, which can obviously then be abused); and I don't believe modal realism follows from any of the starting points (or is right at all, for that matter).

And now that I read it carefully, I'm not sure what he means by some of his claims: 'complexity theory'? What's that? If it's computational complexity, that is frequently discussed but I don't see how it's relevant; and if he means the old academic math area of complexity theory (think chaos theory), no one seems to discuss it, so it can't be all that relevant.
I think he's formally studying Solomonoff Induction at some German university. Too bad he deleted most of his blog posts. He mentioned it somewhere.

Anyway, I see 'pantheon of gods' as literary freedom that was appropriate for a post on his own blog talking about distant superintelligences and simulators. Consider that beings such as Pascalian muggers who use magic powers from outside the Matrix are easily more powerful than Jehovah ever was imagined to be. 

Regarding modal realism: there are various posts, and many more comments, that consider all possible worlds to be as real, or rather as decision-relevant, as the actual world. So what's wrong?

Correct me if I am wrong, but if you remove Solomonoff Induction and the expected utility hypothesis, most of LessWrong comes crashing down. And if you don't, then you end up not reading about certain ideas, e.g. Roko's basilisk, because someone claimed that doing so has a large expected disutility. In other words, Pascal's mugging: "do what I say or you'll earn a giant amount of negative utility".
Today I learned that if you select a sentence in a Google+ comment with the mouse while working inside Gmail, Gmail will switch to an email-composition mode and destroy 20 minutes of writing. -_-

To summarize my lost comment:

1. I don't think so. My archived copies say he was working on a popularized lecture for a class everyone had dropped out of, and that's it.
2. Literary freedom can become misleading out of context, as it is here.
3. Those posts are investigating an interesting idea, but I don't think many people think it's more than an interesting idea. It's in the 'wrong, but we don't yet have a good disproof of it, which makes it fun to discuss' category.
4. Think of all the claims commonly accepted on LW, like optimal charity, biases, existential threats, anti-supernaturalism, the strong AI thesis, etc. How many depend on SI? Essentially none. (Yes, I know Eliezer claims that MWI is justified by SI, but regular Occam's razor can be used there.) Expected utility, on the other hand, is important - but I don't expect to lose it so much as modify it in a Newtonian->Einsteinian sort of shift. (Expected utility seems to deliver bad results on a lottery? Point out that you can't satisfy the assumption of buying as many tickets as necessary, and that the Kelly criterion provably grows your wealth faster; that sort of thing. See the sketch below.)
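(A rough sketch of the Kelly point, with numbers I am inventing for illustration: a repeated even-money bet you win 60% of the time has positive expected value every round, so naive expected-value maximization says stake everything each time, yet the Kelly fraction f* = p - q/b = 0.2 grows wealth while the all-in bettor is almost surely ruined.)

```python
# Rough illustration only; the 60% / even-money bet and stake sizes are made up.
import random
random.seed(0)

p_win, payout = 0.6, 1.0                         # win probability and net odds b
kelly_fraction = p_win - (1 - p_win) / payout    # f* = p - q/b = 0.2

def final_wealth(fraction, rounds=1000, wealth=1.0):
    for _ in range(rounds):
        stake = wealth * fraction
        wealth += stake if random.random() < p_win else -stake
    return wealth

print("Kelly (20% stakes):", final_wealth(kelly_fraction))
print("All-in (100% stakes):", final_wealth(1.0))   # one loss and it never recovers
```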
+Alexander Kruel Right, expected utility is sound enough, but Solomonoff Induction is incomputable because it relies on Kolmogorov complexity (think of it as analogous to why dependent typing and first-order logic are undecidable). Bayesian inference is also intractable. This makes optimal rationality infeasible. A society which admits some irrationality, then, even if it is not beneficial at the individual level, benefits as a whole at the valleys.
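(To make the incomputability concrete: computing the Kolmogorov complexity K(x) exactly would require knowing which programs halt, so in practice people fall back on computable upper bounds such as compressed length. A toy sketch of that substitution, entirely my own illustration:)

```python
# Hypothetical illustration: compressed length as a computable stand-in for the
# incomputable Kolmogorov complexity K(x); it is only an upper bound, up to a constant.
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`."""
    return len(zlib.compress(data, 9))

print(complexity_proxy(b"a" * 1000))       # very regular string -> compresses to almost nothing
print(complexity_proxy(os.urandom(1000)))  # incompressible noise -> stays near 1000 bytes
```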

+gwern branwen There is plenty of work on complex networks, non-linear dynamics, dynamical systems, and self-organization. You can't call it useless just because you don't read much about it.
+Deen Abiola What do you mean by sound enough? It is unworkable in practice and only partly useful in well-defined and limited circumstances.

Even worse, humans are unable to determine their utility function. And even if they could, which is rather unlikely since we are not even able to define "self", it is probably not stable and therefore time-inconsistent.

And if you could somehow fix the above problems, the long-term detriments of our actions would still remain uncomputable, because for any amount of computational resources there are still longer-term consequences.
And besides, it leads to completely fucked up conclusions. Should I go buy ice cream if I don't have to? Well, let's see. If I account for the possibility that I might die on the way, taking into account the fun I might have living for billions of years in an inter-galactic civilization, then it clearly has negative expected utility to go out to buy ice cream. So uhm...
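(To spell out the arithmetic with numbers I am inventing purely for illustration: give the ice cream a utility of 1, assume even a one-in-ten-million chance of dying on the trip, and assume the far future is worth something astronomical; the tiny probability times the astronomical stake swamps the ice cream.)

```python
# All numbers invented to illustrate the "ice cream" expected-utility argument above.
u_ice_cream    = 1.0     # utility of the ice cream itself
u_far_future   = 1e15    # assumed utility of billions of years in an intergalactic civilization
p_die_en_route = 1e-7    # assumed extra chance of dying on the walk to the store

expected_utility = u_ice_cream - p_die_en_route * u_far_future
print(expected_utility)  # -99999999.0: the astronomical stake dominates, so "don't go"
```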
Deen, the ideal being intractable is not an argument against it being the ideal. (Wouldn't it be incredibly bizarre if the true best thing to do in all possible worlds were easy to calculate? Has life ever been so convenient?) Nor do I recall dismissing complexity theory (in the old non-CS sense) as useless, just that it's not used much on LW.
Should I dare criticize Eliezer Yudkowsky? Well, let's see. If he is right, I will ever so slightly reduce the chance of a positive Singularity, and if he is wrong, he will just waste a bit more money which would probably be wasted anyway.

So clearly any criticism is going to have a hugely negative expected utility.

Well...fuck that shit.
Theoretically sound, I mean. The rest of my post agrees that it is unworkable in practice. Plus, even a machine might find an appropriate function too complex to describe or too complex to compute.
+gwern branwen Oh. It seemed as if you were. I don't hold LW in mind unless explicitly mentioned. Complexity would also be in the parallel sense, not the old sense. In fact, one usually prefaces "complexity" with "computational" when talking about asymptotic difficulty. Also, Bayesianism being my preference, I still acknowledge it is not the sole ideal when it comes to consistency, and it has issues if you don't pick priors carefully.

The ideal being intractable is an argument against acting as if it were a prescription for reality.