"The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future unfriendly AI can't reconstruct a copy of them to torture."
- What do you mean by sound enough? It is unworkable in practice and only partly useful, and even then only under well-defined and limited circumstances.
Even worse, humans are unable to determine their own utility function. And even if they could (which is rather unlikely, since we are not even able to define "self"), it is probably not stable and therefore time-inconsistent.
And if you could somehow fix the above problems, the long-term consequences of our actions would remain uncomputable, because for any amount of computational resources there are always still longer-term consequences left uncounted.Feb 24, 2013
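One way to formalize that last point (my notation, not the commenter's; all symbols are assumptions introduced here): total expected utility is a sum over all future periods, any finite computation truncates that sum at some horizon, and without a discount factor the omitted tail need not be negligible.

```latex
% Hedged formalization of the "longer-term consequences" objection.
% u_t(a) is the (assumed) utility contribution of action a at period t.
EU(a) = \sum_{t=0}^{\infty} u_t(a)
\qquad\text{whereas any finite computation yields only}\qquad
EU_T(a) = \sum_{t=0}^{T} u_t(a).
% The omitted tail \sum_{t=T+1}^{\infty} u_t(a) need not converge or
% be negligible unless one assumes a discount factor \gamma < 1.
```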
- And besides, it leads to completely fucked up conclusions. Should I go buy ice cream if I don't have to? Well, let's see. If I account for the possibility that I might die on the way, taking into account the fun I might have living for billions of years in an inter-galactic civilization, then it clearly has negative expected utility to go out to buy ice cream. So uhm...Feb 24, 2013
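The ice-cream reductio can be made concrete with a quick sketch. Every number below is a made-up assumption for illustration only (the thread gives none): a tiny per-trip death probability, a modest utility for the ice cream, and an astronomical utility for the hypothetical intergalactic future.

```python
# Sketch of the comment's expected-utility arithmetic.
# All constants are hypothetical assumptions, not values from the thread.

P_DEATH_PER_TRIP = 1e-8      # assumed chance of dying on the way
U_ICE_CREAM = 1.0            # assumed utility of eating the ice cream
U_GALACTIC_FUTURE = 1e15     # assumed utility of billions of years of future life

def trip_expected_utility():
    """Expected utility of going out: the ice cream, minus the
    (tiny probability) x (astronomical stake) of dying en route."""
    return U_ICE_CREAM - P_DEATH_PER_TRIP * U_GALACTIC_FUTURE

print(trip_expected_utility())  # -9999999.0 under these assumptions
```

Under these (arbitrary) numbers the tiny risk times the astronomical stake swamps the ice cream by seven orders of magnitude, which is exactly the absurdity the comment is pointing at: the conclusion is driven entirely by the speculative far-future term.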
- Deen, the ideal being intractable is not an argument against it being the ideal. (Wouldn't it be incredibly bizarre if the true best thing to do in all possible worlds were easy to calculate? Has life ever been so convenient?) Nor do I recall dismissing complexity theory (in the old, non-CS sense) as useless, just that it's not used much on LW.Feb 24, 2013
- Should I dare criticizing Eliezer Yudkowsky? Well, let's see. If he is right I will ever so slightly reduce the chance of a positive Singularity and if he is wrong he will just waste a bit more money which would probably be wasted anyway.
So clearly any criticism is going to have a hugely negative expected utility.
Well...fuck that shit.Feb 24, 2013
- Theoretically sound, I mean. The rest of my post agrees that it is unworkable in practice. Plus, even a machine might find an appropriate function too complex to describe or too complex to compute.Feb 24, 2013
- Oh. It seemed as if you were. I don't hold LW in mind unless it's explicitly mentioned. I also meant complexity in the computational sense, not the old sense; in fact one usually prefaces "complexity" with "computational" when talking about asymptotic difficulty. Also, while Bayesianism is my preference, I acknowledge it is not the sole ideal when it comes to consistency, and it has issues if you don't pick priors carefully.
The ideal being intractable is an argument against acting as if it were a prescription for reality.Feb 24, 2013