I don't understand why pseudorandom number generators can do the work they do. I find it completely bizarre that we can reason fairly reliably about a completely deterministic algorithm as if it were random, and use it to do things like compute π or render pretty pictures.
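To make the puzzle concrete, here is a minimal sketch of the π example: a seeded (hence fully deterministic) PRNG driving a Monte Carlo estimate. Every run with the same seed produces exactly the same digits, yet the probabilistic reasoning about the estimate still seems to hold.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)  # completely deterministic given the seed
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area of quarter circle / area of square = pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

Nothing in this program is random in any physical sense, but the standard-error arguments from probability theory predict its accuracy anyway.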

I find it even more bizarre that almost nobody else finds it bizarre.

I also have a little trouble with algorithms that use genuinely random numbers. But I accept it as an empirical fact about the world that probability theory works, and so computers, being in the world, can exploit this. In some ways that's less troublesome to me.

But I find it weird that the same reasoning works for deterministic systems too. Do pseudorandom algorithms work because there is some hard-to-see randomness buried in Monte Carlo algorithms? Not in the algorithm itself, obviously, but in the way that we, in the world, use them. I think the paper I link to below [1] argues this, but I don't completely get it. (I think you can sidestep the quantum and multiverse stuff in that paper. Part of the argument could be applied to probability theory rather than quantum mechanics.)

If I hadn't written this sentence, I bet someone would ask "but if you accept that randomised algorithms work, why would you have trouble with pseudorandomised ones? After all, random and pseudorandom numbers are hard to distinguish." But that's exactly my point.
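That indistinguishability is easy to demonstrate in a toy way. This is a crude illustration, not a real statistical test suite: a simple frequency check can't tell bytes from a seeded Mersenne Twister apart from bytes drawn from the operating system's entropy source.

```python
import os
import random

def mean_of_bytes(data: bytes) -> float:
    """Average byte value; a uniform source should hover near 127.5."""
    return sum(data) / len(data)

# Deterministic PRNG output...
prng = random.Random(0)
pseudo = bytes(prng.randrange(256) for _ in range(100_000))

# ...versus bytes from the OS entropy pool.
true_ish = os.urandom(100_000)

print(mean_of_bytes(pseudo), mean_of_bytes(true_ish))
```

Both means come out near 127.5, and more sophisticated tests mostly fail to separate them too, which is precisely why it's odd to accept one and not the other.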

[1] http://arxiv.org/pdf/1212.0953v1.pdf