[What sets the Machine Intelligence Research Institute apart from other people working on AI safety issues? One difference is that they're working on a different sort of problem than e.g. most academics, who focus more on short- and medium-term questions:]
> In MIRI’s Approach I spoke of two different classes of computer science problem. Class 1 problems involve figuring out how to do, in practice and with reasonable amounts of computing power, things which we know how to do in principle. Class 2 problems involve figuring out how to do, at least in principle, things that we can’t yet do even in principle.
> Our current approach to alignment research is to try to move problems from Class 2 to Class 1. This kind of research has been pursued successfully in other areas in the past, and in the context of AI alignment I believe that it deserves significantly more attention than it is receiving.
> Industry is traditionally best suited for the first problem class. Academia, too, often focuses on the first class of problems rather than the second — especially in the field of AI, for reasons related to point 1. It is common for academics to take some formalization of something like probability theory and then explore and extend the framework, figuring out where it applies and developing practical approximations of intractable algorithms and so on. It’s much rarer for academics to create theoretical foundations for problems that cannot yet be solved even in principle, and this tends to happen only when someone is searching for new theoretical foundations on purpose. For reasons discussed above, most academics aren’t yet attempting this sort of research when it comes to AI alignment.
> This is what MIRI brings to the table: a laser focus on the relevant technical challenges.
[If you want to support them, MIRI's summer fundraiser is still going on: https://intelligence.org/donate/]