My former employer, the Machine Intelligence Research Institute, is running a new summer fundraiser. Rather than setting a single fixed target as they usually do, this time around they're taking a "stretch goal" approach, seeking as much money as they can raise. They've already hit their first goal of $250,000, and are now aiming for their $500,000 goal.
Here are some of the reasons why I think donating to them would be a good idea:
* The recent "Research Priorities for Robust and Beneficial Artificial Intelligence" open letter, signed by many of the world's top AI experts in both industry and academia, stated:
> Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. [...] We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
There is now increasing attention on the topic of possible risks from AI. And as MIRI's Nate Soares states in his article "Why Now Matters", this makes it an exceptionally valuable time to donate to MIRI:
> There’s an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community’s response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field’s future direction.
> People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are less well-understood.
> It’s likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.
* In the last few years, MIRI has produced a number of novel papers and been increasingly successful at getting mainstream academics interested in their work. The research priorities document attached to the previously mentioned open letter directly cited a number of MIRI's papers, including their recent research agenda. MIRI's representatives were also present at the invite-only "The Future of AI: Opportunities and Challenges" conference in Puerto Rico, where the open letter was drafted, and which brought together the top names in AI research.
Even before the open letter and the conference, MIRI and their work had been cited, collaborated with, or favorably mentioned by famous and influential academics, including the philosopher David Chalmers, the mathematician John Baez, and Stuart Russell, co-author of the world's most widely used AI textbook.
In summary, MIRI is currently strongly connected to the elite names in the field of AI research, and more or less endorsed by many of them.
* My past and current involvement with MIRI gives me a degree of insider access, and what I've seen convinces me that they are working in an effective and rational manner, constantly refining their approaches based on new evidence. I've also put my money where my mouth is, making regular donations to them for a long time.
Links related to them and their fundraiser:
- Main fundraiser page: https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/
- Why Now Matters: https://intelligence.org/2015/07/20/why-now-matters/
- MIRI's approach: https://intelligence.org/2015/07/27/miris-approach/
- Fundraising targets #1 and #2 (#1 has already been reached): https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/
- Fundraising target #3: https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/
- Accomplishments in 2014: https://intelligence.org/2015/03/22/2014-review/