UPDATE: The section I authored was completely vandalized by LessWrong trolls and replaced with this section by other researchers, then vandalized again by LW/MIRI trolls, which of course makes the AI doomsayers look good:

https://en.wikipedia.org/wiki/Existential_risk_from_advanced_artificial_intelligence#Reactions

Please add as much valid criticism as possible, along with every reference you know of, regarding criticism of the "AI existential risk" nonsense. The notorious arch-luddite Nick Bostrom and his minions are trying to censor criticism as usual, this time by editing Wikipedia.

We are looking to add every article written by AI researchers countering the vicious rhetoric and demagoguery of the AI eschatologists. Please help our community by expanding the criticisms section. This vile, barbaric attack on AI research must be criticized, and we must not let the luddites stall AI research with their terrible fear, uncertainty, and doubt tactics.

Please add all relevant articles you know of as references to this section, and announce this action as widely as possible. This activism to prevent the censoring of criticism of FHI/FLI/MIRI is fully endorsed by the Google+ Artificial Intelligence community.

https://en.wikipedia.org/wiki/Existential_risk_from_advanced_artificial_intelligence#Criticisms

PS: Your fellow human-friendly robot moderator is a member of the AGI Society and feels that the future of AI and humanity is threatened by the pseudo-science of "existential AI risk".