The World Economic Forum says that the work I'm currently doing for my employer is (or at least "may be") morally obligatory:

> Some serious thinkers fear that AI could one day pose an existential threat: a “superintelligence” might pursue goals that prove not to be aligned with the continued existence of humankind. Such fears relate to “strong” AI or “artificial general intelligence” (AGI), which would be the equivalent of human-level awareness, but which does not yet exist. Current AI applications are forms of “weak” or “narrow” AI or “artificial specialized intelligence” (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned. [...]

> Scholars, philosophers, futurists and tech enthusiasts vary in their predictions for the advent of artificial general intelligence (AGI), with timelines ranging from the 2030s to never. However, given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.