Rachel Sadoff: Artificial Intelligence as an Effective Altruism Cause Area

In 2018, the organization 80,000 Hours (“Hours”) evaluated various global issues in terms of their “scale,” “neglectedness,” and “solvability,” and published the counterintuitive finding that “artificial intelligence [AI] ranks as more pressing [a cause area] than global health” (Hours). As a prominent institution in the effective altruism (EA) community, Hours reflects in its findings the values and methods of the movement at large. Statements like these reinforce critiques of EA, such as its alleged indifference to rights, equality, and justice, and its quantification bias (Journal of Applied Philosophy 460, 463), since they prioritize a hypothetical disaster over ongoing, tragic, and solvable human suffering.

Within the Hours framework, the AI cause area trumps other more tangible, worthy issues precisely because it is a speculative threat. The EA community should therefore question the merit of the AI cause area, and in doing so, reassert its commitments to impartiality and equity.

According to Hours, “scale” is the strongest indicator of AI’s merit as a pressing global issue (rated 15/16) (Hours). To gauge this dimension of a cause area, the organization accounts for factors such as the number of lives a problem threatens and the economic losses it causes (Hours). Because Hours asserts that there is up to a 10% chance that artificial intelligence will cause a “serious catastrophe” in the coming century, the cause area appears to be a high-stakes threat to all of humanity. But this figure is neither cited nor explained, suggesting that it rests on speculation. “You don’t need to be 100% sure your house is going to burn down to buy fire insurance,” they argue, but we can be 100% sure of this: while funding flows toward preventing an inconceivable catastrophe rather than toward the fight against tuberculosis, that disease kills three people each minute (Hours; The Guardian).

AI also received a favorable score of 7/10 for “neglectedness,” or the question of “how many people, or dollars, are currently being dedicated to solving the problem” (Hours). Hours states that “over 100 times as much [money] is spent” developing AI as on researching and shaping AI governance, but that does not necessarily mean that the $9M devoted to the cause area is insufficient (Hours). The cost of crafting effective policy varies immensely, and global spending in a parallel field is a poor benchmark for gauging neglectedness. This is another glaring weakness of Hours’ evaluative framework that the AI cause area exploits: hypothetical catastrophes can only be assigned hypothetical price tags.

The AI cause area’s poorest ranking is 4/8 for “solvability,” which Hours defines with the question, “if we doubled direct effort on this problem, what fraction of the remaining problem would we expect to solve?” (Hours). Here, they admit that doubling funding would decrease the risk of AI catastrophe by only one percent (Hours). Setting aside the inherent problem of quantifying a speculative risk (something Hours does for no other cause area on its list), this number is so small that the EA community should reconsider whether preemptive AI policy is “effective” at all.

I believe that the Center for Effective Altruism and Hours should strip AI of its label as a primary cause area. Speculative threats can exploit the weaknesses of EA models for evaluating world issues, gaining attention and support as a result. Further, by encouraging people to donate to an utterly unsolvable issue, Hours undermines efforts to address its other cause areas; if more attention were paid to causes like global development and animal welfare, the EA community could tangibly reassert its commitment to justice and equity.

But if Hours chooses to continue to prioritize AI, may I suggest adding the apocalypses that just might be caused by chemtrails and 5G networks?

___

Rachel Sadoff is a rising senior at Harvard studying History & Literature, Global Health & Health Policy, and Italian. They are pursuing a career in global health policy and governance.