Aditya Balakrishnan: Is Existential Risk a Black Swan?

In recent years, mitigating existential risk has become one of the top priorities of the effective altruism movement. Catastrophic, large-scale events that could end life on Earth as we know it are frightening – and it is this fear of their magnitude, combined with the stubborn belief that we can do something about them, that is skewing a large proportion of donations and research efforts within EA towards existential risk causes. While x-risks would create the greatest negative impact if they occurred, we tend to ignore how little we know about when and how they might occur, instead using flawed probabilistic models to make broad claims about the future on the basis of highly specific knowledge.

Effective altruists acknowledge that existential risks are governed by small probabilities. The probability that an asteroid will wipe out humanity in the next 10 years is incredibly low, as is the likelihood of an artificial intelligence transforming us all into paperclips. Yet we pick specific models and theories simply because they are available to us, or because they seem more plausible given the nature of the past or the trajectory of technological advancement.

I like to imagine existential risk as an ocean. Hostile AI and asteroid strikes are not bogus claims, but they are drops on the surface of a massive ocean. The surface contains everything we predict and fear about x-risk: all the problems we have identified as potential harbingers of our Doomsday. The surface itself is vast; theories flow in from everywhere, and it is hard to discard anything that rests on futuristic speculation. But there is an entire ocean beneath the surface that we have not even seen. It is impossible for anyone to encompass its depths and determine the best ‘spot’ – most of it is unknowable, yet just as potent as anything on the surface. In much the same way, there are infinitely many potential x-risks. We start from a near-zero-knowledge state – elevating a couple of claims as the most dangerous attributes undue power to flawed predictive models shaped by cognitive biases.

In other words, the Sun may explode tomorrow. Dogs may mutate into poisonous monsters. A nihilist may get access to the nuclear launch codes and gratify his purposelessness. We can rank such claims in some order of likelihood, but there are still infinitely many of them, along with the possibility that multiple x-risks unfold synergistically, leading to a more gradual extinction.

There is also the issue of limited knowledge, or, as your yoga teacher might say, “living in the present”. If EA had been big 50 years ago and people had been studying x-risk, nobody would have anticipated the rise of AI – anyone suggesting it would have been making a bogus claim, just as I would be if I told you today that every human born 20 years from now will be homicidal. Prescience is underappreciated in its time, which is not as bad as you might think – for every genius explaining the dangers of AI in the 70s, there would have been a hundred ‘experts’ telling you that fertilizers would make us braindead, or that vaccines would destroy fertility and end human procreation. The discovery of new risks is inevitable, which means devoting present resources to presently estimated risks would be a tremendous waste if something bigger and far worse were to come along in 20 years.

When it comes to estimating risk, the anthropic principle is both its biggest fan and its arch nemesis. The anthropic principle holds that our observations of the Universe are necessarily conditioned on the existence of observers: for us to live and thrive in the manner that we do, everything needed to fall into place. Paired with the hypothesis of a multiverse – in which our Universe is one among infinitely many – this explains why our reality appears fine-tuned for survival by sheer mathematical chance. If the laws of nature and the cosmological constant were not aligned exactly as they are, our lives would be radically different, or potentially non-existent. Any universe that contains observers must, by definition, have permitted life – we are the product of a survivorship bias.

Put simply, if anything goes out of whack, we are kaput. 

This highlights the actual magnitude of existential risk. We are often victims of the same survivorship bias that potentially created us: we underestimate risks that we have survived, or risks that merely seem minimal. On a local level, we ignore the impact of such risks on those who did not survive, overlooking their negative outcomes. On a universal level, we overlook a significant number of risks within the realm of possibility, because we have simply never observed an existential catastrophe as a civilization. In another universe, a simple action may have ended an extraterrestrial civilization; here, the same action may be the equivalent of a minor earthquake. It is an extended survivorship bias – we assume we have outlived risks that we have never actually faced.
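To make this bias concrete, here is a minimal simulation (my own sketch; the per-century risk, horizon, and number of worlds are arbitrary assumptions, not estimates). By construction, observers exist only in worlds that survived, so the extinction rate they read off their own history is always zero:

```python
import random

# A toy simulation (arbitrary assumed numbers, not estimates): observers
# only ever exist in worlds that survived, so the extinction risk they
# infer from their own spotless history is always zero.

TRUE_RISK = 0.1      # assumed probability of extinction per century
CENTURIES = 20
WORLDS = 100_000

survivors = sum(
    all(random.random() > TRUE_RISK for _ in range(CENTURIES))
    for _ in range(WORLDS)
)

print(f"Fraction of worlds that survived: {survivors / WORLDS:.1%}")
print("Extinctions witnessed by any surviving observer: 0")
print(f"Observer's naive risk estimate: 0.0 | true risk: {TRUE_RISK}")
```

With these numbers, roughly 12% of worlds survive twenty centuries unscathed, and every one of their inhabitants sees a flawless record – no matter how lethal the true risk is.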

While the anthropic principle argues for the prevalence of x-risk and our tendency to ignore it, it also highlights how incredibly hard x-risk is to predict. Our predictive models are based solely on what we have survived, and what we have survived cannot compare to the real thing. Anthropic reasoning leads to an important conclusion – the Universe is governed by randomness, and we exist because of that randomness.

The derivatives trader-turned-philosophical essayist Nassim Nicholas Taleb calls this the ‘ludic fallacy’: the mistake of treating the neat, structured randomness of games and models as though it captured the messy randomness of real life, which is influenced by countless random variables. While Taleb applies the ludic fallacy to financial modeling and economic prediction, I think it gains even more weight in the domain of x-risk. Real life is not a closed game that can be modeled statistically like a night of poker at Caesars Palace; it is an incomplete-information game with an unknown outcome. The fallacy is exacerbated when we try to model future lives – driven by impatience and a fear of mortality, we dramatically underestimate randomness.
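As a toy illustration of the fallacy (my own sketch, not Taleb's), consider what happens when we fit a thin-tailed Gaussian model to data drawn from a fat-tailed world. The Pareto tail index and sample size below are arbitrary assumptions:

```python
import random
import statistics

# A toy illustration (my own sketch): fit a thin-tailed Gaussian model to
# fat-tailed data, then ask how often a "ten-sigma" event -- effectively
# impossible under the fitted model -- actually occurs in the data.

random.seed(0)
N = 100_000
data = [random.paretovariate(2.5) for _ in range(N)]  # heavy-tailed "reality"

mu = statistics.fmean(data)
sigma = statistics.stdev(data)
threshold = mu + 10 * sigma   # a 10-sigma event under the fitted Gaussian

observed = sum(x > threshold for x in data) / N

# Under a Gaussian, P(X > mu + 10*sigma) is about 7.6e-24: "never happens".
print("Gaussian model's estimate: ~7.6e-24")
print(f"Observed frequency in the data: {observed:.1e}")
```

With these assumptions, ‘ten-sigma’ events turn up dozens of times in a hundred thousand draws – roughly twenty orders of magnitude more often than the fitted model allows. The model is not slightly wrong about extremes; it is wrong in kind.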

Taleb uses the ludic fallacy to develop his theory of black swans: consequential, rare events that are incredibly difficult to predict, but which people attempt to rationalize after their occurrence. Such events recur throughout human history, and most models have failed to account for them – the most notable recent example being the 2008 financial crisis. The post-hoc rationalization is our subconscious attempt to control randomness by arguing that the event was obvious all along.

Again, while this theory is usually applied to economic recessions and medical anomalies, I believe it extrapolates with even greater relevance to x-risk. Global catastrophic events are most definitely black swans. The entire world acknowledges that they are rare and highly consequential. And as I have discussed, they are incredibly nuanced and impossible to predict on account of the variables that govern them. But there is one difference: a post-hoc rationalization is impossible, because we would be extinct after the fact. So we attempt to rationalize them in advance, using models that misunderstand probabilities and cognitive biases that overstate the power of our information.

Effective altruism exists because we all have limited resources, much like the world we inhabit. There are a massive number of global problems with mitigation mechanisms and solutions already in place – problems we can affect right now. Poverty alleviation and global health have an end in sight. It would be a shame to devote our resources instead to causes that are deeply unknown, constantly changing, and governed by the almighty power of uncertainty. Doing so takes funding and research away from projects we can practically benefit, and it reduces our effectiveness – the equivalent of blasphemy for your average utilitarian.

Embracing existential risk as a black swan helps us ignore it. And that is a risk worth taking.


———

Aditya is a freshman from India, studying Economics/Math/Philosophy at New York University Abu Dhabi. He is passionate about poverty alleviation and health, and loves discussing the hypocrisy of policymaking and the fate of the universe.