John Boesen: Left Brain Dominance in EA

“EA is better than traditional giving because it uses your brain”: a refrain you are no doubt familiar with. Effective Altruism, with its focus on numbers and on impact free from bias, tries to strip the emotion out of giving and replace it with reason. Really, though, EA doesn’t use your whole brain… it uses only half: the left half.

Ok, that is a bit of an exaggeration, but it points to an important issue: while we often speak of the brain as a single organ, it is really two: the left hemisphere and the right. The left brain is analytical, logical, detail- and fact-oriented, numerical, and likely to think in words. The right brain, by comparison, is creative, free-thinking, able to see the big picture, intuitive, and more likely to think visually than in language.

You may have heard that this is an outdated model. It does become problematic if we oversimplify it, but research (for example, from Iain McGilchrist) continues to come out in its support.

Placing our thinking too heavily in either half is unhealthy. Many of EA’s flaws, however, may result from doing exactly that:

First, the “e” in EA certainly does not stand for equity: the majority of adherents major in fields like computer science, philosophy, and math, and, accordingly, EA is disproportionately white and male. This is not problematic in itself, but it is in its consequences: the imbalance in majors and demographics limits our outreach. What do these majors have in common? They are all deeply left-brained: coldly logical, focused on quantifying and formalizing abstract ideas while minimizing ambiguity.

This is the modus operandi of the left brain. In split-brain experiments, where the two hemispheres are surgically disconnected and only the verbal left brain can answer, participants denied that their left hand was their own, even when shown evidence to the contrary. Why? The left brain could not make sense of the hand’s relation to its owner.

This phenomenon is called alien hand syndrome, and it shows the left brain’s predisposition against things it does not understand (here, the mysterious relationship between the hand and its owner). If you were thinking, “I don’t see why a left-brain-dominated EA movement is a problem…”, this extreme striving to put everything in a perfectly coherent, quantifiable box is why. The left brain hates ambiguity, and that imbalance leads straight to the oft-cited streetlight fallacy: searching for your lost keys under the lamp, not because you dropped them there, but because that is where the light is.

What is the opposite of ambiguity? Whatever lies under the lamp’s lonely light. You’re probably familiar with the consequences of the streetlight fallacy, so I won’t dwell on them, but, in brief: there are likely paradigm-shifting advancements on the horizon, and we cannot assign an expected value to these interventions because we cannot assign them a probability. As a result, they get left in the dark, along with (presumably) the flourishing of billions of people.
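
To spell out that last step (a minimal formalization on my part, not something the original argument supplies): expected value is a probability-weighted sum over outcomes, so it is only as well-defined as the probabilities that go into it:

$$\mathbb{E}[V] = \sum_i p_i \, v_i$$

If even one $p_i$ resists honest estimation, $\mathbb{E}[V]$ is undefined, and a framework that ranks interventions purely by expected value has nothing to say about that intervention; it simply falls outside the lamplight.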

This is an opportunity cost, but there are also more salient harms. EA is becoming an extraordinarily consequential philosophy among the global elite (and by that I especially mean the future global elite, i.e. students at the world’s top universities). By over-prioritizing streetlight-conducive cause areas, we ignore pervasive, slow-burning, structural issues. Take income inequality: it is objectively destabilizing to a country (and, of course, detrimental to quality of life). Destabilized countries often fall to pure democracy, which turns into tyranny, which not only means immediate suffering for the citizens but also a higher probability of misusing one of the increasingly powerful weapons humanity is on the cusp of developing. You might object, “Yeah, yeah, but you have nested several relatively small probabilities inside one another! That is supremely improbable!” On its own, of course it is, but many other structural changes are intertwined with this, and this is only one chain of events that could lead to catastrophic consequences. Objectively speaking, our forecasting ability right now is not very good, so we cannot foresee all the ways today’s small flaws in society could spiral out of control. Instead of focusing on those, we focus on myopic pursuits like bed nets and hyper-hyperopic pursuits like AI safety (and this is coming from someone interested in AI safety, so I can say that). Ignoring these downstream effects, and being unable to forecast them, carries a large opportunity cost.
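
A toy calculation (the numbers here are entirely illustrative assumptions of mine, not claims from the post) shows why the objection misses. Even if each individual chain of events is improbable, what matters is the chance that *any* of the many intertwined chains plays out. For $n$ roughly independent chains, each with probability $p$:

$$P(\text{at least one chain occurs}) = 1 - (1 - p)^n$$

With, say, $p = 0.001$ and $n = 500$ such chains, this is $1 - 0.999^{500} \approx 0.39$: individually negligible risks, collectively a roughly two-in-five chance that some chain completes. Both numbers and the independence assumption are made up for illustration; the structural point is that dismissing each chain one at a time ignores their aggregate.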

So, what is the solution? Like any good effective altruist, I will go out of my way to emphasize my uncertainty: I don’t know. In the short term, however, the solution can include de-emphasizing numbers. Sure, we CS/Phil/Math majors love hard quantities, but the man on the street might not even know what an opportunity cost is. Outreach and intermediate-level media can be tailored, through less quantitative language and more qualitative imagery, not only to those we are already (implicitly) targeting but also to the vast swath of the population we are ignoring. I think this is particularly important in intermediate-level media: while the doorways into EA sit near the average reason/emotion balance, the first few steps of the path get much more left-brain-lopsided (for example, 80,000 Hours). This deters even slightly more-right-brained-than-average people from EA, which ultimately results in worse decisions. By rectifying this imbalance, we can bring in altruists more diverse in their ways of thinking and, in the long term, broaden our focus, uniting the tractable short-term issues with the systemic long-term ones to bring about a balanced and even more Effective Altruism.