Alison Xin: More Right: Cooperation Between Intuition and Rationality
You are wrong.
Though packaged in justification and softened by rhetoric, this blunt message underpins the rationality promoted by Effective Altruism. It is illustrated by the concept of “shut up and multiply”, explained by Eliezer Yudkowsky in “The ‘Intuitions’ Behind ‘Utilitarianism’”. Responding to an example of a perceived failure to maximize happiness, he states that “the brain doesn't goddamn multiply. Quantities get thrown out the window,” concluding that default human intuition cannot be trusted to be rational in high-stakes situations.
This claim is not difficult to prove. Intuition is messy, liable to change based on the time of day, the wording of a question, or any combination of countless other unknown factors. It is not difficult to find examples of inconsistency, ranging from common occurrences (like repeatedly pushing on a pull-only door) to responses to thought experiments (like the variations of the trolley problem). Therefore, we need rationality to take over, avoiding knee-jerk human intuition and constructing a systematic approach to achieving goals. For a specific example, Effective Altruism approaches charity as a process of optimization, shifting focus from warm fuzzies to empirical evidence.
But many see this approach as mechanical and off-putting. Though it is easy to agree that humans are frequently irrational, it is still difficult to conclude that a strict system of rationality would be a better option. To a rationalist, this divide seems inexplicable, as irrationality is inherently undesirable. For many others, though, this conclusion is not entirely trivial.
You are ~~wrong~~ irrational.
Stepping back from Effective Altruism, many of the core elements of rationality clash with human intuition. A clear example is the repugnant conclusion, the idea that if we systematically compare welfare between groups, a massive population of people whose lives are barely worth living is as good as a small population of people with excellent quality of life. A thorough explanation can be found in the Stanford Encyclopedia of Philosophy.
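The comparison driving the repugnant conclusion is simple total-utility arithmetic: multiply population size by average welfare and compare the products. A minimal sketch (the population sizes and welfare levels below are invented for illustration, not figures from any source):

```python
# Total-utility comparison behind the repugnant conclusion:
# a vast population with lives barely worth living can "beat"
# a small population with excellent lives. All numbers illustrative.

def total_welfare(population: int, welfare_per_person: float) -> float:
    """Aggregate welfare under simple totalist accounting."""
    return population * welfare_per_person

small_flourishing = total_welfare(1_000_000, welfare_per_person=90.0)
vast_marginal = total_welfare(10_000_000_000, welfare_per_person=0.01)

# Under "shut up and multiply", the larger total wins,
# however counterintuitive the verdict feels.
print(vast_marginal > small_flourishing)  # True
```

The sketch shows why the conclusion is hard to escape by logic alone: rejecting it requires rejecting either the aggregation step or the welfare numbers, and both moves are contested.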
The “obvious” response is to conclude this is wrong, but it is challenging to logically characterize why we reject the repugnant conclusion. Approaching from different angles, like arguing over the definition or quantification of welfare, does not produce airtight counterarguments (the Stanford Encyclopedia further elaborates on these claims). Because we cannot unite the reflexive denial of the repugnant conclusion with a rational line of reasoning, the rationalist would accept the repugnant conclusion. This reaction, though, is not representative of the general population. Even after seeing the steps of the repugnant conclusion laid out, many people will not renounce their initial impression that the conclusion is mistaken.
Intuition is a black box: we receive some input, and by some unknown set of calculations, we produce an output. Is this enough reason to reject the output? Is the fact that many people end up agreeing on intuitive arguments enough justification to accept the answer as correct? If humans evolved to be socially coherent, it may be reasonable to expect intuition to guide the majority of people to the right choice. Therefore, we may be able to make the case that in this situation we can reject rationality and opt for intuition.
You ~~are~~ can be ~~wrong~~ irrational.
When the quantities of interest can be objectively measured, prefer rationality. This includes resource management, e.g., deciding to donate to a charity that consistently saves more lives per dollar spent than others. When the problem is a moral or values-based question, prefer intuition. This applies to the repugnant conclusion, which is a question of how to build an ideal world. Unfortunately, many problems have both quantitative and qualitative components, so whether the solution should be rational or intuitive may be difficult to determine. However, even an ambivalent approach may still be useful.
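The lives-per-dollar comparison is the kind of problem where rationality straightforwardly applies: it reduces to picking a maximum over a measurable criterion. A hypothetical sketch (the charity names and cost figures are invented for illustration):

```python
# Ranking charities by one measurable criterion:
# estimated lives saved per dollar donated. All figures hypothetical.

charities = {
    "CharityA": 1 / 4_500,   # ~1 life per $4,500
    "CharityB": 1 / 60_000,  # ~1 life per $60,000
    "CharityC": 1 / 12_000,  # ~1 life per $12,000
}

# Here, rationality is just optimization: pick the argmax.
best = max(charities, key=charities.get)
print(best)  # CharityA
```

No analogous one-liner exists for the repugnant conclusion, which is precisely the essay's point: the moral question has no agreed-upon quantity to maximize.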
In early artificial intelligence, there were the Neats and the Scruffies. The Neats sought rule-based artificial intelligence capable of finding solutions with clear lines of logic. Scruffies, in contrast, cobbled together solutions that could make use of ad hoc rule-making; as long as it worked, the machinery under the hood was not especially important. Though the divide still persists, artificial intelligence research progressed by combining elements of both viewpoints. Likewise, neat rationality need not preclude scruffy intuition. Answering difficult ethical questions with gut instinct may seem like a crutch to avoid rigorous analysis, but it is a crutch that has supported societies for millennia. As Effective Altruism systematically tackles enduring, complicated issues, intuition may still be a useful tool to lean on.
___
axin@college.harvard.edu