Leor Fishman: The Case Against Edge Cases
Intro: The case for edge cases:
Before arguing that edge cases are terrible, we should, of course, establish why people think edge cases are good. By 'edge cases' I mean places where our initial intuitions violate consistencies implied by our formalized moral systems: lying to a murderer for Kantians, utility monsters and the repugnant conclusion for some segments of utilitarianism, unexpressed virtues for virtue ethicists -- every branch of moral philosophy loves the edge case. The primary basis for this opinion is the belief that morality acts like a deductive, formal system -- which is, on its face, not an unrealistic belief! After all, pretty much every moral code ever written has been formal and legalistic, and we should expect human moral codes to approximate human moral philosophy.

So, how does this formalism take us to edge cases? The answer is relatively simple: formal systems come with the property of consistency -- if our formal moral system judges X to be more moral than Y in one circumstance, it should judge X to be more moral than Y in any other circumstance whose differences don't bear on X or Y. Similarly, those who believe that morality is sourced in some higher power often view its simplicity and formality as a necessary part of the abstract moral perfection presumed of such a being (since beauty, parsimony, and simplicity are often associated in abstract systems). This, then, is why moral philosophers like edge cases: they give formalists a way to codify their moral systems by choosing either to side with the intuition (and therefore declare the moral system 'refuted') or to side with the system (and declare the intuition immoral). This is edge casing at its most basic.
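For readers who want that consistency requirement spelled out, here is a minimal sketch; the notation (a relation $\succeq_c$ for 'at least as moral as, in circumstance $c$') is mine, not the author's:

\[
\big(X \succeq_{c} Y\big) \;\wedge\; \big(c' \text{ differs from } c \text{ only in ways that bear on neither } X \text{ nor } Y\big) \;\Longrightarrow\; \big(X \succeq_{c'} Y\big)
\]

On this reading, an edge case is a situation in which our intuitions flout an implication the formal system enforces.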
Morality as a complex machine:
So, the above is the formal view of morality -- and grounded as it is in human moral codes, it seems perfectly reasonable. There's just one problem with it: evolutionarily, it's completely wrong. Morality most likely developed as the confluence of a number of different pieces of 'mental machinery' -- some giving us empathy, some compassion, some guilt, and so on. All these pieces of machinery work in concert to produce our sense of morality, and yet they are separate mechanisms. There are a few reasons to believe this.

First, separate development: we observe animals with varying extents of altruism and kin selection, and more complex animals with varying amounts of experienced compassion -- so morality cannot be one fully formed machine simply 'dropped' into the human psyche. The reader may contend that we cannot know this was the evolutionary path of human morality. I respond with the general evolutionary point that complex mechanisms evolve over long periods of time, with each piece individually useful along the way. Consider the eye: organisms first become photosensitive, then develop the ability to distinguish contrast and direction, and only much later acquire focused, image-forming vision. Morality, which is doubtless a complex mechanism, likely developed the same way -- starting perhaps with altruism, then guilt, then compassion, and so on.

There is still one more contention: that even though morality initially evolved piecemeal, it may now be encapsulable as a single formal system, constructed from simple axioms without which the system collapses. But this is unlikely enough not to be worth dwelling on -- what are the chances that a system developing independently from several angles and mechanisms at once happened to land on some massive local optimum of compressibility and formalizability? (For those who find this unconvincing, it may be more instructive to look at the extent to which neurodivergence affects people's disgust intuitions and several other foundations of morality -- the effects there are far more graded and subtle than would be expected of a system built on formal rules.)
Must we remain internally consistent?
Now, I already know what some of you are saying: so what if, at the reductive level, the human brain is complex and non-formalizable? All a formal moral system is meant to be is an approximation of what we value. What's wrong with sacrificing some detail to get compressible predictions of moral judgement? To that I respond: go ahead! If all you want from your moral philosophy is a reasonable model of human morality, then use it as one -- but doing so ensures that edge cases lose their rhetorical power. Under such a model -- an informal approximation of human morality as a complex, multifaceted system -- all an edge case should tell you is 'hey, my model is imperfect'. For the physicists among you, it's like the anomalous precession of Mercury's orbit, which Newtonian mechanics could not explain and which general relativity accounted for in 1915: it told us that Newtonian mechanics was imprecise and imperfect, not that it was wholly unusable. In the same way, edge cases should suggest to us imprecisions in our models of human morality, not wholesale refutations of those models.
In conclusion: human morals are biology, written in spaghetti code on wetware, built up over millions of years of quasi-random iterative optimization. Stop treating them like formalized logical systems.