Discussion of Utility, Fairness, and Risk with Medical Ethicist Nir Eyal
Nir Eyal is Associate Professor of Global Health and Social Medicine in the Center for Bioethics at Harvard Medical School and at the Department of Global Health and Population at the Harvard T.H. Chan School of Public Health. He is co-editor of the Oxford University Press series Population-Level Bioethics, and (among other things) chairs the Committee on Philosophy and Medicine of the American Philosophical Association. Eyal’s writings fall primarily within clinical, research, and population-level bioethics. He has written on ethical questions surrounding HIV research, health worker shortages, healthcare rationing in resource-poor settings, informed consent, personal responsibility for health, fair risk distribution, and accrediting corporations for improving global health. Beyond bioethics, he researches egalitarian theory and consequentialism. Eyal also serves as the faculty advisor of Harvard College Effective Altruism.
HEA: How do we define ‘effectiveness’? What is the relationship between ‘effectiveness’ and ‘fairness’ and how do we balance these two considerations?
Eyal: In my view, fairness matters, and it matters independently of effectiveness understood as maximizing utility. I am a philosophical egalitarian. An egalitarian is someone who cares, at least to some degree, about equality. For example, if we can improve the utility or the health of people who are worse off, say people with very bad lifetime health or income, or improve by the same amount the utility or the health of people who are very healthy or wealthy, in my view helping those who are worse off is preferable. One reason is simply that they are worse off. Some effective altruists don’t share my position, but for our purposes this disagreement is not very important, for two reasons. One is that money and other fungible resources have decreasing marginal value, so a poor person typically benefits more from an extra dollar than Bill Gates does. The other reason why there is often agreement between utilitarians who don’t value equality intrinsically and people like me who do is that the most effective things one could do to improve wellbeing happen to address the problems of some of the world’s poorest populations. As Toby Ord emphasizes, a band of interventions such as child vaccinations and mosquito nets has a cost-effectiveness that far surpasses that of most other currently promoted interventions, by a factor of a thousand or more. Even if you give some independent weight to equality, surely you are not going to give it the kind of weight that could override that kind of advantage in cost-effectiveness. So in practice we can work very well together.
Another lesson from this, which I don’t think people appreciate enough, is that you do not need to be a utilitarian in order to be a serious effective altruist. As I said, I’m not a utilitarian. A lot of Harvard philosophers are anti-consequentialists; they think that some methods for promoting good consequences remain wrong. For example, promoting good causes by torturing innocents or by lying to people can remain wrong. But they can all be effective altruists, because the “band” of interventions we just talked about is of the sort that any sane person would promote when no gross violation is demanded. It’s very rare for promoting these causes to require gross violations of individual rights or to be grossly opposed to equality. In short, every reasonable person could be, and should be, an effective altruist. Even if you give some weight to non-consequentialist considerations, it’s rare to find an ethicist who would simply leave on the sidewalk a check for a thousand times more impact in the name of avoiding a minor transgression or accomplishing some improvement in equality.
HEA: How do we determine utility in dimensions beyond mortality or quality of living? For example how do we quantify the value of saving a child’s life versus making a blind person see again? Is valuation subjective in the sense that we care more about causes that we are connected to through experience?
Eyal: Health economists and epidemiologists already have techniques for trying to assess overall impact. They are already in the business of comparing, for example, the prevention of mortality and the prevention of blindness. These techniques sometimes involve surveying a large population about which of two health states they would prefer, say a small likelihood of dying versus a high likelihood of a certain degree of blindness, and then doing careful epidemiological work to glean a general lesson about the weights that should be assigned to the different states. There are many problems with the prevailing techniques for doing that. Experts are working on improving them, but the bottom line is that we are already [ assigning weights to different health states ], so effective altruists could potentially use existing strategies or hone them.
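As an illustration of the survey-based weighting Eyal describes, here is a minimal sketch of one standard elicitation technique, the “standard gamble”: each respondent reports the probability of a full cure (against a risk of death) at which they would be indifferent between the gamble and living with the condition for sure, and averaging those responses yields a disability weight. The function name and all numbers below are illustrative assumptions, not data from the interview or from any real survey.

```python
# Hypothetical sketch: deriving a disability weight from standard-gamble
# survey responses, in the spirit of the techniques described above.
# All numbers are illustrative, not real survey data.

def disability_weight(indifference_probs):
    """Each respondent reports the probability p of full health (vs. death)
    at which a risky cure feels equivalent to living with the condition.
    The utility of the condition is then p, so its disability weight
    is 1 - p, on a scale from 0 (full health) to 1 (death)."""
    mean_p = sum(indifference_probs) / len(indifference_probs)
    return 1.0 - mean_p

# Illustrative responses for a certain degree of blindness:
responses = [0.85, 0.90, 0.80, 0.88]
w = disability_weight(responses)
print(w)  # a weight strictly between 0 (full health) and 1 (death)
```

Real instruments (time trade-off, discrete choice experiments) are more elaborate, but the output is the same kind of object: a single weight per health state that makes different harms commensurable.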
HEA: Do organizations like GiveWell evaluate effectiveness only based on how effective they are at achieving their goals or also on how important their goals are?
Eyal: The latter. GiveWell tries to find how much impact on well-being different organizations have. That’s very fortunate, because it’s very easy to be effective at achieving one’s goals: all one needs to do is find an easy goal to meet, even if that goal has no importance whatsoever or other organizations have already achieved it. GiveWell uses findings from research groups such as the second edition of the Disease Control Priorities project, known as DCP2, which gave rough numbers on the cost-effectiveness of various health interventions in developing countries. For instance, what’s more effective for combating cholera: giving vaccinations or teaching nurses to wash their hands? Colleagues at Harvard are promoting cholera vaccination, but DCP2 puts the effectiveness of teaching nurses to wash their hands at 1,000 times greater.
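The DCP2-style comparisons Eyal mentions ultimately reduce to ranking interventions by cost per unit of health gained, for example cost per DALY averted. A minimal sketch of that arithmetic, with placeholder figures that are not DCP2’s actual estimates:

```python
# Hypothetical sketch: ranking interventions by cost per DALY averted.
# The cost and impact numbers are illustrative placeholders chosen only
# to echo the ~1,000x gap mentioned above; they are not DCP2 data.

interventions = {
    "cholera vaccination":   {"cost": 100_000, "dalys_averted": 200},
    "hand-hygiene training": {"cost": 100_000, "dalys_averted": 200_000},
}

# Sort by cost-effectiveness (cheapest health gain first):
for name, d in sorted(interventions.items(),
                      key=lambda kv: kv[1]["cost"] / kv[1]["dalys_averted"]):
    print(f"{name}: ${d['cost'] / d['dalys_averted']:.2f} per DALY averted")
```

The point of the ranking is exactly the one Eyal makes: with gaps this large, small disagreements about the weights barely matter to the ordering.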
HEA: How should investing in novel, research-based solutions be balanced against tackling current, low-hanging-fruit issues using known solutions? It seems that a lot of global development work does not emphasize scientific innovation.
Eyal: Delivering now something that’s known to work versus delivering later something that is currently only experimental, if and when it gets proven, involves two differences. One is the level of “gambling” involved, and the other is between delivering value closer in time and farther along in time. I think there is no intrinsic value in being risk averse or in producing value in the nearer future [ as opposed to in the later future ]. But the answer will differ according to the details and the numbers in concrete cases.
HEA: When making explicit models for quantifying the value of causes, should we always embed complications like risk aversion and time value of money calculations (as captured in most economic models)?
Eyal: You are right that there are issues of ethical complexity here. A related, third topic of ethical complexity: how should we count deaths at different stages of life? Should we count the death of a one-day-old baby, who would otherwise be expected to live many more years, as a far worse tragedy than the death of a college student, who would be expected to live somewhat fewer additional years? This requires careful consideration.
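One common way to formalize the trade-off Eyal raises is to compare deaths by years of life lost, optionally discounting future life-years, which is where the time-value question from the previous exchange re-enters: discounting narrows the gap between the baby and the student, since the baby’s extra years lie farthest in the future. A minimal sketch under illustrative assumptions (the remaining-life figures are placeholders, not actuarial data):

```python
# Hypothetical sketch: comparing deaths at different ages by years of
# life lost (YLL), with an optional discount rate on future years.
# Remaining-life figures below are illustrative placeholders.

def years_of_life_lost(remaining_years, discount_rate=0.0):
    """Sum the (optionally discounted) life-years a death forecloses."""
    return sum(1.0 / (1.0 + discount_rate) ** t for t in range(remaining_years))

infant = years_of_life_lost(75)   # newborn with ~75 years ahead
student = years_of_life_lost(55)  # college student with ~55 years ahead
print(infant / student)           # undiscounted: infant death counts ~1.36x worse

# With a 3% discount rate the gap narrows, because the infant's extra
# years are the most distant ones:
print(years_of_life_lost(75, 0.03) / years_of_life_lost(55, 0.03))
```

Whether any discounting of future life-years is ethically defensible is itself one of the contested questions, not something this arithmetic settles.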
HEA: Will we ever agree on one answer for these questions? Or perhaps we’ll have to live with uncertainty?
Eyal: One possible policy is to prioritize the areas where genuine uncertainty shows up less, so that we are relatively immune to uncertainty. By “uncertainty,” I don’t mean a matter of actual public debate. Many public debates are due to simple misunderstanding, or to money infused into the political debate. I mean genuine, deep doubt among highly informed people. One reason why I’m excited about working in global health is that there is very little doubt about the urgency of the basic causes being served: promoting the health of relatively unhealthy and poor people. There is little, if any, uncertainty about the value of that.
HEA: What are some difficult open questions related to global health?
Eyal: One hard question is how to allocate organs for transplantation. You could prioritize the patients who are likely to benefit the most from the organs, that is, those whose prospects would improve the most from receiving them. You could alternatively prioritize the patients who have been registered on the waitlist or on dialysis for the longest time, which is how kidneys are currently allocated in the US. You could also prioritize patients by how ill they would be without the organ, that is, give the organs to the most severely ill patients. Ethicists try to find good arguments for and against these different distributive principles.
HEA: How do cultural considerations factor in?
Eyal: Culture enters the picture because it affects which interventions will work and which will fail. Furthermore, if for cultural reasons people feel utterly violated by an intervention that would improve their health, that’s not a happy scenario. Effective altruists should take that into account even if, like me, they are not cultural relativists.