Effective altruism has seen much welcome criticism that has helped it refine its strategies for determining how to reach its goal of doing the most good—but it has also seen some criticism that effective altruists reject. We want to correct some misconceptions that we’ve become aware of so far.
In evaluating interventions, charity prioritization typically relies on the criteria of tractability, scalability, and neglectedness. The last two of these turn charity prioritization into an anti-inductive system, one in which arguments along the lines of the categorical imperative become inapplicable: you recommend an underfunded intervention, then people follow your recommendation and donate to it, then it reaches its limits in scale, and finally you have to withdraw your recommendation because it is no longer underfunded.
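The dynamic described above can be sketched as a toy simulation. All charity names and dollar figures here are hypothetical, chosen only to show how following a recommendation eventually invalidates it:

```python
def recommend(charities):
    """Recommend the charity with the largest remaining funding gap."""
    underfunded = {name: gap for name, gap in charities.items() if gap > 0}
    if not underfunded:
        return None  # nothing left to recommend: a reason to party
    return max(underfunded, key=underfunded.get)

# Hypothetical remaining funding gaps in dollars (not real figures).
gaps = {"deworming": 3_000_000, "bed nets": 5_000_000}

donation_round = 1_000_000  # donations that follow each recommendation
recommendations = []
while (pick := recommend(gaps)) is not None:
    recommendations.append(pick)
    gaps[pick] -= donation_round  # donors close the gap they were pointed at

print(recommendations)
```

Each recommendation is correct when issued, yet the very act of following it shrinks the funding gap that justified it, so the recommendation must keep shifting until every gap is closed and nothing is left to recommend.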
Imagine you are organizing a large two-day effective altruism convention. There are several hotels close to the venue, one of which is very well known and soon fully booked. Panicked attendees keep asking you what they should do, so you call the other hotels in the vicinity. It turns out there is one that is even closer to the venue with scores of rooms left. So you send out a tweet and recommend that people check in there. They do, and promptly the hotel is also fully booked, and you have to do another round of telephone calls and update your recommendation. But it is fallacious to argue that your first recommendation, or the very act of making it, was wrong to begin with just because if everyone follows it, it’s no longer valid. That’s in the nature of the beast.
Because the “if everyone did that” argument is so common, let’s give it a name: the “Kantian fallacy.” The “buy low, sell high” rule of the stock market is not wrong just because if everyone bought low, the price would not be low, and if everyone sold high, the price would not be high. Advising against the overused typeface Papyrus is not wrong just because if no one used it, it would no longer be overused. Surprising someone with a creative present is not wrong just because if everyone made the same present, it would not be surprising.
The last analogy is less fitting than the previous ones, because we don’t actually want good interventions to be underfunded. When all the governments and foundations see how great deworming is and allocate so much funding to it that hardly a worm survives, then any recommendation for more donations for deworming has to be withdrawn in favor of more neglected interventions—but that’s a reason to party!
So what would recommendations look like when malaria, schistosomiasis, and poverty are eradicated or eliminated to the extent that other interventions become superior? Or what would happen when other bottlenecks thwart further effective donations in these areas?
That future is already sending its sunny rays into the past as foundations like Good Ventures and the Gates Foundation already have much more funding available than the currently known top charities could absorb. What happens is that doing good either becomes more expensive (when you trade off cost-effectiveness for scalability) or more risky (when you trade off certainty for other qualities). The latter is the more interesting and encouraging scenario. More on that in the next section.
Finally, the arts bestow great pleasure and edification upon people, and works that effect political or social change for the better may also be cost-effective, but not all art is targeted that way. For the most part, saving a life or allowing a hundred children to go to school will do more good than a painting in a gallery. However, there’s another way to think about this. “Whoever saves one life saves the world entire,” or at least all the experiences of the world that their new life will now allow them. So by that logic, isn’t the act of giving a person access to art—say, by lifting them financially over the level of mere survival—an act of artistic creation itself? Many of us have longed to create something as magnificent as Shakespeare’s works, but few will succeed. By giving another person access to these creations or similarly great ones of their own culture, we can all create them anew.
As a pioneer in the field of charity prioritization, GiveWell had a herculean task ahead of it and very limited resources in terms of money, time, and research analysts. (This was years before effective altruism had consolidated into a movement.) Since funneling more donations to above-average charities is already better than the status quo, the team quickly learned that as a starting point they had to focus narrowly on cause areas marked by extreme suffering, interventions with solid track records of cost-effective and scalable implementation, and charities with the transparency and commitment to self-evaluation that would enable GiveWell to assess them. As it happens, these combinations were mostly found in the cause areas of disease and poverty. This decision was one of necessity at the time, but soon afterward GiveWell managed to scale up its operations significantly, so that these restrictions no longer applied.
Some of the best and most cost-effective giving opportunities may well lie in areas or involve interventions that are harder to study. Hence GiveWell has been investigating these under the brand of the Open Philanthropy Project (initially known as “GiveWell Labs”) since 2011. (That’s still before effective altruism had a name or called itself a movement.) Much scientific research promises great cost-effectiveness; so do some interventions to avert global catastrophic risks and to solve political problems. Doing the most good may well mean investing somewhere in one of these areas—where exactly, Open Phil has set out to find out.
In 2012 the effective altruism movement got its name and consequently consolidated its efforts at doing the most good. Nowhere in the movement’s agenda does it say that effective interventions need to be easy to study or quantify. In fact, opinions and preferences on what “the most good” means in practice vary (though some are shared by almost everyone). According to a 2014 survey, about 71% of the EAs in the sample were interested in supporting poverty-related causes, but almost 76% were interested in metacharity, including rationality and prioritization research, which are not easy to quantify. Between a quarter and a third were interested in each of antispeciesism, environmentalism, global catastrophic risk, political causes, and (nonexistential) far-future concerns, most of which are hard to study. There is by no means an unwarranted bias toward interventions that are easy to study; if anything, there’s a surprising tendency toward speculative, high-risk–high-return interventions.
Finally, it is not the case that GiveWell “implicitly recommends that one should support charities only in [the cause areas of global health and nutrition].” On the contrary, GiveWell (1) recommends a charity outside these areas, (2) writes on every charity review that did not lead to a recommendation that the “write-up should not be taken as a ‘negative rating’ of the charity” (emphasis in original), and (3) gives reasons why a philanthropist may legitimately choose not to donate to its recommended charities right on its Top Charities page.
Admittedly, most effective altruists are utilitarians or consequentialists. If you want to maximize happiness and minimize suffering in the world (or maximize informed and rational preference satisfaction), then it’s clear how effective altruism follows. But there are many ways to reach similar conclusions from a deontological perspective, or from within other ethical world views.
Take John Rawls in the book "A Theory of Justice". In order to find out how social institutions should be set up in society, Rawls asks us to imagine ourselves as rational and self-interested agents in what he calls the original position, where we are behind a (metaphorical) veil of ignorance. Behind the veil you do not know certain facts about yourself and your place in society. For example, you do not know whether you are rich or poor; driven or lazy; intelligent or unintelligent. You do not know anything about yourself that might tempt you to set up social institutions in a way that favors yourself. From this imagined position, you then ask yourself how the institutions of society should be set up if they are to be just. Applying this to the current world order, let us assume for a moment that we do not know whether or not we are someone who has enough money to buy food, but we do know that there are many who do not. (You do know general facts like these.) Would you as a rational and self-interested agent accept a set of social institutions that made it possible that you would die, go blind, or experience excruciating lifelong agony because you did not have access to drugs that cost less than what many others earn in ten minutes? It seems clear that if you are rational and self-interested you would not, especially since the chance of your being in this unfortunate category is greater than that of being among the lucky few who are affluent.
Other versions of deontology can also accommodate EA, Rossian pluralism for example. W. D. Ross was an important deontologist whose primary contribution to deontological ethics is arguably the concept of a prima facie duty, a duty that can be superseded by another duty. Let us say you have a duty to uphold your contract or promise, but you also have a duty to be beneficent, that is, to improve the situation of others. You probably should not give away money you need to pay your mortgage, according to Ross, but you should still give money to people who need it, and it stands to reason that that money should be given in a way that is effective.
Kantian ethics is a harder case, probably the ethical framework least likely to lead one to effective altruism, but even within Kantianism there are some reasons to become an EA. Kantians generally believe that charity is what they call an imperfect duty: you should give to charity, but how you give is up to you. There is, at least initially, no duty to give in any specific way or any specified amount. Kant was also skeptical of charity because he thought it might be detrimental to the self-respect of the recipient of the aid. It is also extremely important why you give to charity: according to Kantianism, you have to give because it is your duty, not because you want your picture in the paper, for example. All this being said, giving a lot and giving well can be seen as good supererogatory actions (above and beyond the call of duty), and some would say that this gives you a normative reason to give a lot, even if it is not your duty. Furthermore, giving one dollar once in your life might be seen as merely finding a loophole rather than truly fulfilling your duty, which raises the question of how much you should give as a Kantian, and there is no reason why that could not be quite a lot. And, of course, there is no reason to give badly.
According to the 2014 EA survey, there are even more virtue ethicists than deontologists among EAs, and there is no reason a virtue ethicist should not also give effectively. First of all, if we ask what a good or excellent charity might be like, it seems reasonable to think that a good charity is one that is good at improving lives, since improving lives can be said to be the function of a charity. Furthermore, it is commonly accepted that compassion and the ability to empathize with others are virtues. It is important to have each virtue in the right amount and the right way, and you could clearly argue that compassion should not stop at the borders of your own country, and that the ability to feel empathy with people who are not immediately close to you makes for a more excellent virtue than only being able to feel it for those you see, love, or have some other relationship with. This does not mean that it cannot be better, from a virtue-ethical perspective, to care more for those close to you; it only means that the difference has to be proportional. You might say that caring so little for those distant from you that they die in vast numbers is caring too little for them, especially since it does not make much of a difference to your close ones if you give a little bit more, say 10%.
There are many other ethical views besides consequentialism, deontology (the ethics of duty), and virtue ethics: nihilism, relativism, religious ethics (divine command ethics, for example), particularism, feminist ethics, the ethics of natural law and natural rights, as well as common-sense morality, which is basically just following your gut and thinking once in a while. The only one that clearly offers no route to becoming an effective altruist is nihilism, the view that nothing is right or wrong and/or that nothing has any intrinsic value. Then again, it does not say you should not become one either.
Religious ethics is also an interesting case. On the one hand, it often tells you to do unto others what you would want them to do to you, the golden rule. On the other hand, believers may also believe in hell, in which case "saving" only one person would be more effective than saving the entire world from hunger, since suffering in hell is eternal.
It is entirely possible to be a deontologist or virtue ethicist and not believe in effective altruism, so a full ethical discussion of the subject would have to be much more expansive, but it should be obvious that there are good common-sense reasons to join no matter where you stand ethically. This answer focuses on the effective-giving aspect of effective altruism; there are many commonly held beliefs within EA that do not have much to do with charity, such as animal welfare and concern for future generations. These are more controversial, but neither is rationally required for being an EA.
See also this article on Effective Altruism and Consequentialism, which does not address this issue directly but makes clear in passing that EA does not presuppose consequentialism, while trying to identify the particular beliefs within EA that do presuppose it and arguing that those should be discounted.
It is one of the unfortunate truisms of the human condition that no market is perfect, but the charity market is particularly and abysmally imperfect. If someone wants to buy a solid-state drive, they might check, among other things, the price per gigabyte. $.96 per gigabyte? Rather expensive. $.38 per gigabyte? Wow, what a bargain! When people want to invest in a company, they check the company’s earnings over the past years, compare them to the stock price, and decide whether it’s a bargain or usury. Or if you have a headache, do you buy a homeopathic remedy that does nothing for $20 or Aspirin for $5?
I wasn’t there when it happened, but I imagine when the first effective altruists wanted to donate they called charities and were like “Hi, I like what you do and want to invest in your program. Could you give me your latest impact figures?” I imagine the responses ranged from “Our what?” through “You’re the first to ever ask for that” to “We have no idea.”
When the charities that run the programs don’t even know if they do anything good or anything at all in proportion to their cost, then how are donors supposed to find out? They would have to draw on the research of experts in the field and, to some extent, would have to become experts themselves.
Prioritization organizations want to change that. They dangle a pot of money promised to the charities that make the best case for being highly effective. That way they incentivize transparency, self-evaluation, and optimization. Eventually, we hope, this will encourage a charity market that makes it much easier for everyone to recognize the charities with the most “bang for the buck.”
Peter Singer (in an interview with Forbes) also puts the work of prioritization organizations into perspective: “You might make that same criticism of Consumer Reports. Their experts decide what washing machines or cars we should buy, but that doesn’t take away the right of the consumer to decide. Who would want to pay twice as much for a washing machine that doesn’t wash as well as a cheaper one? And why give to a charity that’s not effective?”
In a review of Peter Singer's book "The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically", John Gray argues that effective altruism is nothing new and is basically the same thing as Auguste Comte's positivist church. He further claims that effective altruism resembles Comte's movement in that it treats ethics like a branch of science.
Note on the consequentialism section: See http://effective-altruism.com/ea/eg/effective_altruism_and_consequentialism/