The universalizability of effective altruism
The Boston Review’s symposium on effective altruism consists largely of the usual complaints, and I think Singer’s response has most of it covered. However, there’s one particular strand of argument that I’d like to counter: the argument that effective altruism is in some way not universalizable.
We can see this in Acemoglu’s response:
[A]lthough greater altruistic feeling and behavior should be an unmitigated good, assigning to individuals and groups the roles typically reserved for societal institutions poses some dangers … consider the long-term consequences. When key services we expect from states are taken over by other entities, building trust in the state and developing state capacity in other crucial areas may become harder.1
Or from commenter Ha:
In any given system, an effective altruist will not try to change the system, because it will always be more effective to help others by ‘gaming’ the system. Instead of organizing a political movement, for example, it will always be more effective to work at a bank and donate most of one’s income. Yet if everyone were to think this way, there would be nowhere to donate to.
These are clearly universalizability complaints, in that they ask “what if everyone did this?” and deduce bad consequences. This is exactly like a common argument against naive consequentialist moral theories - if one should always maximize the beneficial consequences of each act, then one should break one’s promises the moment that more good can be derived by doing so, and if everyone did that then verily the fabric of society would unravel.
This is always going to be a bad argument. Even if you catch someone out with this kind of argument, at best you have shown that they are a bad consequentialist. Consequentialist theories which lead to bad outcomes if everyone follows them are, pretty much by definition, not doing a great job at maximizing good outcomes.2 The less naive consequentialist might realize that the institution of promising is worth committing to even at some (apparent) cost.3 This isn’t a magical dodge, it’s just being clear about what you’re trying to do and then applying a little decision theory and game theory to it.
In particular, there are some useful concepts from economics that can help us out here. Often, when we are assessing the effects of our actions, we should consider ourselves as marginal actors. That is, we are just one more extra person taking an action. This is importantly different from considering ourselves as an average actor. One way to see this is to consider an investment with decreasing marginal (there’s that word again) returns: the first person who invests \$1 gets \$10 back, the next person gets \$9, and so on. If 10 people have invested already, then the average return is \$5.50. However, if I now make myself the 11th investor, my return will be \$0. The marginal return is much lower than the average, in this case.
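To make the distinction concrete, here’s a minimal sketch in Python of the example above, assuming the k-th \$1 invested returns \$(11 − k):

```python
def payout(k):
    """Return on the k-th dollar invested (1-indexed): $10, $9, ..., then $0."""
    return max(11 - k, 0)

investors = 10
returns = [payout(k) for k in range(1, investors + 1)]  # $10, $9, ..., $1

average_return = sum(returns) / investors  # $5.50 across the first 10 investors
marginal_return = payout(investors + 1)    # $0 for the 11th investor

print(f"average return:  ${average_return:.2f}")
print(f"marginal return: ${marginal_return:.2f}")
```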
Thinking “on the margin” is very important. It can be tempting to assess how much good an organization will do with a dollar by working out how much good they’ve done already, and then dividing it by how much money they’ve spent. But that assumes that they work at a completely linear rate, and ignores any changes in circumstances that might have occurred. The real number could be better or worse: they might have had to spend a lot on overhead initially, and now be converting dollars to interventions at a more efficient rate; or they might be in the middle of overhead spending, in which case your dollar may have little marginal impact at all.4
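A toy calculation (all numbers invented for illustration) shows how the average and marginal rates come apart when a charity has sunk one-off costs:

```python
# Hypothetical: a charity spent $40k of its first $100k on one-off overhead,
# and the remaining $60k delivered 600 interventions.
total_spent = 100_000
one_off_overhead = 40_000
interventions = 600

# Average rate: total spending divided by total good, overhead included.
avg_cost = total_spent / interventions                          # ~$167 each

# Marginal rate: with overhead already paid, each further $100 buys one more.
marginal_cost = (total_spent - one_off_overhead) / interventions  # $100 each

print(f"average cost per intervention:  ${avg_cost:.0f}")
print(f"marginal cost per intervention: ${marginal_cost:.0f}")
```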
For us as small donors, this is clearly the way to think about our donations. But things look rather different if we are part of a group. A large group of people who act in similar ways functions much like a single agent with a lot of money. And at that point you stop being a marginal donor.
Consider the Gates Foundation. They have an enormous amount of money, which means that if they decide to fund something, that might make a qualitative difference to how the recipient behaves. It might mean expanding a treatment to a new area, or it might mean beginning an entirely new research program. When the Gates Foundation gives, they give enough that they don’t just get the marginal rate.
Not being a marginal donor means that you expect people to react to your donations. That may not be just the people you are donating to. This is Acemoglu’s nightmare: that should the effective altruism movement grow to such a point, people will turn to it rather than to more sustainable institutions, which will undermine them. Commenter Ha’s worry is the dual of this: that the effective altruism movement will miss out on effective non-marginal opportunities because individual members will keep donating on the margin.
The solution to this problem is just the same as for consequentialism in general. Any sensible strategy for the effective altruism movement will involve changing strategy if it becomes very large and coordinated. If you have a lump of \$10m gathered from 100 donors, then the best use of that money will probably not be the same as what each donor would do if they were giving individually. That requires coordination, and effective altruists should be looking for coordination opportunities for precisely this reason. And at that point we will certainly have to think hard about how to take advantage of the opportunities and avoid the pitfalls.
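As a sketch of why this matters, here’s a toy model (all rates and thresholds hypothetical) in which coordination unlocks an opportunity no individual donor could fund alone:

```python
# Each of 100 donors has $100k. Small marginal opportunities return 2 units
# of good per dollar at any scale; a "threshold" project returns 3 units per
# dollar, but only if it receives the full $10m at once.
donors, gift = 100, 100_000
pool = donors * gift                                  # $10m in total

MARGINAL_RATE = 2.0                                   # good per dollar
THRESHOLD_NEED, THRESHOLD_RATE = 10_000_000, 3.0

# Uncoordinated: no individual donor can trigger the threshold project.
uncoordinated = donors * (gift * MARGINAL_RATE)

# Coordinated: the pool acts as one agent and takes whichever option is better.
coordinated = max(pool * MARGINAL_RATE,
                  pool * THRESHOLD_RATE if pool >= THRESHOLD_NEED else 0)

print(f"uncoordinated: {uncoordinated:,.0f} units of good")
print(f"coordinated:   {coordinated:,.0f} units of good")
```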
So as long as we think properly about consequences, I don’t think effective altruism has a universalizability problem. Developing the coordination points to allow us to take advantage of non-marginal opportunities may be tricky, but I think we’ve already made a start with things like the Giving What We Can Trust.
1. As a side note, many of the organizations recommended by GiveWell actually work closely with existing regional institutions. Deworm The World, for example, is largely an advisory body that provides technical assistance to government programs. So I think his fear is illusory, but I shall take it seriously for the rest of this essay.
2. Similarly, Acemoglu at best shows that the current approach to effective altruism is not as effective as it could be, not that “effective altruism” itself is a bad idea.
3. The moral theorist in particular can treat their theory as a coordination point, so they can assume cooperation from other people following the same theory. This makes it particularly easy to argue that a whole community of consequentialists ought to agree that they should have an institution of promising.
4. Although in that case you could consider it to be enhancing the value of future donations.