Are people really prosocial? Ask a black box
The evolutionary view that humans are remarkably cooperative compared to our nonhuman relatives has led some scholars to conclude that we are inherently “prosocial,” i.e., that we find it easy to empathize with our fellow humans and to be generous towards them. Support for this idea has come largely from experimental economic games. These usually involve giving research subjects some amount of money and various options about what to do with it – give it to someone else, give it to a community pot, and so on. In many such games, the rational thing to do – if your motivations are selfish rather than prosocial – is simply to keep what you are given. However, it is very common for subjects to give something – often fifty percent or so – rather than nothing. Some scholars have concluded that these patterns can be explained only by invoking biological group selection.
But not all evolutionary scientists are happy with this interpretation of the data from the games. Their skepticism arises from a combination of problems with group selection and questions about how the game data are best interpreted. In a recent article in the Proceedings of the National Academy of Sciences, Maxwell N. Burton-Chellew and Stuart A. West of the University of Oxford attempted to clarify the situation with a series of public goods games. In a public goods game, players are given an initial endowment and can then contribute any portion of it, including none at all, to a common pot. The experimenter multiplies the pot by some number and then divides the result evenly among the players. Each player takes home whatever he or she received from the common pot plus whatever he or she kept back from the initial endowment. If the multiplier is less than the number of people in the group, a player does best by contributing nothing to the pot no matter what the others do – and does best of all when the others contribute everything. Thus, everyone is tempted to put nothing in the pot. This temptation to free-ride on the efforts of others explains why the public goods game is often used to model the collective action dilemma. If the multiplier is greater than the number of people in the group, in contrast, everyone’s best strategy is to contribute everything to the pot, and, regardless of whether one’s motivations are selfish or prosocial, there is no reason to free ride.
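The payoff logic above can be sketched in a few lines of Python. The group size, multiplier values, and endowment below are illustrative assumptions for the sketch, not the parameters Burton-Chellew and West actually used.

```python
def payoff(contributions, player, multiplier, endowment=20):
    """Payoff for one player in a single public goods round.

    Each player keeps whatever they did not contribute, plus an
    equal share of the multiplied common pot.
    """
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    return endowment - contributions[player] + pot_share

# Low multiplier (multiplier < group size): free-riding pays.
# Hypothetical 4-player group, multiplier 1.6, endowment 20.
selfish = payoff([0, 20, 20, 20], player=0, multiplier=1.6)     # 20 + 1.6*60/4 = 44.0
cooperator = payoff([20, 20, 20, 20], player=0, multiplier=1.6) # 0 + 1.6*80/4 = 32.0

# High multiplier (multiplier > group size): contributing everything
# pays even if no one else contributes anything.
lone_giver = payoff([20, 0, 0, 0], player=0, multiplier=6)   # 0 + 6*20/4 = 30.0
lone_keeper = payoff([0, 0, 0, 0], player=0, multiplier=6)   # 20 + 0 = 20.0
```

With the low multiplier, keeping everything beats contributing everything (44.0 vs. 32.0) regardless of what the others do; with the high multiplier, contributing everything beats keeping everything (30.0 vs. 20.0) even when no one else cooperates.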
Burton-Chellew and West had their subjects play six different versions of the public goods game, varying not only the multiplier (high or low) but also the amount of information about the game that they provided to the players (none, some, or a lot). All games were played anonymously. Some players were given a standard version of the game in which they knew not only how much they had decided to contribute to the pot and how much they had earned but also that they were playing with other people and how much those people had contributed to the pot. Others were given all of that information plus how much the others in their group had earned in each of the game’s twenty rounds, although it is worth noting that players in the “standard” condition could have calculated that information for themselves. Finally, a third group of subjects knew only what they had decided to contribute. They did not even know that they were playing a game or that other subjects were involved. Instead, they were told that they could contribute money to a black box and that it would then give money back to them, with no information about how the amount returned was determined. In fact, even players in the black box condition were playing a public goods game.
Even to people like us who share Burton-Chellew and West’s skepticism about the “prosocial preferences” hypothesis, their results were surprising. In the games with the low multiplier, there was no significant difference overall between the amounts contributed in the standard-information games and in the black box games. That makes it hard to argue that the amounts contributed in the standard-information games reflect prosociality. Instead, they may simply be mistakes. Even more surprisingly, subjects who were given the most information actually contributed less than those in either of the other two versions. Again, if their motivations were prosocial, one would have expected the additional information – which showed how their contributions helped other members of the group take home more – to have led to even greater contributions.
In the games with a high multiplier, contributions in the black box games were lowest, leveling out at about fifty percent by the fifth of twenty rounds of play. This may reflect the fact that players in that condition were left to their own devices to figure out what might be happening with their money. Average contributions were higher in the other two treatments, but, again, players provided with the most information contributed significantly less than those who were given slightly less information. In neither of these scenarios did average contributions reach 100%, even though that would have made the most sense regardless of whether players’ motivations were selfish or prosocial. Lee has played public goods games in his classes with both low and high multipliers and had similar results: Even when players are in the midst of a course about cooperation involving weekly in-class experiments, many of them simply don’t get their minds around the idea that contributing everything to the common pot makes the most sense. In other words, they make mistakes. This is also consistent with previous public goods game studies with high multipliers (e.g., Kümmerli et al. 2010; see also Cronk and Leech 2013).
Advocates of the prosocial preferences hypothesis are not taking this result lying down. In an article published in Trends in Cognitive Sciences rather than PNAS, behavioral economist Colin F. Camerer of Caltech takes issue with Burton-Chellew and West’s interpretation of their findings. In Camerer’s view, the mistake hypothesis predicts that players in the games with the high multiplier should contribute similar amounts in all the treatments. In fact, the amount of information available to the players in those games did have an impact on how much they contributed, with players in the black box treatments contributing significantly less than players in the other treatments. Camerer also points to a variety of findings from other studies that, in his view, support the prosocial preferences hypothesis.
For our part, we do not fully understand Camerer’s interpretation of the mistake hypothesis. Specifically, why would that hypothesis predict similar levels of contributions between the black box and other treatments when the multiplier is high? Black box players have to figure out for themselves what is going on, and they have only twenty bewildering rounds in which to do so. Players in the other treatments, on the other hand, are told how the game works. Although some never get their minds around it, many of them do and proceed to contribute 100% of their holdings, resulting in contributions that are higher on average than in the black box treatments. Burton-Chellew and West’s findings are thus worth taking quite seriously.
Burton-Chellew, Maxwell N., and Stuart A. West. 2013. Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences 110(1):216-221.
Camerer, Colin F. 2013. Experimental, cultural and neural evidence of deliberate prosociality. Trends in Cognitive Sciences 17(3):106-108.
Cronk, Lee, and Beth L. Leech. 2013. Meeting at Grand Central: Understanding the Social and Evolutionary Roots of Cooperation. Princeton, NJ: Princeton University Press.
Kümmerli, Rolf, Maxwell N. Burton-Chellew, Adin Ross-Gillespie, and Stuart A. West. 2010. Resistance to extreme strategies, rather than prosocial preferences, can explain human cooperation in public goods games. Proceedings of the National Academy of Sciences 107(22):10125-10130.