What would happen if you gave someone money and the opportunity to give some, none, or all of the money to someone who they did not know and would never meet? You might guess that they would simply keep the money, but anyone who has heard of the Dictator Game would argue otherwise.
In a Dictator Game, used by social scientists to assess how cooperative or uncooperative people are, an experimenter gives a subject some amount of money and the opportunity either to keep it or to give some or all of it to a third party. Sometimes the third party is another individual, sometimes it is a charity. Normally, the subject’s identity is kept confidential.
Dictator Games have been played many times in many different settings. The surprising thing about them is that the “dictators” usually do give at least some of their endowment to the third party. Many scholars have interpreted this finding as support for the idea that humans are inherently generous, even toward strangers who will never know who they are and so will never be able to reciprocate.
Others are not so sure. Perhaps the Dictator Game results are simply an artifact of the experimental setting and would not occur in everyday life. To test this idea, two Texas A&M anthropologists, Jeffrey Winking and Nicholas Mizer, traveled to Las Vegas, Nevada. Why Vegas? Winking and Mizer needed a setting in which it would be plausible for someone to receive a sudden windfall but where that windfall was in a form that could be used only locally. Casino chips serve the purpose quite well: They can easily be turned into cash, but only at the casino that issued them.
Winking and Mizer’s method was to play Dictator Games with people at bus stops located near casinos, but without their subjects knowing that they were part of an experiment. First, they located a bus stop with only a single individual waiting. Next, Winking would approach the bus stop, pretend to have received a call on his cell phone, and take a few steps away. Then Mizer would approach the same bus stop, talking on his cell phone. When he got near the subject, he would pretend to notice twenty dollars’ worth of chips in his pocket. He would then claim to the subject that he was late for a ride to the airport and ask whether he or she wanted the chips, which he did not have time to cash in. Half of the time, that was all he said. The other half of the time, he also said, “I don’t know, you can split it with that guy however you want,” thus making sure that the subject was aware of Winking’s presence.
Regardless of whether subjects were reminded about Winking’s existence, they never gave him anything. Not once.
Well, but that’s Vegas, right? Everybody there – and perhaps most of all people waiting at bus stops rather than driving their own cars – is looking to get rich quick. It’s not surprising, then, that everyone kept all the chips Mizer gave them. Right?
Wrong. Winking and Mizer also played traditional Dictator Games with a sample of people waiting at bus stops in Las Vegas. Unlike the first group of subjects, these subjects were fully aware from the start that they were taking part in an experiment. Otherwise, the games were kept as similar as possible to the natural-field version. For example, only chips were used, never cash. If a subject decided to give any of his or her endowment away, the recipient would be a randomly chosen person waiting at a bus stop in Las Vegas, just as before.
In these games, most people (83.3%) did give chips away to a stranger. The median donation was five dollars. This is similar to what has been seen in many Dictator Game experiments conducted in laboratory settings. The fact that these games were played in Las Vegas did not make much of a difference.
These results provide plenty of reasons to doubt whether laboratory Dictator Game results can be generalized to real-world settings, and plenty of encouragement for scholars to try to conduct more experiments in which the participants are unaware of the experiment itself.
They also show that, at least when scholars as clever as Winking and Mizer are involved, what happens in Vegas can teach us a lot about human behavior.
Winking, Jeffrey, and Nicholas Mizer. 2013. Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002
At our recent NESCent meeting on cooperation, we invited all the participants to write guest blogs about their research. The second participant to accept our invitation is Ben Purzycki of the University of British Columbia. Ben’s blog focuses on religious piety, trust, and cooperation in the Tyva Republic, which is located in Siberia.
By Benjamin Grant Purzycki
People in every society face similar challenges when trying to cooperate: Whom can you trust? Who will have your back? How can you avoid getting ripped off? Although these dilemmas are universal, each society also faces its own locally specific versions of them. How do people overcome the dilemmas inherent in social living, and how do they contend with the local forms those dilemmas take?
For a long time, observers have recognized the significant impact religion has on human solidarity (the word “religion” itself likely derives from the Latin root ligare, meaning “to bind,” and many still view religion as the ligature of human society). While the precise mechanics of the relationship between this solidarity and other domains of human experience have been debated, and while social scientific theories come in and out of fashion, researchers keep returning to the idea that religion is inextricably linked to human sociality, cooperation, and coordination. But how?
One contemporary answer to this question lies in the nature of religious ritual. Many researchers now see the costs of religious ritual as crucial to ensuring that people play nice when they otherwise might not. According to the costly signaling theory of religious ritual, ritual costs reliably convey commitment to a tradition and its adherents; only people who are seriously committed would bother to sacrifice a goat, pay money to a church, or devote large amounts of time to pleasing supernatural agents. Central to this line of research is the idea that engaging in rituals convinces other people that you are the real deal: someone making such costly commitments is going to have your back when you need it. If this is the case, then people who bear such costs ought to be perceived as more trustworthy.
But again, if the “flavor” of challenges to coordination and cooperation changes from society to society (and from time to time), and religion can overcome challenges inherent in human sociality, then religious systems ought to evolve toward overcoming locally specific challenges. I’ve been trying to find out whether this is true in the Tyva Republic of southern Siberia (the same “Tuva” of Richard Feynman and Ralph Leighton fame).
Many rural Tyvans are pastoralists who move up to four times per year among specific locations. While the land tenure system and the distribution of resources have changed radically over the years (e.g., the Qing Dynasty’s imposed district system, the Buddhist lama tribute system, the rise and fall of the Soviet system, and so forth), and while the Soviets actively suppressed religion, local animist-totemic expression in the form of ritualized cairn piety has curiously persisted.
One locally specific challenge to human coordination and cooperation is one that faces pastoralists around the world: competition over access to pasture and livestock. So, the question becomes, how can people minimize the deleterious effects of the conflicts that stem from such competition? Again, if religion softens the blows of conflict brought out by being a social animal, has the Inner Asian religious system succeeded in doing this and, if so, how?
Many Tyvans engage in a form of cairn piety found throughout Inner and Central Asia. The collective rites at these cairns are conducted seasonally (typically in spring, right before pastures start becoming green again and many new livestock are born, i.e., right after Siberian winters). More individualized rites are conducted sporadically during travel. So, if you’re traveling to another pasture because your current one is getting taxed, you might pass a cairn on a mountaintop where you stop and make an offering of money, silk, incense, tobacco, and/or a clutch of hairs from the tail of your horse.
Image: Cairn (ovaa) in central Tyva (2009)
These cairns are not simply trail markers (I really doubt that pastoralists need any help with navigation); they are quite typically placed on the borders of other people’s territory. Cairns (ovaalar) are devoted to local spirit-masters (cher eeleri) who are acutely concerned with ritual behavior and the maintenance of natural resources (e.g., they don’t like it if you sully a river or hunt too many deer). Interestingly, even though Tyvans do not explicitly state that spirits are concerned with things like theft, murder, generosity, kindness, or trustworthiness, such spirits do appear to tap into these domains upon closer scrutiny.
To recap, we have the socioecological problem of negotiating pastureland and keeping herds safe from unsavory others. We have ritual cairns located on the borders of herders’ territory, and these cairns are associated with supernatural agents who really appreciate it when you make them offerings upon passing cairns devoted to them. Tellingly, the archaeology of the cairn system suggests that it co-evolved with the development of pastoralism in the region. So, does participation in cairn rituals somehow indicate that someone is worth trusting?
Inspired by such questions, Tayana Arakchaa and I sought to find out whether or not this was the case.
We conducted a simple between-participants study to see if Tyvans would trust a hypothetical cairn practitioner more than a) ethnic Tyvans who do not engage in such practices, b) ethnic Tyvans who have converted to Christianity and do not engage in such practices, and c) Christian ethnic Russians who do not engage in such practices. As it turns out, they do. Tyvans who regularly stop at ritual cairns and pay their respects to resident spirits are perceived as more honest and more likely to return borrowed money or participants’ lost purses or wallets. Participants were significantly more likely to say they would lend money to the regular cairn practitioner. Moreover, participants were even more likely to trust this hypothetical individual to babysit their children!
Recall that even though spirit-masters aren’t explicitly said to be concerned with things such as trust or reliability, their knowledge base and concerns gravitate toward those domains under controlled conditions. This suggests that supernatural agents, even when cultural consensus holds that they aren’t concerned with the things the Abrahamic god cares about, still prime psychological “moral” systems. Spirit-masters are concerned with ritual behaviors at designated cairns placed on territorial borders, and according to our study, engaging in such rites conveys a sense of trustworthiness above and beyond that accorded to individuals who don’t engage in them (for various reasons, or for no reason at all). People are thought of as more reliable if they regularly engage in such practices.
The study tested predictions generated by solidarity theories of religious ritual, but was limited to situating what might be human psychological universals within a local socioecological context. As always, many questions remain. In my view, there hasn’t been a better—or more crucial—time to be asking them.
At our recent NESCent meeting on cooperation, we invited all the participants to write guest blogs about their research. The first participant to accept our invitation is Shane Macfarlan of the University of Missouri. Together with his collaborator Mark Remiker of the Oregon Rural Practice-Based Research Network, Macfarlan wrote the following blog about reputations and cooperation on the Caribbean island of Dominica. Enjoy!
by Shane J. Macfarlan and Mark Remiker
Most people come to understand reputations through their experience of having either a good or a bad one – just think about the positive effects of a good credit score or the negative effects that come with a “bad” reputation in high school. However, for many scientists, reputations represent a fascinating area of inquiry. Reputations likely play a critical role in the evolution of human cooperation and our capacity for language. They are essential for Internet transactions (just think eBay), can boost resource conservation, and have even been linked to mental and physical health in children and adults. Reputations are so central to human functioning that entire businesses have been created with the sole purpose of managing other people’s and companies’ reputations (e.g., www.reputation.com). Despite the importance of reputations, little empirical evidence exists about the behavioral processes that lead to a person’s reputation. Even less is known about reputation dynamics. For example, once a person acquires a reputation, is it possible to change it? If so, what are the processes that drive reputation change?
Biologists, economists, and psychologists have been interested in these questions for some time; however, most research on the topic has involved either mathematical models or laboratory experiments. Although useful in their own right, both approaches have shortcomings: mathematical models tend to be highly simplified and include assumptions about reality that may not be justified, while laboratory experiments often involve total strangers interacting with one another in settings that do not resemble natural human contexts. This is where a trained anthropologist can be really useful. For example, modelers and experimentalists have assumed that reputations for cooperation are related to the number of cooperative acts an individual performs – the more acts of generosity a person performs, the better their reputation. Fairly straightforward, right? Well, imagine the following scenario: two people each have ten acts of generosity they can bestow upon others. Individual one provides all ten acts of generosity to a single person, while individual two allocates one act of generosity to each of ten different people. Which one should have the better reputation? If one act of generosity is just as good as another (as the modelers and experimentalists assume), then both should end up with the same reputation. This interpretation didn’t jibe with my colleagues’ and my understanding of how reputations form, which is based on gossip and the social transmission of information. So we set out to uncover the link between cooperative behavior and reputations in a naturalistic human setting – a rural, Afro-Caribbean community on the island of Dominica.
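The thought experiment above can be made concrete with a toy tally. The helper names and logged acts below are entirely hypothetical, invented only to contrast a count-of-acts measure with a count-of-partners measure:

```python
from collections import Counter, defaultdict

# Hypothetical log of (helper, person_helped) pairs: "ind1" helps one
# person ten times; "ind2" helps ten different people once each.
acts = [("ind1", "A")] * 10 + [("ind2", p) for p in "ABCDEFGHIJ"]

total_acts = Counter(helper for helper, _ in acts)
partners = defaultdict(set)
for helper, helped in acts:
    partners[helper].add(helped)

# Under an "acts count" model, the two look equally generous...
assert total_acts["ind1"] == total_acts["ind2"] == 10
# ...but measured by breadth, ind2 reaches ten times as many people.
assert len(partners["ind1"]) == 1
assert len(partners["ind2"]) == 10
```

The Dominica results described next suggest that it is the second measure, breadth of helping, that actually tracks reputation.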
Dominica, which is not to be confused with the Dominican Republic, is located in the Lesser Antilles between Guadeloupe and Martinique. The village of Bwa Mawego (a pseudonym we use to protect the anonymity of the residents) is one of the most remote communities on the island. The population consists of approximately 400 people, most of whom are engaged in small-scale farming. The primary crop people grow is the bay laurel (Pimenta racemosa), known locally simply as the bay tree. Farmers harvest the leaves from bay trees and steam-distill them to produce an essential oil that is sold on the international commodities market as an ingredient in the cosmetics industry (usually in aftershave lotion for men). Because the work is so difficult, individual farmers require assistance from other community members when they distill bay oil. Previous research of ours demonstrated that men with reputations for being really cooperative received more assistance when they distilled bay oil, while people with bad reputations received very little. This was a perfect context for examining the link between cooperation and reputations. For ten months we followed 53 men who were engaged in bay oil distillation and recorded the number of people they helped and the number of acts of cooperation they performed. Next, we asked community members to rate the 53 men’s reputations for cooperativeness. Once this was completed, we followed the same men for another ten-month labor period and then had community members rate the cooperativeness of the men a second time. The results were quite interesting. Men who helped a large number of people, not those who performed the most cooperative acts, had the best reputations. This finding suggests that providing ten acts of generosity to a single person is not equivalent to providing one act of generosity to each of ten people. Helping more people is the key to a better reputation.
Additionally, young men had better reputations than older men. Generally, a man’s reputation persisted over time – if a person was considered cooperative in the first ten-month period, community members rated him as cooperative in the second ten-month period, irrespective of his behavior. However, community members did change their minds about some men’s cooperativeness. If a guy increased the number of people he helped between the two time periods, community members thought more highly of him.
There are lessons to be learned from these findings: 1) if you want a good reputation, help as many different people as possible; 2) reputations are sticky – once you acquire one, it is hard to shake; and 3) if you get a bad reputation, all hope is not lost; people may be willing to change their minds about you, but you have to work hard to change them. To learn more about this research, click on the following link: http://rspb.royalsocietypublishing.org/content/280/1761/20130557.abstract
How could we better understand human cooperation? What do scholars in other disciplines know that we may have overlooked? At a meeting earlier this month at the National Evolutionary Synthesis Center (NESCent), we and other scholars from around the world worked to answer these questions. NESCent, whose mission is to foster cross-disciplinary research on evolution, is jointly operated by Duke University, the University of North Carolina at Chapel Hill, and North Carolina State University, and is sponsored by the National Science Foundation. As a synthesis center, NESCent is particularly interested in promoting “the synthesis of information, concepts and knowledge to address significant, emerging, or novel questions in evolutionary science and its applications” (NESCent.org).
One of the ways in which NESCent pursues its mission is by funding “Catalysis Meetings.” Catalysis Meetings typically involve about thirty scholars and are focused on major questions or research areas related to evolutionary biology. The idea of a Catalysis Meeting is clearly indicated by the name: They are meant to catalyze change in the evolutionary sciences by bringing leading scholars on a specific topic together for a few days to focus on an area of common interest.
Inspired by the experience of writing our new book Meeting at Grand Central: Understanding the Evolutionary and Social Roots of Cooperation (Princeton, 2013), we applied for and received permission to run a Catalysis Meeting on “Synthesizing the Evolutionary and Social Science Approaches to Human Cooperation.” The meeting was held April 4-7, 2013, at NESCent’s offices in Durham.
Including ourselves, there were thirty-two scholars in attendance. Our invitees represented a variety of disciplines (anthropology, political science, psychology, and biology), geographical locations (US, Japan, Hungary, Denmark, and Canada), topical interests, and career stages. We asked twelve mostly junior participants to give presentations; we cast more senior scholars as discussants. Here is a list of the papers presented at the meeting:
“We are hardwired not to be hardwired,” by Darren Schreiber (Central European University, Hungary); Discussant: Daniel Hruschka (Arizona State)
“Marking the edge of many circles: Networks, institutions, and the emergence of cooperation in groups,” by Drew Gerkey (Oregon State); Discussant: Beth L. Leech (Rutgers)
“Egalitarianism facilitates cooperation in both institutional and non-institutional environments,” by Chris Dawes (New York University); Discussant: Brian Hare (Duke)
“Cooperative reputations, labor exchange, and mutual insurance in a Dominican community,” by Shane MacFarlan (Missouri); Discussant: Richard Sosis (Connecticut)
“Innate, individual differences in the emotional response to political contention,” by Jaime Settle (William & Mary); Discussant: Masanori Takezawa (Hokkaido University, Japan)
“Adaptations for social exchange and social welfare institutions,” by Michael Bang Petersen (Aarhus University, Denmark); Discussant: David Lowery (Penn State)
“Showing you care: The social context of household contributions to charity,” by Wesley Allen-Arave (New Mexico); Discussant: Virginia Gray (University of North Carolina – Chapel Hill)
“The side-taking function of morality,” by Peter DeScioli (Harvard); Discussant: John Hibbing (Nebraska)
“The behavioral ecology of religious leadership: Charisma, status, and cooperation in Candomblé,” by Montserrat Soler (University of California – Santa Barbara); Discussant: Ben Purzycki (British Columbia)
“Gossip, reputation, and resource allocation,” by Nicole Hess (Washington State – Vancouver); Discussant: Patricia Hawley (Kansas)
“Generosity or reciprocity: Resource transfers and risk pooling,” by C. Athena Aktipis (Arizona State); Discussant: Adrian Jaeggi (University of California – Santa Barbara)
“Cooperation in a complex adaptive system: The case of mobile pastoralists in the far north region of Cameroon,” by Mark Moritz (Ohio State); Discussant: Jacopo Baggio (Arizona State)
In addition to the presentations, we also organized breakout sessions for working groups on six more specific topics within the overall theme:
Religion and cooperation: Lee Cronk (Rutgers), Rolando de Aguiar (Rutgers), Ben Purzycki (British Columbia), Montserrat Soler (University of California – Santa Barbara), and Richard Sosis (Connecticut)
Risk pooling and food sharing: C. Athena Aktipis (Arizona State), Wesley Allen-Arave (New Mexico), Drew Gerkey (Oregon State), Matthew Gervais (University of California – Los Angeles), Padmini Iyer (Rutgers), Adrian Jaeggi (University of California – Santa Barbara)
Coalitions and agenda-setting: Mark Flinn (Missouri), Virginia Gray (University of North Carolina – Chapel Hill), Beth L. Leech (Rutgers), David Lowery (Penn State), Michael Bang Petersen (Aarhus), and Darren Schreiber (Central European University)
Emergence and power laws: Jacopo Baggio (Arizona State), Frank Baumgartner (University of North Carolina – Chapel Hill), Daniel Hruschka (Arizona State), and Mark Moritz (Ohio State)
Morality, reputations, and cooperative partner choice: Peter DeScioli (Harvard), Nicole Hess (Washington State – Vancouver), John Hibbing (Nebraska), Shane MacFarlan (Missouri), and Masanori Takezawa (Hokkaido)
Cognitive and biological underpinnings of institutions: Chris Dawes (NYU), Carly Jacobs (Nebraska), Brian Hare (Duke), Patricia Hawley (Kansas), Jaime Settle (William & Mary), and Jingzhi Tan (Duke)
The meeting was a wonderful experience for us and, we hope, for our participants as well. Possible outcomes of the meeting include new collaborations, applications for future Catalysis Meetings on more specific topics, and a general sense of excitement regarding the prospect of studying human cooperation in ways that combine insights from both the evolutionary and social sciences.
Acknowledgements: We would like to thank not only our participants but also NESCent for sponsoring the meeting. NESCent’s Logistics Manager Danielle Wilson was particularly helpful in making sure that the meeting came off without a hitch. We would also like to thank Peter Ungar, Clifton Ragsdale, Alan Bergland, and John Logsdon for sharing with us the proposals and agendas they prepared for their own NESCent Catalysis Meetings.
A common theme in the evolutionary and economic literatures on human cooperation is the idea that we routinely engage in altruistic punishment, i.e., the infliction of harm, at some cost to ourselves, on those who have harmed others (e.g., Fehr and Gächter 2002, Boyd et al. 2003). According to this view, “people have an intrinsic motivation to punish shirkers” that stems from something beyond a desire to protect or increase their own benefits (Bowles and Gintis 2011:25). If true, then humans would have a decided advantage over other species in overcoming the collective action dilemma because it would be easy to find people willing to punish free riders. This, in turn, would help explain why humans engage in so much more cooperation than do most vertebrates.
Advocates of this idea do not provide many good examples of it happening in the real world. Here is one attempt to do so: “Think of queuing as an instructive example. Telling a queue jumper to stand in line is probably (psychologically) costly for the person confronting the queue jumper. If the queue jumper gets back into line, all people who were put at a disadvantage by the queue jumper benefit” (Gächter 2007:30). As people with some personal experience chastising queue jumpers, we find this example wanting: the psychological cost is minor and often offset by a change in the queue jumper’s behavior. Furthermore, if we did evolve to be altruistic punishers, shouldn’t the act of confronting a queue jumper be psychologically rewarding rather than costly? We also note that it is exceedingly rare for people to chastise – or even notice – queue jumpers who break into line behind them.
The best evidence for altruistic punishment comes from laboratory experiments, but that evidence has also come under fire. Here we focus on a recent article in Proceedings of the Royal Society B – Biological Sciences by Eric J. Pedersen, Robert Kurzban, and Michael E. McCullough.
A common source of support for the existence of altruistic punishment is the Third Party Punishment Game, which is an extension of the Dictator Game. In a Dictator Game, one individual is given money or some other resource along with the opportunity to share it with someone else. Anything not shared remains with the “dictator.” In the Third Party Punishment Game, someone witnessing the Dictator Game is given money that he or she can spend in order to punish the dictator. Again, anything not spent is retained by the punisher. Third Party Punishment Games are often run using what is called the “strategy method”: rather than being asked how much he or she is willing to spend to punish the dictator after learning the dictator’s actual allocation, the punisher is asked in advance what he or she would do for each of a variety of possible allocations.
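The payoff structure just described can be sketched in a few lines. This is a stylized illustration only; the endowments, punishment budget, and the 3-to-1 fine ratio below are common conventions in this literature, not the specific parameters of the Pedersen et al. study:

```python
def third_party_punishment(endowment, kept, punish_budget, spent,
                           fine_per_unit=3):
    """Payoffs (dictator, recipient, punisher) in a stylized
    Third Party Punishment Game.

    The dictator keeps `kept` out of `endowment`; the recipient gets
    the remainder. A third-party punisher may spend up to
    `punish_budget`; each unit spent removes `fine_per_unit` from the
    dictator, so punishment is costly to the punisher.
    """
    assert 0 <= kept <= endowment
    assert 0 <= spent <= punish_budget
    dictator = max(kept - fine_per_unit * spent, 0)
    recipient = endowment - kept
    punisher = punish_budget - spent
    return dictator, recipient, punisher

# A selfish dictator keeps all 10; a punisher who spends 2 of a
# 5-unit budget pays 2 in order to remove 6 from the dictator:
print(third_party_punishment(10, 10, 5, 2))  # -> (4, 0, 3)
```

Note that the punisher always ends up with less by punishing, which is exactly why any punishment observed here is read as altruistic, and why errors (accidental spending) can masquerade as punishment.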
Pedersen et al. identify a variety of problems with the use of Third Party Punishment Games as support for the theory of altruistic punishment. For example, people in experimental games routinely make mistakes, and because the only way that a punisher can make a mistake is to punish, any errors will be mistaken for willful punishment (mistakes by subjects in experimental games were the subject of a previous post on this blog: http://meetingatgrandcentral.com/2013/03/06/are-people-really-prosocial-ask-a-black-box/). Another problem is created by the strategy method: People may be bad at forecasting their own behaviors. The altruistic punishment thesis also relies on the idea that people feel angry toward those who behave selfishly. Pedersen et al. argue that it is important to distinguish anger from closely related emotions such as envy.
To help clarify the situation, Pedersen et al. ran three experiments. The first was a fairly standard Third Party Punishment Game, but without the strategy method. Pedersen et al. did add a couple of interesting twists. First, they combined the Dictator Game with what is sometimes called the Taking Game (Bardsley 2008): dictators had the option not only of giving money to someone else but also of taking money from that person. Second, they gave those in the “punisher” role the opportunity to reward as well as punish, thus creating the opportunity for errors to go both ways. The college student subjects of this experiment did punish those who took money from them, but they did not punish dictators who took money from someone else. In other words, they engaged in second-party but not third-party punishment. This is similar to a study that found second-party but not third-party punishment among the Hadza of Tanzania, one of the world’s few remaining hunting and gathering peoples (Marlowe 2009). Pedersen et al. also asked subjects to rate their emotional responses toward the other players, finding that punishers were motivated more by envy of the dictator’s selfish gains (a motivation that is itself selfish) than by moralistic anger.
To explore the possibility that people are bad forecasters of their own behavior, Pedersen et al. ran a second set of experiments, first with subjects recruited online through Amazon Mechanical Turk and second with college students. In both of these experiments, subjects read a description of a Third Party Punishment experiment and then were asked to respond to a series of questions about how they would feel and act in that situation. They did not actually play a game. As you might expect, people predicted that they would engage in punishment far more often than the subjects in the first experiment actually did.
Pedersen et al. conclude that “the case for altruistic punishment in humans . . . has been overstated.” An understatement if ever there was one.
Bardsley, Nicholas. 2008. Dictator game giving: altruism or artefact? Experimental Economics 11(2):122-133.
Bowles, Samuel, and Herbert Gintis. 2011. A Cooperative Species: Human Reciprocity and Its Evolution. Princeton: Princeton University Press.
Boyd, Robert, Herbert Gintis, Samuel Bowles, and Peter J. Richerson. 2003. The evolution of altruistic punishment. Proceedings of the National Academy of Sciences 100(6):3531-3535.
Fehr, Ernst, and Simon Gächter. 2002. Altruistic punishment in humans. Nature 415:137-140. doi:10.1038/415137a
Gächter, Simon. 2007. Altruistic punishment. In Encyclopedia of Social Psychology, Roy F. Baumeister & Kathleen D. Vohs, eds., pp. 30-31. Sage.
Marlowe, Frank. 2009. Hadza cooperation: Second-party punishment, yes; third-party punishment, no. Human Nature 20:417–430.
Pedersen, Eric J., Robert Kurzban, and Michael E. McCullough. 2013. Do humans really punish altruistically? A closer look. Proceedings of the Royal Society B 280:20122723. http://dx.doi.org/10.1098/rspb.2012.2723
The evolutionary view that humans are remarkably cooperative compared to our nonhuman relatives has led some scholars to conclude that we are inherently “prosocial,” i.e., that we find it easy to empathize with our fellow humans and to be generous towards them. Support for this idea has come largely from experimental economic games. These usually involve giving research subjects some amount of money and various options about what to do with it – give it to someone else, give it to a community pot, and so on. In many such games, the rational thing to do – if your motivations are selfish rather than prosocial – is simply to keep what you are given. However, it is very common for subjects to give something – often fifty percent or so – rather than nothing. Some scholars have concluded that these patterns can be explained only by invoking biological group selection.
But not all evolutionary scientists are happy with this interpretation of the data from the games. Their skepticism arises from a combination of problems with group selection and questions about how the game data are best interpreted. In a recent article in the Proceedings of the National Academy of Sciences, Maxwell N. Burton-Chellew and Stuart A. West of the University of Oxford attempted to clarify the situation with a series of public goods games. In a public goods game, players are given an initial endowment and can then contribute any portion of it, including none at all, into a common pot. The experimenter multiplies the pot by some number and then divides the resulting pot evenly among the players. Players take home however much they received from the common pot along with however much they kept back of their initial endowment. If the multiplier is less than the number of people in the group, a player earns the most, whatever the others do, by contributing nothing to the pot (and most of all if the others contribute everything). Thus, everyone is tempted to put nothing in the pot. This temptation to free-ride on the efforts of others explains why the public goods game is often used to model the collective action dilemma. If the multiplier is greater than the number of people in the group, in contrast, everyone’s best strategy is to contribute everything to the pot, and, regardless of whether one’s motivations are selfish or prosocial, there is no reason to free ride.
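The incentive logic here can be checked directly. The sketch below uses an illustrative endowment of 20 and multipliers chosen simply to fall below and above the group size of four; none of these numbers are taken from Burton-Chellew and West's experiments:

```python
def pgg_payoff(contributions, multiplier, i):
    """Payoff to player i in a linear public goods game: keep what you
    didn't contribute, plus an equal share of the multiplied pot."""
    endowment = 20  # illustrative endowment, same for all players
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return (endowment - contributions[i]) + share

others = [20, 20, 20]  # the other three players contribute everything

# Multiplier below group size (1.6 < 4): contributing less always
# pays more for the individual, whatever the others do.
low = [pgg_payoff([c] + others, 1.6, 0) for c in (0, 10, 20)]
assert low[0] > low[1] > low[2]

# Multiplier above group size (6 > 4): full contribution pays best.
high = [pgg_payoff([c] + others, 6, 0) for c in (0, 10, 20)]
assert high[0] < high[1] < high[2]
```

With the low multiplier, each unit contributed returns only 1.6/4 = 0.4 units to the contributor, so free riding dominates; with the high multiplier each unit returns 6/4 = 1.5 units, so even a purely selfish player should contribute everything.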
Burton-Chellew and West had their subjects play six different versions of the public goods game, varying not only the multiplier (high or low) but also the amount of information about the game that they provided to the players (none, some, or a lot). All games were played anonymously. Some players were given a standard version of the game in which they knew not only how much they had decided to contribute to the pot and how much they had earned but also that they were playing with other people and how much those people had contributed to the pot. Others were given all that information plus how much the others in their group had earned from each of the game’s twenty rounds, although it is worth noting that players in the “standard” condition could have calculated that information for themselves. Finally, a third of the subjects knew only what they had decided to contribute. They did not even know that they were playing a game or that other subjects were involved. Instead, they were told that they could contribute money to a black box and that it would then give money back out to them, with no information about how the amount given was determined. In fact, even players in the black box condition were playing a public goods game.
Even to people like us who share Burton-Chellew and West’s skepticism about the “prosocial preferences” hypothesis, their results were surprising. In the games with the low multiplier, there was no significant difference overall between the amounts contributed in the games with standard levels of information provided to the participants and in the black box games. That makes it hard to argue that the amounts contributed in the standard information games reflect prosociality. Instead, they may simply be mistakes. Even more surprisingly, when subjects were given more information, they actually contributed less than in either of the two other versions. Again, if their motivations were prosocial, one would have expected the additional information regarding how their contributions helped other members of the group to take home more to have led to even greater contributions.
In the games with a high multiplier, contributions in the black box games were lowest, leveling out at about fifty percent by the fifth of twenty rounds of play. This may reflect the fact that players in that condition were left to their own devices to figure out what might be happening with their money. Average contributions were higher in the other two treatments, but, again, players provided with the most information contributed significantly less than those who were given slightly less information. In neither of these scenarios did average contributions reach 100 percent even though that would have made the most sense regardless of whether players’ motivations were selfish or prosocial. Lee has played public goods games in his classes with both low and high multipliers and had similar results: Even when players are in the midst of a course about cooperation involving weekly in-class experiments, many of them simply don’t get their minds around the idea that contributing everything to the common pot makes the most sense. In other words, they make mistakes. This is also consistent with previous public goods game studies with high multipliers (e.g., Kümmerli et al. 2010; see also Cronk and Leech 2013).
Advocates of the prosocial preferences hypothesis are not taking this result lying down. In an article published in Trends in Cognitive Sciences rather than PNAS, behavioral economist Colin F. Camerer of Caltech takes issue with Burton-Chellew and West’s interpretation of their findings. In Camerer’s view, the mistake hypothesis leads to the prediction that players in the games with the high multiplier should contribute similar amounts in all the treatments. In fact, the amount of information available to the players in those games did have an impact on how much they contributed, with players in the black box treatments contributing significantly less than players in the other treatments. Camerer also points to a variety of findings from other studies that, in his view, support the prosocial preferences hypothesis.
For our part, we do not fully understand Camerer’s interpretation of the mistake hypothesis. Specifically, why would that hypothesis predict similar levels of contributions between the black box and other treatments when the multiplier is high? Black box players have to figure out for themselves what is going on, and they have only twenty bewildering rounds in which to do so. Players in the other treatments, on the other hand, are told how the game works. Although some never get their minds around it, many of them do and proceed to contribute 100% of their holdings, resulting in contributions that are higher on average than in the black box treatments. Burton-Chellew and West’s findings are thus worth taking quite seriously.
Burton-Chellew, Maxwell N., and Stuart A. West. 2013. Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences 110(1):216-221.
Camerer, Colin F. 2013. Experimental, cultural and neural evidence of deliberate prosociality. Trends in Cognitive Sciences 17(3):106-108.
Cronk, Lee, and Beth L. Leech. 2013. Meeting at Grand Central: Understanding the Social and Evolutionary Roots of Cooperation. Princeton, NJ: Princeton University Press.
Kümmerli, Rolf, Maxwell N. Burton-Chellew, Adin Ross-Gillespie, and Stuart A. West. 2010. Resistance to extreme strategies, rather than prosocial preferences, can explain human cooperation in public goods games. Proceedings of the National Academy of Sciences 107(22):10125-10130.