Open access peer-reviewed chapter

Coordination Concerns: Concealing the Free Rider Problem

Written By

Adriana Alventosa and Penélope Hernández

Submitted: 19 April 2018 Reviewed: 25 May 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.79126

From the Edited Volume

Game Theory - Applications in Logistics and Economy

Edited by Danijela Tuljak-Suban


Abstract

In our daily routine, we face many situations in which we need to coordinate in order to cooperate, such as maintaining friendships or working in teams. If we all put in the effort to do so, we efficiently reach the best possible outcome for everybody. However, we are occasionally tempted to look the other way and let the rest solve the problem. Such situations are called social dilemmas, and this passive attitude of exerting no effort toward the common goal while benefiting from the effort of others is known as the free rider problem. The purpose of this chapter is to present this issue by means of a public goods game and to review the different mechanisms the experimental literature has shown to conceal the free rider problem. Examples of this nature are maintaining relationships over prolonged periods of time, setting a minimum threshold that must be reached before the rewards of the common project can be enjoyed by everybody, or enabling transparent communication. Additionally, sanctioning opportunities promote cooperation even further. Besides economic penalties, punishment can occur in the social domain through hostility, ostracism or the breaking of bonds. Moreover, it can be implemented either individually or through centralized institutions endowed with sanctioning power.

Keywords

  • public goods game
  • coordination
  • free riding
  • heterogeneity
  • social preferences
  • punishment

1. Introduction

Coordination is a key element in most of our day-to-day interactions with other individuals. Think about the chief executive officer (CEO) who has to bring together different working units, the managers at each of those units organizing their teams or the workers in each of those teams trying to work together with a common goal. But coordination is not only crucial during working hours; think about reaching an agreement at your neighborhood community about setting up a new elevator, renewing the contract to the maintenance staff or modernizing the almost torn-down façade. Remember having tried to organize a meeting with your classmates to reminisce and catch up or just think about how instruments synchronize in a sonata.

Regardless of whether the environment is a working or a social one, coordination is pursued as a guarantee of efficiency: maximizing utility while using the minimum resources to do so. Recall the firm example, for instance. Any profit-oriented firm will try to get the most out of its profits with the least assets and productive factors possible. Time is one of the most valuable of these assets in a competitive context where rivals race to be original and inventive. In this setting, coordinated teams will work faster and will avoid the duplications and shortages common in teams that lack organization. Think about going to a restaurant and receiving your drinks twice, or not receiving them at all.

From a social perspective, if neighbors propose the modernization of that façade, the proposal may be driven by their aesthetic sense, but there is probably also a component of wanting their property to appreciate in value. Evidently, the upgrade should be done at the minimum cost that actually produces the expected revaluation.

In the field of game theory, the public goods game (PGG) has been the baseline to reproduce any of the situations previously described. This simple game, to be explained in the following section, clearly captures the importance of coordinated actions in terms of efficiency, along with the associated issues that coordination raises: if coordination is so beneficial, why is it sometimes difficult to achieve? Intuitively, coordination is costly. Coordination requires effort, time and resources. More importantly, given its social benefits, you cannot prevent somebody who does not contribute those ingredients from taking advantage of the outcome. Recall the elevator example presented earlier: you cannot prevent a neighbor who has not paid for the installation from using the elevator. The fact that coordination is costly and that its outcomes are non-excludable tempts selfish individuals to free ride on coordination. Either way, they are going to take the elevator.

In the following section, we formally describe the PGG and its theoretical predictions, illustrating the free rider issue. Furthermore, we present a comparative statics analysis using recent scientific findings regarding the different elements that define a PGG. After that, we describe the four main mechanisms the literature has proposed to address the free riding problem. From these, we single out sanctioning as the one with the most potential and devote the last section to a detailed description of the different punishment schemes that can be implemented.

2. Coordination issue

The PGG, in its simplest form, is a 2 × 2 game in which two players must simultaneously decide whether or not to contribute to a public good. The best outcome, where both players receive the highest payoffs, is reached if both of them contribute. However, if a player believes the other one is going to contribute, he receives a higher payoff by not contributing, given that contributing is a costly action and the public good is going to be funded thanks to the other player’s contribution. Finally, if neither of them contributes, the public good’s costs are not covered. In normal form, a PGG would look like Table 1, where the row player is Player 1, the column player is Player 2, and contribute (C) and not contribute (NC) are the possible actions for each player. Each payoff cell contains the payoff for Player 1 followed by the payoff for Player 2, for every possible combination of actions. Notice that when both players coordinate in contributing, they both receive a payoff of 2. However, if one of them contributes and the other does not, the player who has contributed bears all the costs and is left with a payoff of 0, whereas the player who has free ridden by not contributing benefits from the public good without engaging in the costly action, that is, he receives a payoff of 3. Finally, if neither of them contributes, they both receive 1, which is a worse outcome than both contributing and earning 2. Both players have perfect information about the payoffs in each possible scenario.

Table 1.

                 Player 2
                 C        NC
Player 1   C     2, 2     0, 3
           NC    3, 0     1, 1

Classic public goods game.

In game theory, the standard solution concept of a simultaneous game with perfect information is the Nash Equilibrium (NE), named after John Forbes Nash Jr. A pair of actions is an NE if no player has a profitable unilateral deviation from it. Assuming both players are rational and have selfish preferences, that is, they maximize their material payoff, the NE of this game is that both players choose not to contribute, (NC, NC), receiving a payoff of 1 each. One could think that the solution of this game is that both players contribute to the public good, as they are both better off than when neither contributes (2 > 1). However, notice that if either player believes the other is going to contribute, he has an incentive to free ride by not contributing and make a payoff of 3 instead of 2. Therefore, (C, C) cannot be an NE. Since both players are rational, both apply this reasoning, so they end up at (NC, NC) with a payoff of 1 each. This pair of actions is an NE because neither player has an incentive to deviate to contributing and make a payoff of 0.
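This best-response logic can be checked mechanically. The following is a minimal sketch in Python, using the payoffs of Table 1 (the function and variable names are our own), that enumerates the pure-strategy equilibria of the 2 × 2 game:

```python
from itertools import product

# Payoff matrix of the classic 2x2 public goods game (Table 1):
# each entry holds (payoff of Player 1, payoff of Player 2).
C, NC = "C", "NC"
payoffs = {
    (C, C): (2, 2),
    (C, NC): (0, 3),
    (NC, C): (3, 0),
    (NC, NC): (1, 1),
}

def is_nash(a1, a2):
    """A profile is an NE if no player gains by a unilateral deviation."""
    u1, u2 = payoffs[(a1, a2)]
    best1 = all(u1 >= payoffs[(d, a2)][0] for d in (C, NC))
    best2 = all(u2 >= payoffs[(a1, d)][1] for d in (C, NC))
    return best1 and best2

equilibria = [p for p in product((C, NC), repeat=2) if is_nash(*p)]
print(equilibria)  # only (NC, NC) survives
```

Only (NC, NC) passes the check, confirming that mutual free riding is the unique pure-strategy NE of the game.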

We can generalize this simple game to a broader situation that can be applied, for instance, to a firm facing this social problem. Let us consider a group of n workers who each receive an endowment of effort. From this endowment, they must decide how much effort to keep for their own interest and how much to devote to the team project. The sum of all the effort the workers invest in the firm’s project is then multiplied by a factor and divided equally among all the workers, regardless of their contributions. The material payoff of any player is given by Eq. (1), where ω is the effort endowment every subject receives, gi is the individual contribution of subject i, and α is the marginal per capita return of the project.

π_i = ω − g_i + α Σ_{j=1}^{n} g_j (1)

The NE of this game is that every individual chooses gi = 0, even though the social optimum is reached if every individual chooses gi = ω. For both statements to hold, the marginal per capita return (MPCR) must satisfy 1/n < α < 1: since α < 1, free riding is individually dominant, while since nα > 1, full contribution maximizes the group’s total payoff.
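Both properties can be verified numerically. The sketch below evaluates Eq. (1) for illustrative parameter values of our own choosing (ω = 20, α = 0.4, n = 4, so that 1/n < α < 1):

```python
def payoff(i, g, endowment=20.0, alpha=0.4):
    """Material payoff of Eq. (1): pi_i = w - g_i + alpha * sum_j g_j."""
    return endowment - g[i] + alpha * sum(g)

n = 4    # group size; with alpha = 0.4 we have 1/n < alpha < 1
w = 20.0  # effort endowment

# Free riding is individually dominant: whatever the others contribute,
# cutting one's own contribution raises one's own payoff (alpha < 1).
others = [10.0, 5.0, 0.0]
assert payoff(0, [0.0] + others) > payoff(0, [w] + others)

# Yet full contribution maximizes total group payoff (n * alpha > 1).
total = lambda g: sum(payoff(i, g) for i in range(n))
assert total([w] * n) > total([0.0] * n)
print(total([w] * n), total([0.0] * n))
```

Whatever the others do, reducing one’s own contribution pays individually, yet the group as a whole earns most under full contribution: this tension is precisely the social dilemma.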

Consider, for instance, a task where a working team of three salesmen must work jointly to reach a particular goal in sales. Every salesman will get an increase in their salary, proportional to the aggregate sales achieved by the group, which makes the goal beneficial for all of them. However, achieving that goal implies a substantial level of effort. The social optimum would be reached if all of them cooperated to achieve that task, attaining the maximum salary increase possible. However, they are benefitting from the aggregate sales, that is, from the same social project. Thus, any of them could decide to free ride on effort and take advantage of the sales the other two members achieve. However, if the three of them think in the same way, nobody would devote any effort and there would be no salary increase.

Despite these predictions, experimental evidence has repeatedly and extensively documented deviations from the NE. In particular, positive levels of cooperation in public goods games are usually achieved: subjects typically contribute between 40 and 60% of their endowment.

In the following, and before moving on to how the coordination issue can be concealed, let us see what recent experimental evidence has shown about variations in the presented setup. There are four elements that we consider homogeneous for all subjects in this standard game: endowment, MPCR, information, and preferences. In other words, we consider that all subjects have the same effort capabilities, that the return of the public good is common for everybody, that all of them are provided the same information about the project, and that they all want to maximize their material payoff. In this section, we aim to see how the violation of these homogeneity assumptions changes the outcome of the game in an experimental setting.

2.1. Wealth heterogeneity

Wealth homogeneity is one of the first assumptions that raises suspicions. Considering equally rich societies is an unrealistic and rather utopian assumption. If we accept that we have different levels of wealth but that we are able to group ourselves into homogeneous groups, there are significant differences between what low-endowment and high-endowment groups contribute to public goods. In particular, low-income groups tend to over-contribute while high-income groups under-contribute. In other words, the proportion of their endowment that poorer groups devote to public goods is higher than that of richer groups [1, 2, 3, 4]. Additionally, as proven in [5], this result is robust to the origin of the endowment. In other words, it does not matter whether subjects have had to work for that endowment, so that it is in fact income, or whether it has simply been given to them effortlessly as wealth. In either case, those whose endowment is lower contribute to a larger extent than those whose endowment is higher.

Nevertheless, one could argue that we usually face coordination issues in heterogeneous groups. In other words, we are sometimes not able to classify ourselves into low- and high-wealth groups and, in fact, belong to unequal communities. In this case, heterogeneous groups contribute less than homogeneous groups [5]. This points to an inequality issue, where diversity in wealth can be detrimental to group performance.

These results highlight the importance of working in homogeneous groups. Going back to the salesman example, perhaps not all of them have the same time availability or the same effort capability. If we take these features as given, the unit manager should try to form homogeneous groups according to the workers’ characteristics in order to maximize total sales.

2.2. Productivity heterogeneity

The second dubious assumption is that the public good’s productivity, captured by the MPCR, is common to everybody. This implies that everybody values the public good in the same way and, therefore, obtains the same return from it. If we consider a neighborhood community discussing the elevator, it is understandable that the return somebody living on the first floor receives from having an elevator is not the same as the return of somebody living on the top floor. If, instead, we consider a working team incentivized with a salary increase, some members could argue that devoting that extra effort is too time-consuming and that they prefer to spend that time with their families rather than earning more money. Furthermore, personal circumstances affect our daily attitude, concentration and productivity at work, and they do not affect all of us equally.

In this line, the literature has demonstrated that when subjects are endowed with different productivity levels, low-MPCR subjects contribute less than high-MPCR subjects. This holds even in heterogeneous groups mixing different productivity levels [6]. However, heterogeneous groups contribute less than homogeneous ones, analogously to the case of wealth heterogeneity. Moreover, as [7] stresses, these lower contributions are not a consequence of the heterogeneity itself but of the nature of such asymmetry.

This implies that teams should share interests, motivations, and goals. Likewise, different team performance-related bonuses should be avoided among identical workers. This way, coordination will be higher and so will efficiency.

Finally, let us make a brief comment about productivity in relation to group size. The MPCR, which appeared in the model as α, is in fact the result of a multiplier representing each individual’s valuation of the public good divided by the number of individuals among whom the public good is shared. One might expect that increasing group size reduces cooperation, given that it requires a higher degree of coordination. As shown in [8], if the larger group size entails a decline of the MPCR, the effect on cooperation will indeed be negative. Nonetheless, for the same MPCR, large groups contribute more, on average, to public goods than smaller ones [9, 10], despite the potential coordination issues. This positive effect is called the group size effect and contradicts the common belief that small groups are superior.
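The distinction between the multiplier and the MPCR is easy to miss, so a small sketch may help (the multiplier value 1.6 and MPCR value 0.4 are illustrative assumptions of ours):

```python
def mpcr(multiplier, n):
    """Marginal per capita return: each contributed unit is scaled up by
    `multiplier` and the proceeds are shared equally among n members."""
    return multiplier / n

# If group size grows while the multiplier stays fixed, the MPCR is
# diluted, which harms cooperation as reported in [8]:
assert mpcr(1.6, 2) > mpcr(1.6, 8)

# Holding the MPCR itself fixed instead (alpha = 0.4), individual
# incentives are identical across group sizes, while the total return
# generated by one contributed unit, n * alpha, grows with n:
for n in (2, 4, 8):
    print(n, n * 0.4)
```

Under a fixed MPCR, larger groups create more surplus per contributed unit, which is consistent with the group size effect of [9, 10].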

An example of this could be a logistics manager coordinating different working divisions of a supply chain. Most of the time, firms commit to delivering the final product before a particular date. In this case, where coordination is fundamental to meeting such a deadline, the logistics manager should not be afraid of working with large groups in each of the chain links, as long as their productivity is similar. They will coordinate to complete their part of the process so that the customer receives his product at the right time.

2.3. Information

The classic PGG and its solution concept assume that information is perfect and symmetric. In other words, everybody has costless access to the details about the game’s evolution and outcomes and this information is the same for everybody. In our daily interactions, however, we sometimes fall short of information.

One aspect on which the literature has focused during the last decades is the feedback subjects receive after playing the PGG and how they receive such information. In this line, knowing what peers have contributed increases contributions in future rounds, an effect that becomes detrimental if subjects are informed about earnings instead of contribution levels [11, 12, 13]. Other factors that have turned out to increase contributions are providing feedback about virtuous behavior in the group, that is, the highest contributions [14], or revealing identities [15].

Behavioral economics has also found that the way in which information is disclosed has a significant impact on decision making. This result is called the framing effect and is widely exploited, especially for marketing and public policy purposes. In this regard, [16] carries out a meta-analysis of framing effects in PGGs. This study reports that if the PGG is played in phases of several rounds, after each of which subjects receive a results summary, a restart effect is triggered, increasing contributions at the beginning of the next phase. Moreover, subjects contribute more when the public good payoffs are presented in terms of gifts instead of private and public investments. Finally, comprehension tasks significantly enhance cooperation.

Information is, therefore, a potential tool against coordination issues. In working or social situations where coordination is required, transparency is always going to improve cooperation. These findings point out how information develops trust and how trust increases cooperation toward a common objective. In this line, many firms choose to make their workers more conscious of the whole process they are involved in. Likewise, many researchers provide their survey participants with a copy of the final research outcome they have contributed to.

2.4. Social preferences

The last underlying assumption of the standard PGG concerns the preferences individuals hold. According to the model, we are purely selfish individuals, concerned only about the material payoffs of our actions. This also entails that we are always perfectly capable of measuring and balancing the monetary costs and benefits associated with each possible action. Some game theorists have criticized this view and have introduced concepts inherent to behavioral economics, in particular, ideas that have to do with social preferences.

The most well-known social preference model is the inequality aversion model [17]. This model states that only a proportion of the population has selfish preferences, while the remaining individuals dislike inequitable outcomes. If their material payoff is lower than their peers’, they suffer a disutility proportional to the distance between the payoffs (disadvantageous inequality). Additionally, they also experience disutility if their payoff is higher than their peers’ (advantageous inequality). Nevertheless, the first disutility is stronger than the second: you do not like being worse off, you do not like being better off, but if you had to choose, you would prefer to be better off. Individuals with inequality aversion will contribute more than the NE for selfish individuals predicts if this reduces the inequality between them. This model was proposed as an explanation for the cooperative behavior constantly observed in laboratory experiments and has proven fairly explanatory for most of them.
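A minimal sketch of this utility function follows; the inequality weights (envy_w for disadvantageous inequality, guilt_w for advantageous inequality, with guilt_w < envy_w as the model requires) are illustrative values of our own choosing:

```python
def fehr_schmidt(own, others, envy_w=0.8, guilt_w=0.4):
    """Inequality-averse utility [17]: material payoff minus a penalty for
    earning less than peers (weighted by envy_w) and a weaker penalty for
    earning more than peers (weighted by guilt_w <= envy_w)."""
    m = len(others)
    envy = sum(max(o - own, 0) for o in others) / m
    guilt = sum(max(own - o, 0) for o in others) / m
    return own - envy_w * envy - guilt_w * guilt

# With the Table 1 payoffs, a free rider earning 3 next to a contributor
# earning 0 no longer enjoys the full material payoff...
assert fehr_schmidt(3, [0]) < 3
# ...and if guilt is strong enough (guilt_w > 1/2 here), mutual
# contribution (2, 2) is preferred to unilateral free riding (3, 0):
assert fehr_schmidt(2, [2], guilt_w=0.6) > fehr_schmidt(3, [0], guilt_w=0.6)
```

For sufficiently inequality-averse players, contributing can thus be an equilibrium of the very same material game.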

An alternative social preference used to explain human behavior is reciprocity. Reciprocal agents are friendly to friendly peers and hostile to hostile peers [18]. Notice that reciprocity can therefore be both positive and negative: it is tit for tat, an eye for an eye. Many supermarkets expect reciprocity when offering free samples or discounts on their products. Moreover, in our social relationships, we are usually willing to return favors to those who have been kind to us at some point in time.

Finally, the most extreme case of social preferences is altruism. Altruism or selflessness is the complete opposite of selfishness: instead of maximizing your own material payoff, you maximize the welfare of others. This kind of preference is harder to observe in a working environment but is commonly used to describe paternal love.

Social preferences are important because they exist in social relationships and actually explain why we behave as we do. In predesigned settings, like working environments, social concerns are fundamental in team work. If the manager can design working teams, he should have a deep knowledge of how each person behaves when working with others. An individual with strong disadvantageous inequality concerns could feel highly frustrated if working with purely selfish colleagues.

3. Mechanisms to address the coordination issue

At this point, the reader should understand the relevance and problematics of coordination, as well as the effects of variations in the basic assumptions of the model. These variations, however, are usually endogenous features of the game rather than aspects we can influence. In this section, we present exogenous mechanisms that change the game’s rules in pursuit of an increase in coordination and, consequently, in efficiency.

3.1. Reputation

Up until now, predictions have been made for games where interactions are unique, rather than prolonged over time. If we think about a working team, a social event with our family and friends or any kind of community we belong to, it is reasonable to assume that we will meet those people in the future and we may face similar situations where coordination becomes crucial again.

In this respect, game theory makes a clear distinction between games that are played only once (one-shot games) and games that are played over several periods (repeated games). Repeated games, in turn, can be divided into finitely repeated games and infinitely repeated games.

If a relationship is maintained over a predetermined period of time (finitely repeated games), the theoretical prediction for the PGG is the same as that of the one-shot game. Notice that if individuals cooperate in this context, it is because they expect to maintain this friendly relationship in the future by building a reputation. However, if there is a last period, a last day, a last meeting and a last task, there are no incentives to be friendly for tomorrow. Would you strive on your last day of work? This phenomenon is named the end-of-the-world effect, where cooperation drastically falls in the last period. Now, if there is going to be full free riding on the last day, your incentives to cooperate on the second-to-last day disappear, then those on the third-to-last day, and so on by backward induction all the way back to the first period.

The workaround is for there to be no last day or, alternatively (given vital restrictions), for individuals not to know when the game is going to end. In game theory terminology, the game has an infinite horizon or, after every period, there is a positive probability of the game continuing. In this case, positive levels of cooperation can effectively be achieved.
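This logic can be made concrete with a grim-trigger sketch built on the Table 1 payoffs: mutual contribution pays 2 per period, a one-shot deviation pays 3, and mutual free riding pays 1 thereafter. The function name and the threshold derivation are ours:

```python
def cooperation_sustainable(delta, c=2, t=3, d=1):
    """Grim-trigger check with the Table 1 payoffs: mutual contribution
    pays c per period, a one-shot deviation pays t once, and mutual free
    riding pays d in every period afterwards. delta is the probability
    that the relationship continues after each period."""
    v_cooperate = c / (1 - delta)                 # contribute forever
    v_deviate = t + delta * d / (1 - delta)       # deviate, then punished
    return v_cooperate >= v_deviate

# With these payoffs, cooperation is an equilibrium iff delta >= 1/2.
assert not cooperation_sustainable(0.3)
assert cooperation_sustainable(0.6)
```

Patient enough players (here, δ ≥ 1/2) give up the one-shot gain of 3 to keep earning 2 instead of 1, so an uncertain horizon can sustain cooperation where a known final period cannot.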

In experiments carried out in the laboratory, repeated interaction is common and combines with the social concerns inherent in each subject. This combination leads to positive levels of cooperation that decay as time goes by (see Figure 1). If the number of periods subjects are going to interact is known, an end-of-the-world effect is always noticeable. This implies that reputation is necessary but not sufficient as a mechanism to sustain cooperation over time.

Figure 1.

Standard result of public goods game experiments.
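The decaying pattern of Figure 1 can be reproduced with a stylized conditional-cooperation dynamic. The sketch below is entirely hypothetical (it is not fitted to any data): subjects match the previous round’s group average, shaded slightly downward by the presence of free riders:

```python
import statistics

def simulate(rounds=10, n=4, w=20.0, noise=0.9):
    """Stylized conditional-cooperation dynamic: each subject starts by
    contributing half the endowment, then matches the previous round's
    group average, shaded slightly downward (noise < 1) by free riders.
    Returns the per-round average contribution."""
    contributions = [w / 2] * n
    averages = []
    for _ in range(rounds):
        averages.append(statistics.mean(contributions))
        target = noise * averages[-1]
        contributions = [target] * n
    return averages

path = simulate()
assert path[0] == 10.0 and path[-1] < path[0]  # contributions decay
```

Starting at half the endowment and drifting downward each round mirrors the qualitative shape reported in the experimental literature.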

3.2. Step-level PGG

Another basic rule that can be changed to enhance cooperation is setting a particular threshold that must be reached before the public good is shared [19]. In other words, unless a minimum level of aggregate contribution is achieved, there are no public good benefits. For instance, recall the salesmen example of Section 2, where the team seeks a salary increase. The unit manager could set a minimum sales target that must be met before any salary increase happens. If that threshold is sufficiently high, so that the maximum effort of every member is necessary to achieve it, free riding is no longer attractive.

Obviously, a guaranteed success would be to set the threshold at the maximum aggregate contribution, so that there is no advantage to free riding. If we talk about a monetary investment, this is feasible, as money is quantifiable. However, cases requiring human effort are more difficult to measure and compute.
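The following sketch adds such a threshold to the payoff of Eq. (1), reusing the illustrative parameters ω = 20, α = 0.4 and n = 4 (assumptions of ours); for simplicity, contributions are not refunded when provision fails:

```python
def step_level_payoff(i, g, threshold, w=20.0, alpha=0.4):
    """Step-level PGG: the public good pays out only if aggregate
    contributions reach the threshold; otherwise contributions are lost
    (no refund, for simplicity of this sketch)."""
    provided = sum(g) >= threshold
    return w - g[i] + (alpha * sum(g) if provided else 0.0)

n, w = 4, 20.0
full = [w] * n

# With the threshold at the maximum aggregate contribution, a unilateral
# deviation destroys the public good, so free riding no longer pays:
assert step_level_payoff(0, full, threshold=n * w) > \
       step_level_payoff(0, [0.0] + [w] * (n - 1), threshold=n * w)
```

At the maximum threshold, every member’s effort is pivotal, which is exactly why full contribution becomes an equilibrium.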

Targets are natural in firm environments, where employees must meet a monthly, weekly or even daily goal. However, occasionally these targets are set too low, so that part of the staff can achieve them on their own without high levels of coordination. Since it is a take-it-or-leave-it scheme, this may incentivize the over-effort of some employees. Defining the correct targets is therefore essential to concealing the free rider problem.

3.3. Communication

The standard PGG is based on the premise of individuals deciding independently and simultaneously on their cooperation toward the public good. However, it is ordinary to see, especially in working environments, how coworkers communicate among themselves. This communication opportunity has repeatedly been shown to increase the level of cooperation in PGGs. Multiple communication mechanisms have been tested in the laboratory, such as non-binding face-to-face communication, audiovisual conferences, audio communication or e-mail communication, among others [20, 21, 22], with face-to-face communication being the most efficient mechanism. Nevertheless, this is not due to the loss of anonymity: verbal communication through an anonymous chat room has been demonstrated to be almost as efficient [23].

This result pinpoints the importance of fostering a friendly environment where communication flows as part of a firm’s corporate culture. In this line, many companies implement regular informal meetings or outdoor activities as part of their staff’s routine to build employee engagement. Furthermore, in order for communication to be as binding as possible, the barriers between the interlocutors should be minimized.

3.4. Sanctioning

The most common mechanism to conceal the free rider issue, and the one with the most extensive theoretical and experimental literature, is sanctioning. Sanctioning can be understood in many different ways: formal economic punishment in the form of a penalty, social punishment related to hostility, or even the breaking of bonds in the working or personal domain. It is used with the purpose of smoothing the contribution nosedive in repeated PGGs and, in some cases, reversing it.

According to purely selfish preferences, if punishment is costly, nobody should engage in such an action. However, as the social agents that we are, we do implement punishment, even in one-shot situations. The next question we pose concerns how punishment is actually implemented. Should coworkers have the power to sanction each other? Should there be a responsible person in charge of doing so? Can coworkers then return the hostile behavior somehow? Should there be a certain level of agreement in a punishment decision?

The following section in this chapter tackles this issue by presenting different types of sanctioning schemes.

Advertisement

4. Sanctioning

4.1. Peer punishment

Peer punishment is the most standard mechanism to address the coordination dilemma. It consists of the opportunity for each individual to penalize, at a cost and at the end of the game, those participants who have been free riders. This would be comparable to endowing coworkers with the possibility of punishing each other at the end of the day. Notice that this type of punishment is a public good in itself: everybody is better off if free riders are sanctioned, but everybody prefers the rest to bear the cost of doing so.

Consider, as an example of a social setup, a group of friends meeting for dinner, where each one of them is expected to bring a dish and a beverage so that there is a variety of food and drink. If somebody free rides on his part of the contribution but still benefits from what the others have prepared, he may not be invited to future dinner parties, as a form of social punishment. In a working scenario, coworkers could ostracize employees who free ride on effort after an important task carried out by the team. At an economic level, the free riding employee could be penalized by the firm in terms of salary or even be fired.

Experimental works have extensively proven that a combination of peer punishment, social preferences and long-term interaction leads to higher contributions. The key result in this field is that peer punishment can indeed raise contributions to levels above those attainable in the absence of such punishment [18]. Furthermore, these improvements in terms of efficiency are also valued by individuals, who, if allowed to choose between a sanctioning environment and a sanction-free environment, establish themselves in the former after a learning process [24]. Regarding the long-run effects, contributions reach significantly higher levels as the number of periods subjects interact increases [25].

Many authors suggest that the ability of costly punishment to sustain high contributions to the public good depends crucially on the effectiveness of that punishment, that is, the factor by which each punishment point reduces the recipient’s payoff [26]. According to the seminal work in this area [18], the cost of peer punishment should increase steeply with its impact: for low levels of punishment, the relation is 1:1, but as the impact of punishment increases, it becomes 3:1, that is, the cost the punisher bears is three times the impact the punished undertakes. Other studies, however, argue that for punishment to make a difference, it must inflict a penalty substantially higher than the cost of meting it out. In particular, they assert that the only punishment treatment that succeeds in sustaining cooperation over time is the low-cost, high-impact treatment [27]. Following [28], the cost-effectiveness ratio should be no less than 1:3. That is, in fact, the inverse of the seminal punishment technology in [18]: for every unit of utility deducted from the punisher’s payoff, the punished individual should have his utility reduced by three units.
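A back-of-the-envelope sketch of why the ratio matters, using the Table 1 payoffs (the one-shot gain from free riding is 3 − 2 = 1; the function name and point values are our own illustration):

```python
def deters(gain_from_free_riding, punish_points, cost_per_point,
           impact_per_point):
    """Punishment deters free riding when the inflicted damage exceeds
    the material gain from free riding. Returns (deterred, cost borne
    by the punisher)."""
    damage = punish_points * impact_per_point
    return damage > gain_from_free_riding, punish_points * cost_per_point

# A 1:3 cost-effectiveness ratio (as in [27, 28]): one point costs the
# punisher 1 and removes 3 from the free rider's payoff.
deterred, cost = deters(1, punish_points=1, cost_per_point=1,
                        impact_per_point=3)
assert deterred and cost == 1

# The inverted 3:1 ratio still deters, but at six times the cost to the
# punisher, which makes punishing far less attractive.
deterred, cost = deters(1, punish_points=2, cost_per_point=3,
                        impact_per_point=1)
assert deterred and cost == 6
```

Since punishment is itself a public good, the cheaper deterrence is for the punisher, the more likely it is to be supplied.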

What we take from this is that peer punishment has the power to mitigate the coordination issue as long as the cost-effectiveness ratio is suitable and relations are maintained over time. Regarding the dinner party, the free rider will have stronger incentives to bring a dish if more dinner parties are scheduled for the coming months and he perceives the risk of not being invited anymore.

4.2. Counter-punishment

Counter-punishment, also known as perverse punishment, is a second-round punishment phase in which sanctioned free riders can penalize their punishers back. If one allows punished free riders the possibility of counter-punishment, cooperators will be less willing to punish in the first instance [29]. While under peer punishment contributors use punishment to signal that low contributions will not be accepted in the future, counter-punishment is used to strategically signal that future sanctions will not be tolerated. This way, peer punishment is reduced and contributions show a decaying pattern. However, counter-punishment also has a bright side if used well. On the one hand, it can be used to sanction those who fail to sanction free riders, in other words, those who have free ridden on punishment. On the other hand, it can also be used to penalize those who have exerted coercive punishment by sanctioning high contributors [30]. However, fairness concerns are necessary for this type of punishment to be used in this way.

In a company setting, counter-punishment could occur if a group of coworkers believed that somebody was being too hostile toward the novice who did not rise to the challenge at the first attempt. They could then decide to exclude that punitive coworker when organizing the next outdoor activity.

4.3. Coordinated punishment

Individually effective punishment is sometimes not very realistic. In real life, it is common that a certain number of individuals are needed to effectively sanction opportunistic behavior. Everyday examples of this condition are worker strikes, coups d’état or any kind of boycott. In this sense, coordinated punishment is implemented in the following way: at the end of the game, players individually decide whether or not to punish opportunists, anticipating that if the number of punishers reaches a threshold, the damage inflicted can be very large while the individual cost of coordinated punishment remains relatively low.

Following this approach, coordinated punishment can be effective if the threshold to be reached is sufficiently high. According to [31], coordinated punishment performs remarkably better than peer punishment when the requirement to punish a person is the emergence of a coalition of at least 40% of the group members. The authors associate the effectiveness of coordinated punishment with its ability to censor coercive punishment of the higher contributors, which, as a matter of fact, was relatively frequent in their experiment.
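The threshold rule can be sketched as below. The payoffs, the fixed per-punisher cost, and the size of the collective sanction are hypothetical; only the 40% coalition requirement comes from the text.

```python
def coordinated_punishment(payoffs, votes, target, threshold=0.4,
                           cost=1, impact=6):
    """Each player votes whether to punish `target`. The sanction
    triggers only if the share of punishers among group members
    reaches `threshold`; punishers then each pay a small fixed
    cost while the target suffers a large payoff loss."""
    out = list(payoffs)
    punishers = [i for i, v in enumerate(votes) if v and i != target]
    if len(punishers) / len(payoffs) >= threshold:
        for i in punishers:
            out[i] -= cost
        out[target] -= impact
    return out

payoffs = [30, 30, 50, 30, 30]               # player 2 free rode
triggered = coordinated_punishment(
    payoffs, votes=[True, True, False, False, False], target=2)
# 2 of 5 players (40%) vote to punish, so the sanction fires:
# triggered -> [29, 29, 44, 30, 30]
failed = coordinated_punishment(
    payoffs, votes=[True, False, False, False, False], target=2)
# below the threshold, nobody pays and nobody is hit:
# failed -> [30, 30, 50, 30, 30]
```

Because the cost is only paid when the coalition forms, punishing is cheap conditional on success, which is the feature that makes the mechanism attractive relative to unilateral peer punishment.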

Coordinated punishment has also been shown to be effective in other kinds of social dilemmas, such as team trust games, also called team investment games. The common setup of these games is as follows. Subjects are sorted into groups of three, of which two are assigned the role of investors and one the role of allocator. In the first stage of the game, the two investors decide whether or not to invest in a common project, which is successful only if both of them invest. The project generates a surplus, from which, in the second stage, the allocator decides how much to return to each investor and how much to keep for himself. A punishment stage can then be added to this standard team trust game. If this punishment scheme follows the logic of coordinated punishment, so that both investors must coordinate to sanction the allocator for the punishment to be effective, cooperation can be maintained [32].
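A stylized version of this game can be sketched as follows. The stakes, the surplus multiplier, the punishment parameters, and the assumption that stakes are simply returned when the project fails are all hypothetical choices, not details taken from [32].

```python
def team_trust_game(invest, returns, stake=10, multiplier=3):
    """invest: (bool, bool) decisions of the two investors.
    returns: (r1, r2) amounts the allocator gives back if the
    project succeeds; the allocator keeps the rest of the surplus.
    Assumption for illustration: a failed project simply returns
    each investor's stake and leaves the allocator with nothing."""
    if not all(invest):
        return [stake, stake, 0]
    surplus = multiplier * 2 * stake         # 60 with the defaults
    r1, r2 = returns
    return [r1, r2, surplus - r1 - r2]

def punish_allocator(payoffs, agree1, agree2, cost=2, impact=12):
    """Coordinated punishment stage: the sanction bites only if
    BOTH investors agree to punish the allocator."""
    out = list(payoffs)
    if agree1 and agree2:
        out[0] -= cost
        out[1] -= cost
        out[2] -= impact
    return out

fair = team_trust_game((True, True), (20, 20))      # [20, 20, 20]
selfish = team_trust_game((True, True), (5, 5))     # [5, 5, 50]
sanctioned = punish_allocator(selfish, True, True)  # [3, 3, 38]
```

The selfish allocator remains better off here even after the sanction, which shows why the sanction's impact (and the credibility of investor coordination) must be large for cooperation to be maintained.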

All around us there are situations where coordinated punishment occurs, situations where a certain level of agreement must be attained for punishment to be effective. Think of any type of social community in which one of the members has misbehaved and the community is considering expelling them. Communities frequently resort to some kind of voting procedure for such decisions.

4.4. Pool punishment

There are numerous situations where punishment is not decided individually once outcomes are observed but must be agreed upon before the game even starts. Individuals thus commit to punishment actions, leaving no room for renegotiating the conditions or backing down. This reflects how investments are made in monitoring and sanctioning institutions that uphold the common interest.

In this line, different studies have explored whether individuals indeed implement institutions of this type when offered the possibility. The credible threat that such an institution will sanction opportunistic behavior at the end of the game enhances individual cooperation, which in turn has positive effects on group cooperation [33, 34]. This effect is even more pronounced when counter-punishment is an option [35].
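The key feature of pool punishment is that fees are committed before contributions are observed, so the sanction is automatic rather than renegotiable. A minimal sketch, with hypothetical fee and fine values:

```python
def pool_punishment_round(contributions, pool_fees, endowment=20,
                          mpcr=0.4, fee=2, fine=25):
    """Before contributing, each player decides whether to pay a
    fixed fee into a sanctioning pool (pool_fees[i] is True/False).
    After contributions are made, the pool automatically fines
    every free rider (zero contribution) as long as the institution
    exists, i.e., at least one fee was paid; there is no ex-post
    decision and no backing down."""
    pot = sum(contributions)
    payoffs = [endowment - c + mpcr * pot for c in contributions]
    if any(pool_fees):                       # the institution exists
        for i, paid in enumerate(pool_fees):
            if paid:
                payoffs[i] -= fee            # committed cost, sunk
        for i, c in enumerate(contributions):
            if c == 0:
                payoffs[i] -= fine           # automatic sanction
    return payoffs

with_pool = pool_punishment_round([20, 20, 0], [True, True, False])
# with_pool -> [14.0, 14.0, 11.0]: the free rider now earns least
no_pool = pool_punishment_round([20, 20, 0], [False, False, False])
# no_pool -> [16.0, 16.0, 36.0]: free riding pays without the pool
```

Note that fee payers are worse off than under no institution when everyone cooperates anyway, which is exactly the second-order free rider problem that makes the emergence of such institutions interesting.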

In recent years, the comparison between peer and pool punishment has attracted attention. To overcome the difficulties and inefficiencies of individual punishment (such as the coercive punishment of high contributors we saw before), groups have continuously developed forms of self-regulation in which sanctioning is delegated to a central authority [36, 37]. Examples are specialized law enforcement bodies such as the police, courts, and state and non-state institutions. Indeed, pool punishment appears to be the punishment option individuals prefer [35, 38].

At the firm level, the organizational hierarchy limits sanctioning power. Beyond all the examples we have given of social punishment between coworkers, actual penalizing decisions come from higher bodies: a coworker can never fire you, a CEO can. In this sense, the application of sanctions is usually governed by a series of protocols and internal regulations specifying the consequences of unruly behavior to the detriment of the firm. At the societal level, the same applies; you cannot economically sanction your neighbor for defaulting on taxes or parking illegally. The most you can do is report it to the relevant authorities so that they make use of their power.

The objective of all these pre-designed rules that surround us is precisely to increase cooperation and avoid free riding. If the threat of being caught and punished were certain and severe enough, prisons would be empty.


5. Conclusions

When we interact with other people, we constantly face coordination dilemmas: in our neighborhoods, with our families, with our friends or with the people we work with. We should all give the best of ourselves so that everything works properly, but there is always somebody who decides to free ride on others’ money or effort. Think about a supply chain selling a product you want to buy through different channels. You can go to a local retailer to see the product, obtain information about it or even test it; yet when you get home, you buy it online. The online supplier is indirectly benefiting from the service of the local retailer. We are all free riders at some point.

However, this opportunistic response is not associated with any kind of particular mischief; it is just a selfish reaction to a cooperative situation with a non-excludable outcome. Who is willing to organize the next family trip? Fortunately, the world population is not composed solely of selfish individuals who look the other way; most of us have social concerns for inequality, reciprocity or even altruism. This conditions how we behave in all of the described situations and brings to the surface at least someone willing to cooperate by taking the lead in planning the next trip.

Nonetheless, this is not enough. If we want to conceal the free rider problem and further enhance cooperation, we should try to form groups of people who share the same capabilities, interests, motivations and ethics. Relationships should be maintained for as long as possible so that there is a better tomorrow for which everybody wants to fight today. Returning to the multichannel supply chain example presented before, if you trust your local retailer, you may prefer to buy the product from him rather than from an unknown online supplier. Additionally, a sanctioning mechanism would also be helpful.

Punishment opportunities are present in most of the interactions we have discussed. Punishment need not be an economic action; it can simply be a social response of hostility, ostracism or bond breaking. Generally speaking, if some sort of sanctioning is within reach of the group members, cooperation increases significantly. In more detail, punishment can either be a decentralized individual decision or a power endowed to a centralized authority, like a government. For interactions involving numerous agents, establishing hierarchies with different levels of responsibility is a more feasible way of ensuring collective cooperation. But we need not go to massive populations to find such hierarchies: any task that requires teamwork at a small-scale enterprise will already need a “good” manager.

References

  1. Chan KS, Mestelman S, Moir R, Muller RA. The voluntary provision of public goods under varying income distributions. Canadian Journal of Economics. 1996:54-69. DOI: 10.2307/136151
  2. Buckley E, Croson R. Income and wealth heterogeneity in the voluntary provision of linear public goods. Journal of Public Economics. 2006;90(4-5):935-955. DOI: 10.1016/j.jpubeco.2005.06.002
  3. Heap SPH, Ramalingam A, Stoddard BV. Endowment inequality in public goods games: A re-examination. Economics Letters. 2016;146:4-7. DOI: 10.1016/j.econlet.2016.07.015
  4. Chan KS, Mestelman S, Moir R, Muller RA. Heterogeneity and the voluntary provision of public goods. Experimental Economics. 1999;2(1):5-30. DOI: 10.1023/A:1009984414401
  5. Cherry TL, Kroll S, Shogren JF. The impact of endowment heterogeneity and origin on public good contributions: Evidence from the lab. Journal of Economic Behavior & Organization. 2006;57(3):357-365. DOI: 10.1016/j.jebo.2003.11.010
  6. Fellner-Röhling G, Iida Y, Kröger S, Seki E. Heterogeneous Productivity in Voluntary Public Good Provision: An Experimental Analysis. IZA Discussion Paper No. 5556. 2014
  7. Kölle F. Heterogeneity and cooperation: The role of capability and valuation on public goods provision. Journal of Economic Behavior & Organization. 2015;109:120-134. DOI: 10.1016/j.jebo.2014.11.009
  8. Isaac RM, Walker JM. Group size effects in public goods provision: The voluntary contributions mechanism. The Quarterly Journal of Economics. 1988;103(1):179-199. DOI: 10.2307/1882648
  9. Barcelo H, Capraro V. Group size effect on cooperation in one-shot social dilemmas. Scientific Reports. 2015;5:7937. DOI: 10.1038/srep07937
  10. Isaac RM, Walker JM, Williams AW. Group size and the voluntary provision of public goods: Experimental evidence utilizing large groups. Journal of Public Economics. 1994;54(1):1-36
  11. Sell J, Wilson RK. Levels of information and contributions to public goods. Social Forces. 1991;70(1):107-124. DOI: 10.1093/sf/70.1.107
  12. Bigoni M, Suetens S. Feedback and dynamics in public good experiments. Journal of Economic Behavior & Organization. 2012;82(1):86-95. DOI: 10.1016/j.jebo.2011.12.013
  13. Nikiforakis N. Feedback, punishment and cooperation in public good experiments. Games and Economic Behavior. 2010;68(2):689-702. DOI: 10.1016/j.geb.2009.09.004
  14. Faillo M, Grieco D, Zarri L. Legitimate punishment, feedback, and the enforcement of cooperation. Games and Economic Behavior. 2013;77(1):271-283. DOI: 10.1016/j.geb.2012.10.011
  15. Rege M, Telle K. The impact of social approval and framing on cooperation in public good situations. Journal of Public Economics. 2004;88(7):1625-1644. DOI: 10.1016/S0047-2727(03)00021-5
  16. Cookson R. Framing effects in public goods experiments. Experimental Economics. 2000;3(1):55-79. DOI: 10.1007/BF01669207
  17. Fehr E, Schmidt KM. A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics. 1999;114(3):817-868. DOI: 10.1162/003355399556151
  18. Fehr E, Gächter S. Fairness and retaliation: The economics of reciprocity. Journal of Economic Perspectives. 2000;14(3):159-181. DOI: 10.1257/jep.14.3.159
  19. Rapoport A. Provision of step-level public goods: Effects of inequality in resources. Journal of Personality and Social Psychology. 1988;54(3):432. DOI: 10.1037/0022-3514.54.3.432
  20. Isaac RM, Walker JM. Communication and free-riding behavior: The voluntary contribution mechanism. Economic Inquiry. 1988;26(4):585-608. DOI: 10.1111/j.1465-7295.1988.tb01519.x
  21. Brosig J, Weimann J, Ockenfels A. The effect of communication media on cooperation. German Economic Review. 2003;4(2):217-241. DOI: 10.1111/1468-0475.00080
  22. Frohlich N, Oppenheimer J. Some consequences of e-mail vs. face-to-face communication in experiment. Journal of Economic Behavior & Organization. 1998;35(3):389-403. DOI: 10.1016/S0167-2681(98)00044-4
  23. Bochet O, Page T, Putterman L. Communication and punishment in voluntary contribution experiments. Journal of Economic Behavior & Organization. 2006;60(1):11-26. DOI: 10.1016/j.jebo.2003.06.006
  24. Gürerk Ö, Irlenbusch B, Rockenbach B. The competitive advantage of sanctioning institutions. Science. 2006;312(5770):108-111. DOI: 10.1126/science.1123633
  25. Gächter S, Renner E, Sefton M. The long-run benefits of punishment. Science. 2008;322(5907):1510-1510. DOI: 10.1126/science.1164744
  26. Nikiforakis N, Normann HT. A comparative statics analysis of punishment in public-good experiments. Experimental Economics. 2008;11(4):358-369. DOI: 10.1007/s10683-007-9171-3
  27. Egas M, Riedl A. The economics of altruistic punishment and the maintenance of cooperation. Proceedings of the Royal Society of London B: Biological Sciences. 2008;275(1637):871-878. DOI: 10.1098/rspb.2007.1558
  28. Casari M. On the design of peer punishment experiments. Experimental Economics. 2005;8(2):107-115. DOI: 10.1007/s10683-005-0869-9
  29. Nikiforakis N. Punishment and counter-punishment in public good games: Can we really govern ourselves? Journal of Public Economics. 2008;92(1-2):91-112. DOI: 10.1016/j.jpubeco.2007.04.008
  30. Denant-Boemont L, Masclet D, Noussair CN. Punishment, counterpunishment and sanction enforcement in a social dilemma experiment. Economic Theory. 2007;33(1):145-167. DOI: 10.1007/s00199-007-0212-0
  31. Casari M, Luini L. Cooperation under alternative punishment institutions: An experiment. Journal of Economic Behavior & Organization. 2009;71(2):273-282. DOI: 10.1016/j.jebo.2009.03.022
  32. Olcina G, Calabuig V. Coordinated punishment and the evolution of cooperation. Journal of Public Economic Theory. 2015;17(2):147-173. DOI: 10.1111/jpet.12090
  33. Kosfeld M, Okada A, Riedl A. Institution formation in public goods games. American Economic Review. 2009;99(4):1335-1355. DOI: 10.1257/aer.99.4.1335
  34. Ozono H, Jin N, Watabe M, Shimizu K. Solving the second-order free rider problem in a public goods game: An experiment using a leader support system. Scientific Reports. 2016;6:38349. DOI: 10.1038/srep38349
  35. Traulsen A, Röhl T, Milinski M. An economic experiment reveals that humans prefer pool punishment to maintain the commons. Proceedings of the Royal Society B. 2012. DOI: 10.1098/rspb.2012.0937
  36. Baldassarri D, Grossman G. Centralized sanctioning and legitimate authority promote cooperation in humans. Proceedings of the National Academy of Sciences. 2011;108(27):11023-11027. DOI: 10.1073/pnas.1105456108
  37. Fehr E, Williams T. Creating an Efficient Culture of Cooperation. Mimeo; 2017
  38. Sigmund K, De Silva H, Traulsen A, Hauert C. Social learning promotes institutions for governing the commons. Nature. 2010;466(7308):861. DOI: 10.1038/nature09203
