Thursday 16 October 2008

Think Different

Here's to the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes, the ones who see things differently. They're not fond of rules, and they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can't do is ignore them. Because they change things, they push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world are the ones who do.

The Best Advice I Ever Got: Michelle Peluso, President and Chief Executive Officer, Travelocity

A few months before I was born, my father founded an environmental-engineering firm. I literally grew up watching him build it. Even as a little kid, I was struck by Dad’s obsessive interest in and care for the people who worked for him.

Nights when our family’s dinner-table conversation didn’t include discussion of his employees were rare. “Sally’s gotten accepted into an MBA program,” he’d say excitedly, “and we’re going to figure out how she can do that part-time.” Or “John’s wife just had a baby girl! We’re going over this weekend to see her.” Before company picnics we were thoroughly briefed: Bill had just won a new client; Mary was about to make an important presentation. His concern was authentic and unwavering, and it extended to all aspects of his employees’ lives. When two of his top employees were killed in a plane crash coming back from a business trip, Dad spent time with their grieving families.

Now, my father’s attitude and behavior were just part of his personality, not some maneuver to produce results—but they produced them all the same. He grew that start-up into a thriving 300-person business and then sold it to a larger company but continued to run it successfully. Two years ago, when he left to begin a new venture, more than half of his former employees sent him their résumés. So although my father never gave me management advice directly, his example provided a profound lesson: If you treat your employees as unique individuals, they’ll be loyal to you and they’ll perform—and your business will perform, too.

The longer I’m in my own career, the more I attempt to put that lesson into practice. At a 5,000-person global organization, I simply can’t know everyone personally. But I can apply my dad’s techniques in a scaled-up way that lets me know as many people as possible, that encourages managers to do the same, and that makes our employees generally feel that this is a place where someone’s looking out for them. I often visit our different offices; I hold brown-bag lunches every week; I regularly e-mail the whole staff about what’s going well and what needs to improve; I hold quarterly talent management sessions with my direct reports; and I constantly walk the halls. When anyone at Travelocity e-mails me, I respond within 24 hours. I read every single word of our annual employee survey results and of my managers’ 360-degree performance feedback—and I rate those managers in large part on how well they know and lead their own people.

Focusing on individuals instead of “the team” isn’t easy. It takes a lot of time and genuine caring, and requires a long-term view—which can be tough when you’ve got a big business to run. Ultimately, however, it’s worth the effort, for your employees and for the organization. A few years ago one of our senior managers was leading a huge project that had high visibility with our investor community when she began having pregnancy complications. I fully supported her in taking several months off, for her own health and for her kids. It was a daunting time, but the team worked through it, and that manager is still working here—as our COO.

I describe my leadership style in all humility. I don’t have all the answers on how to lead people, and I learn from colleagues every day. But I share Dad’s entrepreneurial belief: People aren’t your “greatest asset”—they’re your only asset.

Via: Harvard Business

Protect Your Product’s Look and Feel from Imitators

Too many U.S. companies believe that being first to market with a design feature, whether it’s registered as a trademark or not, is the best way to ensure that your brand is associated with it. That’s a false assumption—and a dangerous one. Instead companies must ensure that consumers connect a product’s look and feel with the brand. That means conducting targeted research on design features and, in many cases, spending more money to hammer home the brand association in consumers’ minds.

A design feature is what’s known legally as trade dress—any nonfunctional characteristic of a product’s or package’s appearance and feel, ranging from the pink color of Corning’s insulation to the cowhide pattern on Gateway’s computer boxes. Many companies don’t know how much trade dress is worth and therefore can’t make informed decisions about how much to spend to protect it from being copied. And firms, especially small and midsize ones, often don’t register design features as trademarks because meeting official requirements can be costly and difficult. Often they believe that the chances of being copied are slim or that they can successfully sue an imitator because they originated a feature.

But the legal climate for brands changed in 2000, when the Supreme Court ruled that because Samara Brothers, a New Jersey–based wholesaler, could provide no evidence that consumers associated its children’s-clothing designs with only one brand source, Wal-Mart could sell items that looked a lot like Samara’s. A product’s design feature can now be protected from imitation only if it has “secondary meaning”—if buyers see it as a marker of the brand. That meaning must be acquired through marketing.

Fortunately, it’s not difficult for a company to get the data it needs to help build a bulwark against imitation. It can conduct a simple experiment to determine what percentage of consumers associate a feature with the brand and whether the feature is valuable enough to be worth the effort of spreading that association to more consumers.

Say, for example, a maker of western boots with decorative stitching hires a research firm to conduct an experiment with current or potential buyers in five widely distributed malls, out of sight of the company’s outlets. The researchers show half the shoppers the (unlabeled) decorated boots and the other half the same boots without the design feature. Then they ask the participants in both groups what they’d be willing to pay for a pair. The difference in the average price quoted by the two groups becomes the per-unit value of the stitching. Finally, they ask the participants to name the brand. Typically, few participants can do that—which may surprise the company.

Suppose the stitching added $40 to the perceived value of the boots, but the fraction of people who could identify the brand was only five to 10 percentage points higher in the with-stitching group than in the without-stitching group. The company would need to create a campaign to strengthen the association between the feature and the brand. That might mean increasing the marketing budget as much as threefold, at least for a while, but it might also discourage imitators and improve the likelihood of success in a legal contest. A difference of at least 20 percentage points that can be attributed solely to the design feature is usually considered a solid defense against imitation if the company sues a competitor.
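The arithmetic behind such an experiment is simple enough to sketch. The figures below are hypothetical, chosen to mirror the boot example above rather than taken from any real study:

```python
# Hypothetical mall-intercept results for the boot-stitching example.
# All numbers are illustrative, not from an actual survey.
with_stitching = {"avg_price": 165.0, "brand_correct": 34, "n": 200}
without_stitching = {"avg_price": 125.0, "brand_correct": 18, "n": 200}

# Per-unit value of the design feature: difference in average willingness to pay.
feature_value = with_stitching["avg_price"] - without_stitching["avg_price"]

# Brand-association lift: percentage-point difference in correct brand naming.
lift = (with_stitching["brand_correct"] / with_stitching["n"]
        - without_stitching["brand_correct"] / without_stitching["n"]) * 100

print(f"Feature value: ${feature_value:.2f} per pair")
print(f"Association lift: {lift:.0f} percentage points")

# The article treats a lift of roughly 20 points or more as a solid
# defense in an infringement suit; below that, a campaign is needed.
needs_campaign = lift < 20
```

With these made-up numbers the stitching is worth $40 per pair but lifts brand recognition by only 8 percentage points, so the sketch flags the need for a brand-association campaign.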

Research data of this type are becoming common in infringement lawsuits. But companies should be collecting data even if no imitators are on the horizon, because trade dress is an increasingly important asset. The rising value of brands is illustrated by the $305 million in damages awarded to Adidas in May 2008 for imitation of its athletic shoes—believed to be the largest amount ever awarded for trademark infringement. Valuable assets like trade dress can be managed rationally only if their value is fully understood. In effect, what you don’t know will hurt you.

Via: Harvard Business

Monday 13 October 2008

Say's law

In economics, Say’s Law or Say’s Law of Markets is a principle attributed to French businessman and economist Jean-Baptiste Say (1767-1832) stating that there can be no demand without supply. A central element of Say's Law is that recession does not occur because of failure in demand or lack of money. The more goods (for which there is demand) that are produced, the more those goods (supply) can constitute a demand for other goods. For this reason, prosperity should be increased by stimulating production, not consumption. In Say's view, creation of more money simply results in inflation; more money demanding the same quantity of goods does not represent an increase in real demand.

Say's formulation

James Mill restates Say's Law as "production of commodities creates, and is the one and universal cause which creates a market for the commodities produced". In Say's language, "products are paid for with products" (1803: p.153) or "a glut can take place only when there are too many means of production applied to one kind of product and not enough to another" (1803: p.178-9). Explaining his point at length, he wrote that:

It is worthwhile to remark that a product is no sooner created than it, from that instant, affords a market for other products to the full extent of its own value. When the producer has put the finishing hand to his product, he is most anxious to sell it immediately, lest its value should diminish in his hands. Nor is he less anxious to dispose of the money he may get for it; for the value of money is also perishable. But the only way of getting rid of money is in the purchase of some product or other. Thus the mere circumstance of creation of one product immediately opens a vent for other products. (J.B. Say, 1803: p.138-9) [1]

He also wrote:

It is not the abundance of money but the abundance of other products in general that facilitates sales... Money performs no more than the role of a conduit in this double exchange. When the exchanges have been completed, it will be found that one has paid for products with products.

Say argued against claims that business was suffering because people did not have enough money and that more money should be printed. In his view, the power to purchase could be increased only by more production. James Mill used Say's Law against those who sought to give the economy a boost via unproductive consumption: consumption destroys wealth, whereas production is the source of economic growth. Demand for a product determines its price, but not whether it will be consumed.

It is important to note that Say himself never used many of the later short definitions of Say's Law, and that Say's Law actually developed through the work of many of his contemporaries and successors. The work of James Mill, David Ricardo, John Stuart Mill, and others evolved into what is sometimes called the "law of markets", which was the framework of macroeconomics from the mid-1800s until the 1930s.

Recession and unemployment

Keynes (see more below) claimed that according to Say's Law, involuntary unemployment cannot exist due to inadequate aggregate demand. However, involuntary unemployment could be explained in a different way by the 19th century economists, and the neoclassical economists actually used Say's Law to understand and explain even long-term unemployment and recession.

Recession was explained as arising from production not matching demand in its composition. While in general no more is produced than there could be demand for, some particular products are produced in too great a quantity and consequently others in too small a quantity. This "disproportionality" relative to consumer preferences would leave some producers unable to sell their products at cost-covering prices, causing losses and the closing of several firms. Since demand is ultimately determined by supply, the reduction in supply in these isolated sectors of the economy reduces the demand for products in other sectors, causing a general reduction in output.

Such economic losses and unemployment were seen as an intrinsic property of the capitalistic system. Division of labour leads to a situation where one always has to anticipate what others will be willing to buy, and this will lead to miscalculations. However this theory alone does not explain the existence of cyclical phenomena in the economy because these miscalculations would happen with constant frequency. Some economists developed a theory of business cycles that tries to explain the business cycle as a cluster of errors of anticipation of demand caused by the credit expansion.

The kind of unemployment that results is what modern macroeconomics calls "structural unemployment". It differs from Keynesian "cyclical unemployment" that arises due to aggregate demand failure.

Role of money

It is not easy to say what exactly Say's Law says about the role of money apart from the claim that recession is not caused by lack of money. One can read the second long quotation by Say (see above) as stating simply that money is completely neutral, although Say did not concern himself about the question. The central notion that Say had concerning money can be seen in the first long quotation above. If one has money, it is irrational to hoard it.

To understand the role of this notion, restate Say's Law. To Say, as to other Classical economists, it is quite possible for a glut (excess supply, market surplus) of one product to co-exist with a shortage (excess demand) of others. But there is no "general glut" in Say's view, since the gluts and shortages cancel out for the economy as a whole. But what if the excess demand is for money, because people are hoarding it? This creates an excess supply of all products, a general glut. Say's answer is simple: there is no reason to engage in hoarding. To quote Say from above:

Nor is [an individual] less anxious to dispose of the money he may get ... But the only way of getting rid of money is in the purchase of some product or other.

The only reason to have money, in Say's view, is to buy products. It would not be a mistake, in his view, to treat the economy as if it were a barter economy.

An alternative view is that all money that is held is held in financial institutions (markets), so that any increase in money holdings increases the supply of loanable funds. Then, with full adjustment of interest rates, the increased supply of loanable funds leads to an increase in borrowing and spending. So any negative effect on demand that results from the holding of money is canceled out, and Say's Law still applies.

In Keynesian terms, followers of Say's Law would argue that on the aggregate level, there is only a transactions demand for money. That is, there is no precautionary, finance, or speculative demand for money. Money is held for spending, and increases in the money supply lead to increased spending.

Classical economists did see that loss of confidence in business or a collapse of credit will increase the demand for money, which would cut down the demand for goods. This view was expressed by both Robert Torrens and John Stuart Mill. It would cause demand and supply to move out of phase and lead to an economic downturn in the same way as miscalculation in production, as described by William H. Beveridge in 1909.

However, in Classical economics, there was no reason for such a collapse to persist. Persistent depressions, such as that of the 1930s, are impossible according to laissez-faire principles. The flexibility of markets under laissez faire allows prices, wages, and interest rates to adjust to abolish all excess supplies and demands.

Modern interpretations

A modern way of expressing Say's Law is that there can never be a general glut.[2] Instead of there being an excess supply (glut or surplus) of goods in general, there may be an excess supply of one or more goods but only when balanced by an excess demand (shortage) of yet other goods. Thus, there may be a glut of labor ("cyclical" unemployment), but that is balanced by an excess demand for produced goods. Modern advocates of Say's Law see market forces as working quickly—via price adjustment—to abolish both gluts and shortages. The exception would be the case where the government or other non-market forces prevent price changes.

According to Keynes, the implication of Say's "law" is that a free-market economy is always at what the Keynesian economists call full employment. Thus, Say's Law is part of the general world-view of laissez-faire economics, i.e., that free markets can solve the economy's problems automatically. (Here the problems are recessions, stagnation, depression, and involuntary unemployment.) There is no need for any intervention by the government or the central bank—such as the U.S. Federal Reserve—to help the economy attain full employment. All that the central bank needs to be concerned with is the prevention of inflation.

In fact, some proponents of Say's Law argue that such intervention is always counterproductive. Consider Keynesian-type policies aimed at stimulating the economy. Increased government purchases of goods (or lowered taxes) merely "crowd out" the private sector's production and purchase of goods. In contrast, Arthur Cecil Pigou—a self-proclaimed follower of Say's Law—wrote a letter in 1932, signed by five other economists (among them Keynes), calling for more public spending to alleviate high levels of unemployment.

From a modern macroeconomic viewpoint Say's Law is subject to dispute. John Maynard Keynes and many other critics of Say's Law have (incorrectly) paraphrased it as saying that "supply creates its own demand". Under this definition, once a producer has created a supply of a product, consumers will inevitably start to demand it. This interpretation allowed for Keynes to introduce his alternative perspective that "demand creates its own supply" (up to, but not beyond, full employment). Some call this "Keynes' law".

Keynes vs. Say

Keynesian economics places central importance on demand, believing that on the macroeconomic level, the amount supplied is primarily determined by effective demand or aggregate demand. For example, without sufficient demand for the products of labor, the availability of jobs will be low; without enough jobs, working people will receive inadequate income, implying insufficient demand for products. Thus, an aggregate demand failure involves a vicious circle: if I supply more of my labor-time (in order to buy more goods), I may be frustrated because no-one is hiring — because there is no increase in the demand for their products until after I get a job and earn an income. (Of course, most get paid after working, which occurs after some of the product is sold.) Note also that unlike the Say's law story above, there are interactions between different markets (and their gluts and shortages) that go beyond the simple price mechanism, to limit the quantity of jobs supplied and the quantity of products demanded.

Keynesian economists also stress the role of money in negating Say's Law. (Most would accept Say's Law as applying in a non-monetary or barter economy.) Suppose someone decides to sell a product without immediately buying another good. This would involve hoarding, increases in one's holdings of money (say, in a savings account). At the same time that it causes an increased demand for money, this would cause a fall in the demand for goods and services (an undesired increase in inventories (unsold goods) and thus a fall in production, if prices are rigid). This general glut would in turn cause a fall in the availability of jobs and the ability of working people to buy products. This recessionary process would be cancelled if at the same time there were dishoarding, in which someone uses money in his hoard to buy more products than he or she sells. (This would be a desired accumulation of inventories.)

Some classical economists suggested that hoarding would always be balanced by dishoarding. But Keynes and others argued that hoarding decisions are made by different people and for different reasons than decisions to dishoard, so that hoarding and dishoarding are unlikely to be equal at all times. (More generally, this is seen in terms of the equality of saving (abstention from purchase of goods) and investment in goods.)

Some have argued that financial markets and especially interest rates could adjust to keep hoarding and dishoarding equal, so that Say's Law could be maintained, or that prices could simply fall, to prevent a decrease in production. (See the discussion of "excess saving" under "Keynesian economics".) But Keynes argued that in order to play this role, interest rates would have to fall rapidly and that there were limits on how quickly and how low they could fall (as in the liquidity trap). To Keynes, in the short run, interest rates were determined more by the supply and demand for money than by saving and investment. Before interest rates could adjust sufficiently, excessive hoarding would cause the vicious circle of falling aggregate production (recession). The recession itself would lower incomes so that hoarding (and saving) and dishoarding (and real investment) could attain balance below full employment.

Worse, a recession would hurt private real investment, by hurting profitability and business confidence, in what is called the accelerator effect. This means that the balance between hoarding and dishoarding would be even further below the full employment level of production.

Keynesians believe that this kind of vicious circle can be broken by stimulating the aggregate demand for products using various macroeconomic policies mentioned in the introduction above. Increases in the demand for products leads to increased supply (production) and an increased availability of jobs, and thus further increases in demand and in production. This cumulative causation is called the multiplier process.
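The multiplier process lends itself to a minimal numerical sketch. The marginal propensity to consume and the size of the injection below are assumed for illustration, not figures from the text:

```python
# Sketch of the Keynesian multiplier: an initial demand injection is
# re-spent round after round at the marginal propensity to consume (MPC).
mpc = 0.8            # assumed: 80 cents of each extra dollar is re-spent
injection = 100.0    # assumed initial increase in demand

total = 0.0
spending = injection
for _ in range(1000):   # sum enough rounds for the series to converge
    total += spending
    spending *= mpc

# The geometric-series limit is injection / (1 - mpc), here 100 / 0.2 = 500:
# a 100-unit injection raises total demand by 500 units.
print(round(total, 2))
```

The design choice here is simply to sum the spending rounds explicitly, which makes the "cumulative causation" of the text visible; the closed form 1 / (1 − MPC) gives the same answer.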

Modern adherents of Say's Law

Economists such as Thomas Sowell (who wrote his doctoral dissertation on the idea) of the Chicago School have advocated Say's Law. Arthur Laffer, the supply-sider, also adhered to the law, as does the Austrian School.

Ultimatum game

The ultimatum game is an experimental economics game in which two players interact to decide how to divide a sum of money that is given to them. The first player proposes how to divide the sum between themselves, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once, and anonymously, so that reciprocation is not an issue.

Equilibrium analysis

For illustration, we will suppose there is a smallest division of the good available (say 1 cent). Suppose that the total amount of money available is x.

The first player chooses some amount p in the interval [0, x]. The second player chooses some function f: [0, x] → {"accept", "reject"} (i.e., the second chooses which divisions to accept and which to reject). We will represent the strategy profile as (p, f), where p is the proposal and f is the function. If f(p) = "accept", the first receives p and the second x − p; otherwise both get zero. (p, f) is a Nash equilibrium of the ultimatum game if f(p) = "accept" and there is no y > p such that f(y) = "accept" (i.e., p is the largest amount the second will accept the first receiving). The first player would not want to unilaterally increase his demand, since the second will reject any higher demand. The second would not want to reject the demand, since he would then get nothing.

There is one other Nash equilibrium, where p = x and f(y) = "reject" for all y > 0 (i.e., the second rejects all demands that give the first any amount at all). Here both players get nothing, but neither could get more by unilaterally changing his or her strategy.

However, only one of these Nash equilibria satisfies a more restrictive equilibrium concept, subgame perfection. Suppose that the first demands a large amount that gives the second some (small) amount of money. By rejecting the demand, the second is choosing nothing rather than something. So, it would be better for the second to accept any demand that gives her any amount whatsoever. If the first knows this, he will give the second the smallest (non-zero) amount possible.[1]
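With a smallest unit of one cent, this backward-induction argument can be verified by brute force. A minimal sketch, following the text's convention that the proposer demands p and the responder keeps x − p:

```python
# Brute-force check of the subgame-perfect outcome of the ultimatum game.
# Amounts are in cents; the proposer demands p, the responder keeps x - p.
x = 1000  # total pot: 1000 cents

def responder(p):
    # Backward induction: something beats nothing, so a rational responder
    # accepts any offer that leaves her a positive amount.
    return "accept" if x - p > 0 else "reject"

# The proposer picks the largest demand the responder will still accept.
best_p = max(p for p in range(x + 1) if responder(p) == "accept")
print(best_p)  # 999: the responder is left the smallest non-zero amount
```

As in the text, the subgame-perfect proposer keeps all but one cent; the many other Nash equilibria rely on responder threats to reject, which backward induction rules out.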

Experimental results

In many cultures, people offer "fair" (i.e., 50:50) splits, and offers of less than 20% are often rejected.[2] Research on monozygotic and dizygotic twins has shown that individual variation in reactions to unfair offers is partly genetic. [3]

Explanations

The results (along with similar results in the Dictator game) are taken to be evidence against the Homo economicus model of individual decisions. Since an individual who rejects a positive offer is choosing to get nothing rather than something, that individual must not be acting solely to maximize his economic gain. Several attempts to explain this behavior are available. Some authors suggest that individuals are maximizing their expected utility, but money does not translate directly into expected utility.[4] Perhaps individuals get some psychological benefit from engaging in punishment or receive some psychological harm from accepting a low offer.

The classical explanation of the ultimatum game as a well-formed experiment approximating general behaviour often leads to the conclusion that the Homo economicus model of economic self-interest is incomplete. However, several competing models suggest ways to bring the cultural preferences of the players within the optimized utility function of the players in such a way as to preserve the utility-maximizing agent as a feature of microeconomics. For example, researchers have found that Mongolian proposers tend to offer even splits despite knowing that very unequal splits are almost always accepted. Similar results from players in other small-scale societies have led some researchers to conclude that "reputation" is seen as more important than any economic reward.[5] Another way of integrating the conclusion with utility maximization is some form of inequity aversion model (preference for fairness). Even in anonymous one-shot settings, the outcome suggested by economic theory (minimum money transfer and acceptance) is rejected by over 80% of the players. This is true whether the players are on placebo or are infused with a hormone that makes them more generous in the ultimatum game.[6][7]

An explanation that was originally quite popular was the "learning" model, in which it was hypothesized that proposers' offers would decay towards the subgame-perfect Nash equilibrium (almost zero) as they mastered the strategy of the game. (This decay tends to be seen in other iterated games.) However, this explanation (bounded rationality) is less commonly offered now, in light of empirical evidence against it.[8]

It has been hypothesised (e.g. by James Surowiecki) that very unequal allocations are rejected only because the absolute amount of the offer is low. The concept here is that if the amount to be split were ten million dollars, a 90:10 split would probably be accepted rather than spurning a million-dollar offer. Essentially, this explanation says that the absolute amount of the endowment is not significant enough to produce strategically optimal behaviour. However, many experiments have been performed where the amount offered was substantial: studies by Cameron and Hoffman et al. have found that the higher the stakes are, the closer offers approach an even split, even in a 100 USD game played in Indonesia, where average 1995 per-capita income was 670 USD. Rejections are reportedly independent of the stakes at this level, with 30 USD offers being turned down in Indonesia, as in the United States, even though this equates to two weeks' wages in Indonesia.[9]

Neurologic explanations

Generous offers in the Ultimatum Game (offers exceeding the minimum acceptable offer) are commonly made. Zak, Stanton & Ahmadi (2007) [10] showed that two factors can explain generous offers: empathy and perspective taking. They varied empathy by infusing participants with intranasal oxytocin or placebo (blinded). They affected perspective-taking by asking participants to make choices as both player 1 and player 2 in the Ultimatum Game, with later random assignment to one of these. Oxytocin increased generous offers by 80% relative to placebo. Oxytocin did not affect the minimum acceptance threshold or offers in the Dictator Game (meant to measure altruism). This indicates that emotions drive generosity.

Rejections in the Ultimatum Game have been shown to be caused by adverse physiologic reactions to stingy offers [11]. In a brain imaging experiment by Sanfey et al., stingy offers (relative to fair and hyperfair offers) differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral disgust. If Player 1 in the Ultimatum Game anticipates this response to a stingy offer, they may be more generous.

People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with normal serotonin levels.[12]

Evolutionary game theory

Other authors have used evolutionary game theory to explain behavior in the Ultimatum Game.[13] Simple evolutionary models, e.g. the replicator dynamics, cannot account for the evolution of fair proposals or for rejections. These authors have attempted to provide increasingly complex models to explain fair behavior.

Sociological applications

The ultimatum game is important from a sociological perspective, because it illustrates the human unwillingness to accept injustice and social inequality.

The extent to which people are willing to tolerate different distributions of the reward from "cooperative" ventures results in inequality that is, measurably, exponential across the strata of management within large corporations. See also: Inequity aversion within companies.

Some see the implications of the Ultimatum game as profoundly relevant to the relationship between society and the free market, with Prof. P.J. Hill, (Wheaton College (Illinois)) saying:

“I see the [ultimatum] game as simply providing counter evidence to the general presumption that participation in a market economy (capitalism) makes a person more selfish.”[14]

History

The first ultimatum game was developed in 1982 as a stylized representation of negotiation by Werner Güth, Rolf Schmittberger, and Bernd Schwarze.[15] It has since become the most popular of the standard experiments in economics, and is said to be "catching up with the Prisoner's dilemma as a prime show-piece of apparently irrational behaviour."[16]

Variants

In the “Competitive Ultimatum game” there are many proposers and the responder can accept at most one of their offers: With more than three (naïve) proposers the responder is usually offered almost the entire endowment[17] (which would be the Nash Equilibrium assuming no collusion among proposers).

The “Ultimatum Game with tipping” – if a tip is allowed, from responder back to proposer the game includes a feature of the trust game, and splits tend to be (net) more equitable.[18]

The “Reverse Ultimatum game” gives more power to the responder by giving the proposer the right to offer as many divisions of the endowment as they like. Now the game only ends when the responder accepts an offer or abandons the game, and therefore the proposer tends to receive slightly less than half of the initial endowment.[19]

For a complete review of the ultimatum game in experiments, see "Evolving Economics: Synthesis" by Angela A. Stanton.[20]

Pareto principle

The Pareto principle (also known as the 80-20 rule, the law of the vital few and the principle of factor sparsity) states that, for many events, 80% of the effects come from 20% of the causes. Business management thinker Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who observed that 80% of income in Italy went to 20% of the population. It is a common rule of thumb in business; e.g., "80% of your sales comes from 20% of your clients."
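The "80% of sales from 20% of clients" rule of thumb can be checked directly against a dataset. A minimal sketch; the sales figures below are invented purely for illustration:

```python
# Hypothetical sales figures for 10 clients (invented for this example).
sales = [520, 310, 45, 30, 25, 20, 18, 15, 10, 7]

# Share of total sales contributed by the top 20% of clients.
sales_sorted = sorted(sales, reverse=True)
top_20_percent = sales_sorted[: max(1, len(sales_sorted) // 5)]
share = sum(top_20_percent) / sum(sales_sorted)
print(f"Top 20% of clients account for {share:.0%} of sales")  # 83% here
```

As the hedged "90/10" and "70/30" variants mentioned below suggest, real data rarely lands on exactly 80/20; the point is the concentration, not the precise ratio.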

It is worthy of note that some applications of the Pareto principle appeal to a pseudo-scientific "law of nature" to bolster non-quantifiable or non-verifiable assertions that are "painted with a broad brush".[citation needed] The fact that hedges such as the 90/10, 70/30, and 95/5 "rules" exist is sufficient evidence of the non-exactness of the Pareto principle. On the other hand, there is adequate evidence that "clumping" of factors does occur in most phenomena.[citation needed]

The Pareto principle is only tangentially related to Pareto efficiency, which was introduced by the same economist. Pareto developed both concepts in the context of the distribution of income and wealth among the population.

Prisoner's dilemma

The Prisoner's Dilemma constitutes a problem in game theory. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and gave it the "Prisoner's Dilemma" name (Poundstone, 1992).

In its "classical" form, the prisoner's dilemma (PD) is presented as follows:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies ("defects") for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

If we assume that each player prefers shorter sentences to longer ones, and that each gets no utility out of lowering the other player's sentence, and that there are no reputation effects from a player's decision, then the prisoner's dilemma forms a non-zero-sum game in which two players may each "cooperate" with or "defect" from (i.e., betray) the other player. In this game, as in all game theory, the only concern of each individual player ("prisoner") is maximizing his/her own payoff, without any concern for the other player's payoff. The unique equilibrium for this game is a Pareto-suboptimal solution—that is, rational choice leads the two players to both play defect even though each player's individual reward would be greater if they both played cooperatively.

In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. In simpler terms, no matter what the other player does, one player will always gain a greater payoff by playing defect. Since in any situation playing defect is more beneficial than cooperating, all rational players will play defect, all things being equal.

In the iterated prisoner's dilemma the game is played repeatedly. Thus each player has an opportunity to "punish" the other player for previous non-cooperative play. Cooperation may then arise as an equilibrium outcome. The incentive to defect is overcome by the threat of punishment, leading to the possibility of a cooperative outcome. So if the game is infinitely repeated, cooperation may be a subgame perfect Nash equilibrium although both players defecting always remains an equilibrium and there are many other equilibrium outcomes.

In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games; for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.


Strategy for the classical prisoner's dilemma

The classical prisoner's dilemma can be summarized thus:

                          Prisoner B stays silent      Prisoner B betrays
Prisoner A stays silent   Each serves 6 months         Prisoner A: 10 years
                                                       Prisoner B: goes free
Prisoner A betrays        Prisoner A: goes free        Each serves 5 years
                          Prisoner B: 10 years

In this game, regardless of what the opponent chooses, each player always receives a higher payoff (lesser sentence) by betraying; that is to say that betraying is the strictly dominant strategy. For instance, Prisoner A can accurately say, "No matter what Prisoner B does, I personally am better off betraying than staying silent. Therefore, for my own sake, I should betray." However, if the other player acts similarly, then they both betray and both get a lower payoff than they would get by staying silent. Rational self-interested decisions result in each prisoner's being worse off than if each chose to lessen the sentence of the accomplice at the cost of staying a little longer in jail himself. Hence a seeming dilemma. In game theory, this demonstrates very elegantly that in a non-zero sum game a Nash Equilibrium need not be a Pareto optimum.
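The dominance argument above can be verified mechanically. A minimal sketch, with the sentences (in years) taken from the story; "silent" = cooperate, "betray" = defect, and lower is better:

```python
# Map (A's action, B's action) -> (A's sentence, B's sentence) in years.
sentence = {
    ("silent", "silent"): (0.5, 0.5),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

# Betraying strictly dominates: whatever B does, A's sentence is
# strictly shorter by betraying than by staying silent.
for b_action in ("silent", "betray"):
    assert sentence[("betray", b_action)][0] < sentence[("silent", b_action)][0]

# Yet mutual silence leaves both better off than mutual betrayal:
# the Nash equilibrium is not a Pareto optimum.
assert sentence[("silent", "silent")][0] < sentence[("betray", "betray")][0]
print("betray is strictly dominant, but mutual silence is Pareto-better")
```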

Generalized form

We can expose the skeleton of the game by stripping it of the prisoner framing device. The generalized form of the game has been used frequently in experimental economics. The following rules give a typical realization of the game.

There are two players and a banker. Each player holds a set of two cards: one printed with the word "Cooperate", the other printed with "Defect" (the standard terminology for the game). Each player puts one card face-down in front of the banker. By laying them face down, the possibility of a player knowing the other player's selection in advance is eliminated (although revealing one's move does not affect the dominance analysis[1]). At the end of the turn, the banker turns over both cards and gives out the payments accordingly.

If player 1 (red) defects and player 2 (blue) cooperates, player 1 gets the Temptation to Defect payoff of 5 points while player 2 receives the Sucker's payoff of 0 points. If both cooperate they get the Reward for Mutual Cooperation payoff of 3 points each, while if they both defect they get the Punishment for Mutual Defection payoff of 1 point. The checkerboard payoff matrix showing the payoffs is given below.

Example PD payoff matrix
             Cooperate    Defect
Cooperate    3, 3         0, 5
Defect       5, 0         1, 1

In "win-lose" terminology the table looks like this:

             Cooperate             Defect
Cooperate    win-win               lose much-win much
Defect       win much-lose much    lose-lose

These point assignments are given arbitrarily for illustration. It is possible to generalize them, as follows:

Canonical PD payoff matrix
             Cooperate    Defect
Cooperate    R, R         S, T
Defect       T, S         P, P

Where T stands for Temptation to defect, R for Reward for mutual cooperation, P for Punishment for mutual defection, and S for Sucker's payoff. To be defined as a Prisoner's Dilemma, the following inequality must hold:

T > R > P > S

This condition ensures that the equilibrium outcome is defection, but that cooperation Pareto dominates equilibrium play. In addition to the above condition, if the game is repeatedly played by two players, the following condition should be added.[2]

2R > T + S

If that condition does not hold, then full cooperation is not necessarily Pareto optimal, as the players are collectively better off by having each player alternate between cooperate and defect.
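Both defining conditions can be expressed as short checks. A minimal sketch, using the canonical point values from the example matrix above (the function names are illustrative, not from any library):

```python
def is_prisoners_dilemma(T, R, P, S):
    """One-shot condition: Temptation > Reward > Punishment > Sucker."""
    return T > R > P > S

def iterated_pd_condition(T, R, S):
    """For repeated play, mutual cooperation must beat taking turns
    exploiting each other: 2R > T + S."""
    return 2 * R > T + S

# Canonical values T=5, R=3, P=1, S=0 satisfy both conditions.
T, R, P, S = 5, 3, 1, 0
print(is_prisoners_dilemma(T, R, P, S))   # True: 5 > 3 > 1 > 0
print(iterated_pd_condition(T, R, S))     # True: 6 > 5
```

With T = 6, R = 3, S = 0 the second condition would fail (6 = 6 is not > 6), and two players alternating exploitation would collectively match full cooperation.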

These rules were established by cognitive scientist Douglas Hofstadter and form the formal canonical description of a typical game of Prisoner's Dilemma.

A simple special case occurs when the advantage of defection over cooperation is independent of what the co-player does and the cost of the co-player's defection is independent of one's own action, i.e., T + S = P + R.

Human behavior in the Prisoner's Dilemma

One experiment based on the simple dilemma found that approximately 40% of participants played "cooperate" (i.e., stayed silent).[3]

The iterated prisoner's dilemma

If two players play the Prisoner's Dilemma more than once in succession, having memory of at least one previous game, it is called the iterated Prisoner's Dilemma. Among the results shown by Nobel Prize winner Robert Aumann in his 1959 paper is that rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome. Popular interest in the iterated prisoner's dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of Cooperation (1984), in which he reports on a tournament he organized in which participants had to choose their mutual strategy again and again, with memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.

Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially purely selfish, by natural selection.

The best deterministic strategy was found to be "Tit for Tat," which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his opponent did on the previous move. Depending on the situation, a slightly better strategy can be "Tit for Tat with forgiveness." When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1%-5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.
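Tit for Tat and its behaviour against an unconditional defector can be sketched in a few lines, using the canonical 5/3/1/0 payoffs from the matrix above (function and variable names are illustrative):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; afterwards copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

# (my move, their move) -> (my payoff, their payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Tit for Tat loses only the first round to Always Defect, then retaliates:
# 0 + 9*1 = 9 versus 5 + 9*1 = 14 over ten rounds.
print(play(tit_for_tat, always_defect, 10))   # (9, 14)
```

Against a copy of itself, the same `play` call yields (30, 30): unbroken mutual cooperation, illustrating why nice strategies dominated Axelrod's tournament.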

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.

Nice
The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore, a purely selfish strategy will not "cheat" on its opponent first, for purely utilitarian reasons.
Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.
Forgiving
Successful strategies must also be forgiving. Though players will retaliate, they will fall back to cooperating once the opponent stops defecting. This stops long runs of revenge and counter-revenge, maximizing points.
Non-envious
The last quality is being non-envious, that is, not striving to score more than the opponent (which is in any case impossible for a "nice" strategy: a "nice" strategy can never score more than its opponent).

Therefore, Axelrod reached the seemingly paradoxical conclusion that selfish individuals, for their own selfish good, will tend to be nice, forgiving, and non-envious.

The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the Tit-for-Tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being Tit-for-Tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.
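The dependence on the population mix and on game length described above can be made concrete with the closed-form match scores of Tit for Tat and Always Defect under the canonical 5/3/1/0 payoffs. A hedged sketch; `expected_score` is an illustrative helper, not a standard function:

```python
def expected_score(strategy, p_defectors, rounds):
    """Expected match score against an opponent drawn from a population
    that is p_defectors Always Defect, the rest Tit for Tat (T=5, R=3, P=1, S=0)."""
    if strategy == "TFT":
        vs_tft = 3 * rounds            # mutual cooperation every round
        vs_ad = 0 + (rounds - 1) * 1   # exploited once, then mutual defection
    else:                              # "AD": Always Defect
        vs_tft = 5 + (rounds - 1) * 1  # exploits once, then mutual defection
        vs_ad = rounds * 1             # mutual defection throughout
    return p_defectors * vs_ad + (1 - p_defectors) * vs_tft

# Long games favour Tit for Tat; very short games favour Always Defect.
print(expected_score("TFT", 0.5, 100) > expected_score("AD", 0.5, 100))  # True
print(expected_score("TFT", 0.5, 2) > expected_score("AD", 0.5, 2))      # False
```

In a two-round game Tit for Tat can never recoup the first-round loss, so Always Defect wins at any mix; over a hundred rounds the cooperation bonus against fellow Tit-for-Tat players outweighs the single-round loss unless defectors dominate almost completely.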

A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the first iteration and whenever the player and co-player did the same thing at the previous iteration; Pavlov defects when the player and co-player did different things at the previous iteration. For a certain range of parameters, Pavlov beats all other strategies by giving preferential treatment to co-players which resemble Pavlov.
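Pavlov's rule as described above can be sketched directly (illustrative code, with "C"/"D" for cooperate/defect):

```python
def pavlov(my_history, their_history):
    """Win-Stay, Lose-Switch: cooperate at the start and whenever both
    players made the same move last round; defect after a mismatch."""
    if not my_history:
        return "C"
    return "C" if my_history[-1] == their_history[-1] else "D"

# Against an unconditional defector, Pavlov alternates: the mismatch (C, D)
# makes it switch to D, then the match (D, D) makes it switch back to C.
mine, theirs = [], []
for _ in range(6):
    mine.append(pavlov(mine, theirs))
    theirs.append("D")
print("".join(mine))   # CDCDCD
```

That alternation against defectors is the weakness that confines Pavlov's advantage to the "certain range of parameters" mentioned above: it probes defectors repeatedly, whereas against a fellow Pavlov it settles into steady cooperation.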

Deriving the optimal strategy is generally done in two ways:

  1. Bayesian Nash Equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit-for-tat, 50% always cooperate) an optimal counter-strategy can be derived analytically.[4]
  2. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce Tit-for-Tat players (see for instance Chess 1988), but there is no analytic proof that this will always occur.

Although Tit-for-Tat is considered to be the most robust basic strategy, a team from Southampton University in England (led by Professor Nicholas Jennings and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers, Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary Iterated Prisoner's Dilemma competition, which proved to be more successful than Tit-for-Tat. This strategy relied on cooperation between programs to achieve the highest number of points for a single program. The University submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result,[5] this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.

This strategy takes advantage of the fact that multiple entries were allowed in this particular competition, and that the performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing). In a competition where one has control of only a single player, Tit-for-Tat is certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when analysing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework for analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that most probably Axelrod would not have allowed them if they had been submitted. It also relies on circumventing rules about the prisoner's dilemma in that there is no communication allowed between the two players. When the Southampton programs engage in an opening "ten move dance" to recognize one another, this only reinforces just how valuable communication can be in shifting the balance of the game.

If an iterated PD is going to be iterated exactly N times, for some known constant N, then it is always game theoretically optimal to defect in all rounds. The only possible Nash equilibrium is to always defect. The proof goes like this: one might as well defect on the last turn, since the opponent will not have a chance to punish the player. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. For cooperation to emerge between game theoretic rational players, the total number of rounds must be random, or at least unknown to the players. However, even in this case always defect is no longer a strictly dominant strategy, only a Nash equilibrium. The superrational strategy in this case is to cooperate against a superrational opponent, and in the limit of large fixed N, experimental results on strategies agree with the superrational version, not the game-theoretic rational one.

Another odd case is "play forever" prisoner's dilemma. The game is repeated infinitely many times, and the player's score is the average (suitably computed).

The prisoner's dilemma game is fundamental to certain theories of human cooperation and trust. On the assumption that the PD can model transactions between two people requiring trust, cooperative behaviour in populations may be modelled by a multi-player, iterated, version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma has also been referred to as the "Peace-War game".[6]

Continuous Iterated Prisoner's Dilemma

Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd[7] found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. The basic intuition for this result is straightforward: in a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, Tit-for-Tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since Nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of Tit-for-Tat-like cooperation are extremely rare in Nature (e.g., Hammerstein[8]) even though Tit-for-Tat seems robust in theoretical models.

Learning psychology and game theory

Where game players can learn to estimate the likelihood of other players defecting, their own behaviour is influenced by their experience of the others' behaviour. Simple statistics show that inexperienced players are more likely to have had, overall, atypically good or bad interactions with other players. If they act on the basis of these experiences (by defecting or cooperating more than they would otherwise) they are likely to suffer in future transactions. As more experience is accrued a truer impression of the likelihood of defection is gained and game playing becomes more successful. The early transactions experienced by immature players are likely to have a greater effect on their future playing than would such transactions affect mature players. This principle goes part way towards explaining why the formative experiences of young people are so influential and why, for example, those who are particularly vulnerable to bullying sometimes become bullies themselves.

The likelihood of defection in a population may be reduced by the experience of cooperation in earlier games allowing trust to build up.[9] Hence self-sacrificing behaviour may, in some instances, strengthen the moral fibre of a group. If the group is small the positive behaviour is more likely to feed back in a mutually affirming way, encouraging individuals within that group to continue to cooperate. This is allied to the twin dilemma of encouraging those people whom one would aid to indulge in behaviour that might put them at risk. Such processes are major concerns within the study of reciprocal altruism, group selection, kin selection and moral philosophy.

Douglas Hofstadter's Superrationality

Douglas Hofstadter in his Metamagical Themas proposed that the definition of "rational" that led "rational" players to defect is faulty. He proposed that there is another type of rational behavior, which he called "superrational", where players take into account that the other person is presumably superrational, like them. Superrational players behave identically, and know that they will behave identically. They take that into account before they maximize their payoffs, and they therefore cooperate.

This view of the one-shot PD leads to cooperation as follows:

  • Any superrational strategy will be the same for both superrational players, since both players will think of it.
  • Therefore, the superrational answer will lie on the diagonal of the payoff matrix.
  • When maximizing return from solutions on the diagonal, you cooperate.

However, if a superrational player plays against a rational opponent, he will serve a 10-year sentence, and the rational player will go free.

One-shot cooperation is observed in human culture, wherever religious and ethical codes exist.

Superrationality is not studied by academic economists, as standard definitions of rationality exclude any superrational behavior.

Morality

While it is sometimes thought that morality must involve the constraint of self-interest, David Gauthier famously argues that co-operating in the prisoner's dilemma on moral principles is consistent with self-interest and the axioms of game theory.[citation needed] In his opinion, it is most prudent to give up straightforward maximizing and instead adopt a disposition of constrained maximization, according to which one resolves to cooperate in the belief that the opponent will respond with the same choice, while in the classical PD it is explicitly stipulated that the response of the opponent does not depend on the player's choice. This form of contractarianism claims that good moral thinking is just an elevated and subtly strategic version of basic means-end reasoning.

Douglas Hofstadter expresses a strong personal belief that the mathematical symmetry is reinforced by a moral symmetry, along the lines of the Kantian categorical imperative: defecting in the hope that the other player cooperates is morally indefensible.[citation needed] If players treat each other as they would treat themselves, then they will cooperate.

Real-life examples

These particular examples, involving prisoners and bag switching and so forth, may seem contrived, but there are in fact many examples in human interaction as well as interactions in nature that have the same payoff matrix. The prisoner's dilemma is therefore of interest to the social sciences such as economics, politics and sociology, as well as to the biological sciences such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of Prisoner's Dilemma (PD). This wide applicability of the PD gives the game its substantial importance.

In politics

In political science, for instance, the PD scenario is often used to illustrate the problem of two states engaged in an arms race. Both will reason that they have two options, either to increase military expenditure or to make an agreement to reduce weapons. Neither state can be certain that the other one will keep to such an agreement; therefore, they both incline towards military expansion. The paradox is that both states are acting rationally, but producing an apparently irrational result. This could be considered a corollary to deterrence theory.

In science

In sociology or criminology, the PD may be applied to an actual dilemma facing two inmates. The game theorist Marek Kaminski, a former political prisoner, analysed the factors contributing to payoffs in the game set up by a prosecutor for arrested defendants (cf. References). He concluded that while the PD is the ideal game of a prosecutor, numerous factors may strongly affect the payoffs and potentially change the properties of the game.

In environmental studies, the PD is evident in crises such as global climate change. All countries will benefit from a stable climate, but any single country is often hesitant to curb CO2 emissions. The benefit to an individual country to maintain current behavior is greater than the benefit to all countries if behavior was changed, therefore explaining the current impasse concerning climate change.[10]

In program management and technology development, the PD applies to the relationship between the customer and the developer. Capt Dan Ward, an officer in the US Air Force, examined The Program Manager's Dilemma in an article published in Defense AT&L, a defense technology journal.[11]

In sports

PD frequently occurs in cycling races, for instance in the Tour de France. Consider two cyclists halfway through a race, with the peloton (larger group) at great distance behind them. The two riders often work together (mutual cooperation) by sharing the tough load of the front position, where there is no shelter from the wind. If neither of the riders makes an effort to stay ahead, the peloton will soon catch up (mutual defection). An often-seen scenario is one rider doing the hard work alone (cooperating), keeping the two ahead of the peloton. Nearer to the finish (where the threat of the peloton has disappeared), the game becomes a simple zero-sum game, with each rider trying to avoid at all costs giving a slipstream advantage to the other rider. If there was a (single) defecting rider in the preceding prisoners' dilemma, it is usually he who will win this zero-sum game, having saved energy in the cooperating rider's slipstream. The cooperating rider's attitude may seem extremely naive, but he often has no other choice when the two riders have different physical profiles. The cooperating rider typically has an endurance profile, whereas the defecting rider is more likely to be a sprinter. By continuously taking the front position, the 'cooperating' rider is merely trying to ride away from the defecting sprinter using his endurance advantage over long distances, thus avoiding a sprint duel at the finish, which he would be bound to lose even if the sprinting rider had cooperated. Just after the escape from the peloton, the endurance-sprinter difference matters less, and it is therefore at this stage of the race that mutual-cooperation PD can usually be observed. Arguably, it is this almost unavoidable presence of PD (and its transition into a zero-sum game) that, perhaps unconsciously, makes cycling an exciting sport to watch.

PD hardly applies to running sports, because of the negligible importance of air resistance (and shelter from it).

In high school wrestling, sometimes participants intentionally lose unnaturally large amounts of weight so as to compete against lighter opponents. In doing so, the participants are clearly not at their top level of physical and athletic fitness and yet often end up competing against the same opponents anyway, who have also followed this practice (mutual defection). The result is a reduction in the level of competition. Yet if a participant maintains their natural weight (cooperating), they will most likely compete against a stronger opponent who has lost considerable weight.

In economics

Advertising is sometimes cited as a real life example of the prisoner’s dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The effectiveness of Firm A’s advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period the advertising cancels out, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses there is no dominant strategy and this is not a prisoner's dilemma but rather is an example of a stag hunt. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium. Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the creation of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry.[9] This analysis is likely to be pertinent in many other business situations involving advertising.

Members of a cartel are also involved in a (multi-player) prisoners' dilemma. 'Cooperating' typically means keeping prices at a pre-agreed minimum level. 'Defecting' means selling under this minimum level, instantly stealing business (and profits) from other cartel members. Ironically, anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.

In law

The theoretical conclusion of PD is one reason why, in many countries, plea bargaining is forbidden. Often, precisely the PD scenario applies: it is in the interest of both suspects to confess and testify against the other prisoner/suspect, even if each is innocent of the alleged crime. Arguably, the worst case is when only one party is guilty — here, the innocent one is unlikely to confess, while the guilty one is likely to confess and testify against the innocent.

In the media

In the 2008 edition of Big Brother (UK), the dilemma was applied to two of the housemates. A prize fund of £50,000 was available. If housemates chose to share the prize fund, each would receive £25,000. If one chose to share, and the other chose to take, the one who took it would receive the entire £50,000. If both chose to take, both housemates would receive nothing. The housemates had a minute to discuss their decision, and were free to lie. Both housemates declared they would share the prize fund, but either could have potentially been lying. When asked to give their final answers by Big Brother, both housemates did indeed choose to share, and so won £25,000 each.

Multiplayer dilemmas

Many real-life dilemmas involve multiple players. Although metaphorical, Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the PD: Each villager makes a choice for personal gain or restraint. The collective reward for unanimous (or even frequent) defection is very low payoffs (representing the destruction of the "commons"). Such multi-player PDs are not formal as they can always be decomposed into a set of classical two-player games. The commons are not always exploited: William Poundstone, in a book about the Prisoner's Dilemma (see References below), describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for someone to take a paper without paying (defecting) but very few do, feeling that if they do not pay then neither will others, destroying the system.

Because there is no mechanism by which one player's choice can influence the others' decisions, this kind of reasoning relies on correlations between behaviours, not on causation. Because of this property, those who do not understand superrationality often mistake it for magical thinking. Without superrationality, not only petty theft but also voluntary voting would require widespread magical thinking, since a non-voter is a free rider on the democratic system.


Related games

Closed-bag exchange

Hofstadter[12] once suggested that people often find problems such as the PD easier to understand when they are illustrated in the form of a simple game or trade-off. One of several examples he used was the "closed-bag exchange":

Two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honour the deal by putting into his bag what he agreed, or he can defect by handing over an empty bag.

In this game, defection is always the best course, implying that rational agents will never play. Moreover, if the trade itself carries no surplus, both players cooperating and both players defecting give the same result, so the chances of mutual cooperation, even in repeated games, are few.
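The closed-bag exchange can be sketched as a small matrix game. The numeric payoffs below are hypothetical, chosen under the assumption that an honest trade carries no surplus (you gain goods worth exactly what you hand over), which is the reading under which mutual cooperation and mutual defection give the same result:

```python
# Payoff to the row player in the closed-bag exchange (illustrative numbers).
PAYOFFS = {
    # (my move, their move): my payoff
    ("honour", "honour"): 0,   # fair trade, no surplus: you gain what you give
    ("honour", "defect"): -1,  # you hand over your side and receive an empty bag
    ("defect", "honour"): 1,   # you keep your side and take theirs
    ("defect", "defect"): 0,   # two empty bags: same as never trading at all
}

# Defection strictly dominates: whatever the other player does, defecting pays more.
for their_move in ("honour", "defect"):
    assert PAYOFFS[("defect", their_move)] > PAYOFFS[("honour", their_move)]

# Mutual cooperation and mutual defection give the same payoff, so there is
# nothing to gain from establishing cooperation even in repeated play.
print(PAYOFFS[("honour", "honour")] == PAYOFFS[("defect", "defect")])  # True
```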

Friend or Foe?

Friend or Foe? is a game show that aired from 2002 to 2005 on the Game Show Network in the United States. It is an example of the prisoner's dilemma played by real people, albeit in an artificial setting. On the show, three pairs of people compete. As each pair is eliminated, it plays a game of prisoner's dilemma to determine how the winnings are split. If both players cooperate (Friend), they share the winnings 50-50. If one cooperates and the other defects (Foe), the defector gets all the winnings and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the payoff matrix is slightly different from the standard one given above: the payouts for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes "both defect" a weak equilibrium, compared with a strict equilibrium in the standard prisoner's dilemma. If you know your opponent is going to vote Foe, your own choice does not affect your winnings. In this sense, Friend or Foe? has a payoff model between the prisoner's dilemma and the game of Chicken.

The payoff matrix (row player's payoff listed first) is:

              Cooperate   Defect
  Cooperate     1, 1       0, 2
  Defect        2, 0       0, 0

This payoff matrix was later used on the British television programmes Shafted and Golden Balls.
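The weak-versus-strict distinction described above can be checked mechanically. In this sketch, the standard PD values (3, 0, 5, 1) are the common textbook numbers, assumed here purely for comparison rather than taken from this article:

```python
# Row player's payoffs for Friend or Foe? (from the matrix above) and for a
# standard prisoner's dilemma with the usual textbook values (an assumption).
FRIEND_OR_FOE = {
    ("C", "C"): 1, ("C", "D"): 0,
    ("D", "C"): 2, ("D", "D"): 0,
}
STANDARD_PD = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defect_advantage_vs_defector(game):
    """How much defecting beats cooperating when the opponent defects."""
    return game[("D", "D")] - game[("C", "D")]

# Zero advantage means (Defect, Defect) is only a weak equilibrium:
# against a Foe vote, your own choice does not change your winnings.
print(defect_advantage_vs_defector(FRIEND_OR_FOE))  # 0
# A strictly positive advantage makes it a strict equilibrium.
print(defect_advantage_vs_defector(STANDARD_PD))    # 1
```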
