== Part 1: Introduction and Terms ==

'''What is game theory and where did it come from?'''
Game theory is a discipline of mathematics that gives an axiomatic way to represent and examine systems of rational actors. It uses the suspected preferences of the actors to predict the outcomes of games with certain rules and conditions. For example, we could use game theory to determine the prices that two competing firms in a local duopoly should set in order to maximize each of their profits.

While some situations studied in modern game theory trace their roots all the way back to ancient Greece, game theory first emerged as a formal mathematical sub-discipline in the 1940s, from the work of mathematician John von Neumann and economist Oskar Morgenstern. Their early formulation was limited in scope, dealing almost entirely with games between two actors and of the “zero sum” variety, meaning one player's losses are exactly the other player's winnings. Since then, game theory has been applied to a huge range of other fields: most notably economics, but also political science, evolutionary biology, computer science, management, philosophy, epidemiology, and any other discipline that involves competition among self-interested agents.
  
 
'''Key Definitions'''

There are many types of games, conditions, and actors that can be studied in game theory. For completeness, most of those terms are described here even when they are not needed for the applications below, as they are important from a pedagogical point of view and help illustrate the variety of situations in which game theory can be useful.

'''Player''' - a rational actor looking to maximize his own outcomes in a game (agent, actor, and player are used interchangeably)

'''Game''' - a set of strategies that players can undertake whose combinations lead to different possible outcomes

'''Strategy Space''' - the space of all possible pure strategies with which each player can play the game; the ways a single player X can play the game form player X's strategy space (denoted <math>S(X)</math> or <math>S_X</math> interchangeably)

'''Pure Strategy''' - a strategy is pure if it provides a definitive move to make in every possible situation

'''Mixed Strategy''' - a strategy is mixed when a player assigns a probability distribution over the pure strategies in their strategy space. Consider a penalty shootout: the kicker has a dominant foot, so a pure strategy may be to kick with that foot every time, but the goalie may then defend that side more heavily, leading to a worse outcome; by switching from foot to foot the kicker employs a mixed strategy that creates a better outcome. [Switching was observed to be the better strategy in real life by Chiappori, Levitt, and Groseclose (2002)]

'''Dominant Strategy''' - a strategy a player should always undertake to reach their optimal outcome, regardless of the other players' actions

'''Strategy Profile''' - a group formed by choosing one strategy for every player

'''Outcome Space''' - the space of all possible outcomes that can be reached in a single game

'''Simple Game''' - a game in which each player has only two outcomes, winning or losing

'''Cooperative Game''' - a game is cooperative when agents can form alliances/coalitions that cannot be violated; a game is noncooperative otherwise

'''Simultaneous Game''' - a game is simultaneous if all players move at the same time, or if they lack knowledge of previous moves, making their moves effectively simultaneous; otherwise the game is sequential

'''Symmetric Game''' - a game in which the outcomes are determined solely by the strategies employed, not by who is playing them; in other words, all the players are identical. An example is the prisoner's dilemma, which we showcase as our prime introductory example.
 
</div>

Trivially, we can see that both prisoners staying quiet is the mutually ideal outcome. However, we can show that the Nash equilibrium is when both prisoners snitch. If A stays quiet, he runs the risk of being imprisoned for 10 years, so it benefits him to switch to snitching, where he will either go free or serve 5 years. The same goes for B, so neither benefits from staying quiet. The equilibrium is therefore not the ideal outcome: they could both serve only 2 years, but without knowledge of the other prisoner’s intentions they can never risk staying quiet. In game theory we say this situation is not ''Pareto efficient''. An outcome is Pareto efficient (or Pareto optimal) when no player can be made better off without making another player worse off; here the equilibrium is not Pareto efficient because both staying quiet would make both prisoners better off.
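The reasoning above can be checked mechanically. Below is a short Python sketch (our own illustration, not part of the original article) that brute-forces the pure-strategy Nash equilibria of the dilemma, with payoffs written as negative years served:

```python
from itertools import product

# payoff[(a, b)] = (A's payoff, B's payoff); strategies: 0 = stay quiet, 1 = snitch.
# Sentences are taken from the text: 2 years each if both stay quiet,
# 5 years each if both snitch, 10 years vs. freedom otherwise.
payoff = {
    (0, 0): (-2, -2),
    (0, 1): (-10, 0),
    (1, 0): (0, -10),
    (1, 1): (-5, -5),
}

def is_nash(a, b):
    """Neither player gains by unilaterally switching strategies."""
    ua, ub = payoff[(a, b)]
    best_a = all(ua >= payoff[(a2, b)][0] for a2 in (0, 1))
    best_b = all(ub >= payoff[(a, b2)][1] for b2 in (0, 1))
    return best_a and best_b

equilibria = [s for s in product((0, 1), repeat=2) if is_nash(*s)]
print(equilibria)  # [(1, 1)] — the only pure equilibrium is both snitching
```

Note that the search confirms that (quiet, quiet), despite being better for both, is not an equilibrium: each prisoner gains by deviating.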
  
 
Now that we’ve seen a trivial example to familiarize ourselves with the concepts, the following non-trivial applications should be clearer.
 
In 1838, long before John Nash formalized the notion of equilibrium (and indeed before his birth), Antoine Augustin Cournot showed that a stable equilibrium exists in his model of competition between two producers: "If either of the producers, misled as to his true interest, leaves it temporarily, he will be brought back to it."
  
To demonstrate Cournot's model of competition, we will look into a simple example of the wall clock market of West Lafayette.

<div style="display: flex; justify-content: center;">

<div style="margin-right: 20px">

Suppose Boilermaker Clockery, denoted <math>BC</math>, is the only player in the market. Wall clocks are the only product the firm produces and retails, and the objective of the game is to maximize total profit. The strategy space of a player is the set of all quantities of wall clocks it could produce (we will treat price as a function of quantity), so we write <math>q \in S_{BC}=\mathbb{Z}_+</math> for quantity.
 
</div>

[[File:BoilermakerClockery.jpg|256px|thumbnail|center]]
 
However, the monopoly is no longer possible, as the infamous IU Horology Inc. decides to enter the West Lafayette market.

There are now two clock producers in the market, denoted by the set <math>\{BC, IU\}</math>, and <math>\forall i \in \{BC, IU\}</math>, <math>q_i \in S_i=\mathbb{Z}_+</math> denotes the quantity produced by player <math>i</math>.
 
</div>

[[File:IUHorologyInc.jpg|256px|thumbnail|center]]
  
 
<div style="display: flex; justify-content: center;">

<math>P_i(q_i, q_j)=q_i\cdot\left(p(q_i+q_j)-c\right)</math>
 
</div>

the payoff function of player <math>i</math>, which takes the strategies of both players as parameters. A Nash Equilibrium is reached when the payoff functions of both players are maximized by picking best strategies <math>q^*</math>. This is achieved by finding the best response function of each player to the opponent's strategy, <math>BR_i(q_j)</math>.
  
 
Let's first find the maximum of Boilermaker Clockery's payoff,
 
<math>
\begin{align}
    P_{BC}(q_{BC}, q_{IU})
    =&q_{BC}\cdot\left(\frac{20{,}000-(q_{BC}+q_{IU})}{200}-5\right)\\
    =&\frac{20{,}000q_{BC}}{200}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200}-5q_{BC}\\
    =&95q_{BC}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200}
\end{align}
</math>
 
<math>
\begin{align}
    \frac{\partial}{\partial q_{BC}}P_{BC}(q_{BC}, q_{IU})
    =&95-\frac{q_{BC}}{100}-\frac{q_{IU}}{200}\\
    \frac{\partial^2}{\partial q_{BC}^2}P_{BC}(q_{BC}, q_{IU})
    =&-\frac1{100}<0
\end{align}
</math>
 
<math>
\begin{align}
    \frac{\partial}{\partial q_{BC}}P_{BC}\left(BR_{BC}(q_{IU}), q_{IU}\right)
    =&95-\frac{BR_{BC}(q_{IU})}{100}-\frac{q_{IU}}{200}
    =0\\
    BR_{BC}(q_{IU})
    =&\begin{cases}
        \left.100(95-\frac{q_{IU}}{200})
        =9500-\frac{q_{IU}}2\right|_{q_{IU}\leq19{,}000}\\
        \left.0\right|_{q_{IU}>19{,}000}\because q_{BC}\in\mathbb{Z_+}
    \end{cases}
\end{align}
</math>
 
<math>
\begin{align}
    q^*_{BC}
    =&\frac{19{,}000}3\\
    \approx&6333
\end{align}
</math>
 
<math>
    q^*_{IU}
    =\frac{19{,}000}3
    \approx6333
</math>

</div>
  
Both firms end up producing more than they would have produced had they been able to coordinate to maximize joint profit (each firm's equilibrium quantity exceeds half the monopoly quantity). The profit is lower because the market unit price must fall so that all the extra clocks produced can be sold.
  
 
<div style="display: flex; justify-content: center;">
<math>
\begin{align}
    p(q_{total})
    =&p(q^*_{BC}+q^*_{IU})\\
    =&\frac{20{,}000-q^*_{BC}-q^*_{IU}}{200}\\
    =&\frac{20{,}000-2\times6333}{200}\\
    =&36.67
\end{align}
</math>

</div>

The revenue for each of the two firms would be

<div style="display: flex; justify-content: center;">
<math>
\begin{align}
    R(q^*_{BC})
    =R(q^*_{IU})
    =&R(6333)\\
    =&36.67 \cdot 6333\\
    =&232{,}231.11
\end{align}
</math>

</div>

and the profit would be

<div style="display: flex; justify-content: center;">
<math>
\begin{align}
    P(q^*_{BC})
    =P(q^*_{IU})
    =&P(6333)\\
    =&R(6333)-C(6333)\\
    =&232{,}231.11-5\times6333\\
    =&200{,}566.11
\end{align}
</math>

</div>

Obviously, Boilermaker Clockery cannot make nearly as much as the <math>451{,}250</math> dollars it made during its monopoly. More importantly, the total profit made by the two firms is still less than the profit Boilermaker Clockery made alone before IU Horology Inc. entered the competition.

<div style="display: flex; justify-content: center;">

<math>P(q^*_{BC})+P(q^*_{IU})=401{,}132.22<451{,}250</math>
 
</div>
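The whole derivation can be checked numerically by iterating the best-response function until the quantities converge. The sketch below is our own illustration (function names are ours); it uses the demand curve and unit cost from the text, treats quantities as real numbers rather than integers (so the equilibrium comes out as 19,000/3 exactly), and reproduces the monopoly benchmark of 9500 clocks and 451,250 dollars referenced above.

```python
# Inverse demand p(q) = (20,000 - q)/200 and unit cost c = 5, from the text.

def price(q_total):
    """Market price given the total quantity produced."""
    return (20_000 - q_total) / 200

def profit(q_i, q_j, c=5):
    """Profit of a firm producing q_i while its rival produces q_j."""
    return q_i * (price(q_i + q_j) - c)

def best_response(q_j):
    """From the first-order condition: BR(q_j) = 9500 - q_j/2, clamped at 0."""
    return max(0.0, 9_500 - q_j / 2)

# Monopoly benchmark: the best response to a rival who produces nothing.
q_mono = best_response(0)
print(q_mono, profit(q_mono, 0))   # 9500.0 451250.0

# Duopoly: repeatedly best-respond until the quantities stop changing.
q_bc = q_iu = 0.0
for _ in range(200):
    q_bc, q_iu = best_response(q_iu), best_response(q_bc)

print(q_bc, q_iu)                  # both converge to 19,000/3 ≈ 6333.33
print(2 * profit(q_bc, q_iu))      # total duopoly profit, below 451,250
```

The iteration illustrates Cournot's original observation: starting from any quantities, alternating best responses are "brought back" to the same stable equilibrium.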
  
 
== Part 3: Game Theory and Evolution ==

Game theory has had surprisingly many applications in biology, especially concerning evolution and the strategies species employ. While species do not choose which strategies they adopt, the process of evolution results in species adopting varied strategies through mutation. The strategy space of a population is the set of all phenotypes that could result from mutation, and the game is symmetric since the same strategies are theoretically available to every population. In the game of evolution, the fitness of a phenotype determines its payoff (the cost of a strategy subtracted from its benefits), and the goal is to slowly optimize a strategy toward the best fitness possible.
  
 
To demonstrate the principles of evolutionary game theory, we will use lions as an example. The fitness of a lion depends on its efficiency at obtaining food and the number of offspring it produces. When hunting, lions exert energy (a cost) in order to obtain food that restores energy (a benefit). Subtracting the cost of hunting from the benefit of food gives the profit of hunting. Maximizing this profit allows lions to spend their energy in other ways, such as defending against attackers or supporting offspring.
  
Lions are an example of an evolutionarily stable strategy, meaning that any small change in strategy will reduce fitness in some way. A group of lions might benefit from becoming larger, increasing strength relative to other lions and allowing them to win territorial disputes; but larger animals consume more energy, so this change may increase the cost more than the benefit, reducing profit. Another group might benefit from becoming smaller, spending less energy than other lions; but smaller lions may be less successful at hunting, so this change may decrease the benefit more than the cost, again reducing profit. Since no competing phenotype can invade, or take over, the existing population, the regular lions represent the evolutionarily stable strategy.
  
 
<div style="display: flex; justify-content: center;">
 
</div>
  
The employment of an evolutionarily stable phenotype is a Nash equilibrium, since any change in strategy by any one player (population) is purely detrimental, so changes in strategy do not stick around. Changes in strategy still occur, since mutation is random, but these changes lose out in competition with the evolutionarily stable strategy. Stable does not mean the strategy is impermeable, since changes in the environment can necessitate evolution, but so long as everything around an evolutionarily stable species stays the same, that species will not adjust its strategy (mutations will not win out).

The Hawk-Dove Game, or Game of Chicken, is a close variant of the Prisoner's Dilemma. In this game, both players want to take some resource V. If a player chooses to be a Hawk, they fight for control of the resource; if a player chooses to be a Dove, they aim for a peaceful resolution. If two doves meet, they share equally, each receiving V/2. If a hawk and a dove meet, the hawk scares away the dove, so the hawk gets V while the dove gets 0. If two hawks meet, they fight over the resource, so their gains are reduced by some value C corresponding to the cost of fighting; they each get (V-C)/2 on average.
  
 
<div style="display: flex; justify-content: center;">
 
</div>
  
This results in the emergence of territorial behavior in certain species, because territorial populations gain a higher payoff over many generations. If two populations A and B both engage in Hawk-Dove games for territorial control, both seek to maximize their own gains whenever possible. When V > C, the Nash equilibrium is Hawk-Hawk: if B picks Dove, A gets V by picking Hawk rather than V/2, and if B picks Hawk, A still does better picking Hawk, since (V-C)/2 > 0 beats the 0 a dove would receive. Neither player benefits from switching off of Hawk, and they are not allowed to cooperate and pick Dove-Dove, so, just as in the Prisoner's Dilemma, the equilibrium is not Pareto efficient. This holds as long as the resource benefit V is strictly greater than the cost of fighting C. If C ≥ V, the symmetric pure-strategy equilibrium disappears, since A should pick Hawk if B picks Dove but Dove if B picks Hawk; what remain are the asymmetric pure equilibria (Hawk, Dove) and (Dove, Hawk), and a symmetric mixed equilibrium in which each player plays Hawk with probability V/C.
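This analysis is easy to verify numerically. The sketch below is our own illustration (the V and C values are hypothetical); it brute-forces the pure-strategy equilibria from the payoff rules given above, for both the V > C and C > V cases:

```python
def payoffs(a, b, V, C):
    """Payoff to a player choosing `a` against an opponent choosing `b` ('H' or 'D')."""
    if a == 'H' and b == 'H':
        return (V - C) / 2        # two hawks fight and split the damaged prize
    if a == 'H' and b == 'D':
        return V                  # hawk scares off the dove
    if a == 'D' and b == 'H':
        return 0                  # dove retreats with nothing
    return V / 2                  # two doves share equally

def pure_equilibria(V, C):
    """All pure-strategy profiles where neither player gains by deviating."""
    strategies = ('H', 'D')
    eqs = []
    for a in strategies:
        for b in strategies:
            a_best = all(payoffs(a, b, V, C) >= payoffs(a2, b, V, C) for a2 in strategies)
            b_best = all(payoffs(b, a, V, C) >= payoffs(b2, a, V, C) for b2 in strategies)
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

print(pure_equilibria(V=10, C=4))   # [('H', 'H')]: Hawk dominates when V > C
print(pure_equilibria(V=10, C=16))  # [('H', 'D'), ('D', 'H')]: only asymmetric equilibria
```

With V > C the search finds Hawk-Hawk as the unique equilibrium; with C > V the symmetric pure equilibrium vanishes and only the two asymmetric profiles remain.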
  
 
<!-- End Part 3 -->
  
  
== Part 4: Societal Implications and Summary ==

We have seen just a few of the many useful applications of game theory in the real world, spanning the sciences, economics, and sociology/political science, all drawing on a theory that presents as an unassuming economic tool. Game theory can be, and is, used as more than a tool for businesses to maximize their profits: it is a general axiomatic system for decision making. At its core, human decision making is nothing more than an analysis of tradeoffs and outcomes, a process that game theory formalizes.

Despite its usefulness, we must remember that whenever we simplify complex societal problems down to a form we can neatly define in terms of outcome spaces, payoff matrices, and equilibria, we make choices about what information to cut out. The very basis of game theory assumes that all players are rational actors, and we often ignore the irrational biases, opinions, and feelings that play a role in human decision making. As mathematicians, economists, and political scientists, we must remember that societal problems are often far more complex than they seem, and take care not to marginalize groups of real-world actors who don't fit neatly into our models. Take, for example, the Downsian model of electoral competition below, in which we assume voters are rational and care only about substantive facets of the candidates' campaigns. Real-world studies have documented that voters often care about more than what is considered "rational": the colors candidates wear, their age, their family life, their speaking cadence, and so on can all severely affect certain voters' opinions, and none of these factors enter the analysis, which can skew its results. This does not mean the modeling is unimportant, as it leads to vital conclusions about reducing systemic barriers to voting and political information. But, as with any model, we must account for its limitations when we use it as a tool.
 
'''Basic Logic'''

In the Downsian model, all players are rational: each player's decision about an election is determined by which expected outcome would benefit them more.

For example, suppose a player must choose between candidate A (the incumbent) and candidate B. Before voting, a potential voter has a decision to make.

Case 1. Voting costs some amount of effort: learning what kind of people the candidates are, gathering information about their opinions and stances on key issues, and logistical challenges such as getting to the polling place or taking time off work to vote. The players (voters) weigh these costs against what they believe the expected benefits of voting are. If the expected benefit outweighs the effort, the player is more willing to participate in the election, and vice versa.

Let's say for our purposes that the player chooses to vote.

Case 2. If the player likes A, he votes for A. If the player likes neither candidate, he votes for the candidate he dislikes the least.

Case 3. Under the same circumstances, if the player believes he would gain more under B's policies, he votes for B; if he believes he would gain more under A's, he votes for A.

Case 4. If the player expects to gain the same under either candidate, he compares their strategies: he forecasts based on what each party has done in the past and votes for whichever he expects to produce the better outcome.
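The cost-benefit reasoning in Case 1 is often formalized as the "calculus of voting" (Downs, later extended by Riker and Ordeshook): a citizen votes when p·B + D − C > 0, where p is the probability the vote is decisive, B the benefit if the preferred candidate wins, D the sense of civic duty, and C the cost of voting. A tiny illustration with hypothetical numbers:

```python
def votes(p, B, D, C):
    """Calculus of voting: vote iff expected benefit plus duty exceeds the cost.
    p: probability the vote is decisive; B: benefit if the preferred candidate wins;
    D: sense of civic duty; C: cost of voting."""
    return p * B + D - C > 0

# In a large electorate p is tiny, so p*B alone rarely outweighs the cost...
print(votes(p=1e-6, B=1000, D=0.0, C=5.0))   # False
# ...so turnout hinges on a sense of duty D and on lowering the cost C.
print(votes(p=1e-6, B=1000, D=6.0, C=5.0))   # True
```

This matches the turnout predictions that follow: anything that raises p (smaller districts, closer races), raises D (civic responsibility), or lowers C (cheap information, easy polling access) increases participation.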

'''Voter Turnout'''

Based on the Downsian model we've described, six factors affect turnout:

1. The bigger the population, the lower the turnout is as a percentage.

2. The more one-sided the election is perceived to be, the lower the turnout.

3. The less important the election, the lower the turnout.

4. The closer the views of the candidates, the lower the turnout.

5. The higher the cost of gathering information, the lower the turnout.

6. The greater the players' sense of participation and civic responsibility, the higher the turnout.

So, to increase the equilibrium participation rate, we can lower the size of voting districts, foster competitive elections (Downs argues we can do this by moving away from two-party systems), run clearly distinct candidates, make news and political information easily accessible, and try to create a communal sense of civic responsibility. This theoretical analysis matches our intuitions about voting closely: it makes sense that people are more likely to vote if they know more about the candidates and feel that their vote matters. Thus we can show certain facts about electoral design using a mathematical framework backed by both theory and intuition.
  
 
<!-- End Part 4 -->
 
 
  
 
== References ==


Key Definitions

There are many types of games, conditions, and actors that can be studied in game theory, for completeness most if not all of those terms will be described here even if they are not needed for the applications below as they are important from a pedagogical point of view and can help better illustrate the variety of applications game theory can be useful in.

Player - a rational actor looking to maximize his own outcomes in a game (agent, actor, player, etc can all be used interchangeably and mean the same thing)

Game - a set of strategies that actors/players can undertake whose combinations can lead to different possible outcomes

Strategy Space - the set of all possible pure strategies with which a player can play the game; the ways a single player X can play the game form player X's strategy space (denoted S(X) or SX interchangeably)

Pure Strategy - a strategy is pure if it provides a definitive move to make in every possible situation

Mixed Strategy - a strategy is mixed when a player assigns a probability distribution over the pure strategies in their strategy space. Mixed strategies may be easier to understand through the case of a penalty shootout: the kicker has a dominant foot, so a pure strategy would be to kick with the dominant foot every time, but the goalie may then defend that side more heavily, leading to a worse outcome. By switching from foot to foot, the kicker employs a mixed strategy that creates a better outcome. [Switching was observed to be the better strategy in real life by Chiappori, Levitt, and Groseclose (2002)]

Dominant strategy - a strategy that exists for a player when they have one strategy they should always undertake to reach their optimal outcome regardless of other players’ actions

Strategy Profile - a group formed by choosing one strategy for every player

Outcome Space - the total space of all possible outcomes that can be reached in a single game

Simple Game - a game where each player has only two outcomes, winning or losing

Cooperative Games - a game is cooperative when agents can form alliances/coalitions that cannot be violated; a game is noncooperative otherwise

Simultaneous Game - a game is simultaneous if players all move at the same time, or they lack knowledge of previous moves making them effectively simultaneous. If a game is not simultaneous it is considered sequential.

Symmetric Game - a symmetric game is a game in which the outcomes are determined solely based on the strategies employed, not on who is playing them. In other words, all the players are identical, an example of this type is the prisoner's dilemma which we will showcase as our prime introductory example

(In)finite Game - games that can be finished in finitely many moves are considered finite; games that can go on forever, with no player able to achieve a winning or losing strategy, can be considered infinite. The games we will discuss are finite, as there is a lack of mathematically rigorous infinite games to study; however, an interested reader may want to examine a potential, not so rigorous, foreign policy application of the infinite game here (https://youtu.be/0bFs6ZiynSU)

Best Response Correspondence - the choice, or response, a player makes to maximize their own outcome given the other players' actions. Notation: in this project we'll represent the best response of player i to an opposing strategy X with BRi(X)

Nash Equilibrium - a strategy profile where no player benefits from altering their chosen strategy, essentially it is a “solution” to a noncooperative game. John Nash proved in 1951 that at least one such equilibrium must exist for all games with a finite set of actions.

We can represent games with tables and matrices where each dimension represents a player and the corresponding cells represent the expected outcomes for each combination of individual players’ strategies. We can use these tables to quickly find equilibria and make best response calculations/curves. We will use representations such as these in the coming examples. Now that we've covered all the background, we'll start to make the block of definitions more concrete and interesting with the classic example of game theory, the prisoner's dilemma.
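To make the mixed-strategy definition above concrete, here is a small Python sketch of a penalty-shootout game. The scoring probabilities are hypothetical numbers chosen purely for illustration (they are not estimates from the Chiappori, Levitt, and Groseclose study); the point is that each player's equilibrium mix makes the opponent indifferent between their pure strategies.

```python
# Hypothetical probabilities that the kicker scores, indexed by
# (kicker's side, goalie's side). The game is zero sum: the kicker
# maximizes the scoring probability, the goalie minimizes it.
score = {
    ('L', 'L'): 0.50, ('L', 'R'): 0.90,
    ('R', 'L'): 0.95, ('R', 'R'): 0.40,
}

# In the mixed equilibrium, the kicker aims left with probability p
# chosen so the goalie's expected save rate is the same for either
# dive, and the goalie dives left with probability q chosen so the
# kicker scores equally often aiming either way.
p = (score[('R', 'L')] - score[('R', 'R')]) / (
    score[('R', 'L')] - score[('R', 'R')]
    + score[('L', 'R')] - score[('L', 'L')])
q = (score[('L', 'R')] - score[('R', 'R')]) / (
    score[('L', 'R')] - score[('R', 'R')]
    + score[('R', 'L')] - score[('L', 'L')])
print(round(p, 3), round(q, 3))  # 0.579 0.526
```

With these example numbers, the kicker should aim left about 58% of the time even though neither side is uniformly better, mirroring the foot-switching behavior described above.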

Nash Equilibrium and the Prisoner's dilemma

A strong and popular example of game theory applied to decision making is the age-old “Prisoner's Dilemma.” In this thought experiment, the authorities have arrested two criminals, we'll call them prisoners A and B, and are holding them in separate rooms where they cannot speak to each other or receive any indication of each other's intended decisions. From this description and our definitions above we note this game is simultaneous, as neither player is informed of the other's actions. Next, the authorities realize they don't have enough hard evidence to convict either prisoner, so they have to rely on a confession. The prosecutor offers A a deal: if he snitches on B, he will go free if B stays silent, or serve 5 years if B also snitches on A. The same deal is offered to B. If both stay quiet, they each get 2 years on a lowered charge the prosecutors can prove, while a prisoner who stays quiet when the other snitches serves 10 years. Note that in this game we impose the condition that staying quiet on the basis of honor or loyalty is irrational. The outcomes are represented in the following table.

Outcome Space for Prisoner's Dilemma

Trivially, we can see that both prisoners staying quiet is the mutually ideal solution. However, we can show that the Nash equilibrium is when both prisoners snitch. If A stays quiet, he runs the risk of being imprisoned for 10 years, so it benefits him to switch to snitching, where he will either go free or serve 5 years. The same goes for B, so neither benefits from staying quiet. So the equilibrium is not the ideal solution: they could both serve only 2 years, but without knowledge of the other prisoner's intentions they can never choose to stay quiet. In game theory we say this situation is not Pareto efficient. An outcome is Pareto efficient, or Pareto optimal, when no player can be made better off without making another player worse off; here, both prisoners staying quiet would make both of them better off than the equilibrium.
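The table reasoning above can also be verified mechanically. Below is a short Python sketch, using sentence lengths matching the story (0, 2, 5, and 10 years, negated so larger is better), that enumerates every strategy profile and keeps only those where neither prisoner gains from a unilateral deviation:

```python
# Brute-force search for pure-strategy Nash equilibria in the
# prisoner's dilemma. Payoffs are years served, negated so that a
# larger number is a better outcome for that prisoner.
payoffs = {
    ('quiet',  'quiet'):  (-2,  -2),
    ('quiet',  'snitch'): (-10,  0),
    ('snitch', 'quiet'):  (0,  -10),
    ('snitch', 'snitch'): (-5,  -5),
}
strategies = ['quiet', 'snitch']

def is_equilibrium(a, b):
    """True if no unilateral deviation improves either prisoner's payoff."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in strategies)
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in strategies)
    return best_a and best_b

equilibria = [(a, b) for a in strategies for b in strategies
              if is_equilibrium(a, b)]
print(equilibria)  # [('snitch', 'snitch')]
```

The search confirms that mutual snitching is the only pure-strategy equilibrium, even though it is not the mutually ideal outcome.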

Now that we’ve seen a trivial example to familiarize ourselves with the concepts, the following non-trivial applications should become more clear.


Part 2: Cournot Competition

Nash Equilibrium in Economics

In the game of an economy, we are interested in the strategic interactions between players in a market or industry, and in how the strategy of one player is affected by the existence of another.

In 1838, before John Nash formalized the proof of Nash equilibrium, indeed before his birth, Antoine Augustin Cournot showed that a stable equilibrium exists in his model of competition between two producers: "If either of the producers, misled as to his true interest, leaves it temporarily, he will be brought back to it."

To demonstrate Cournot's model of competition, we will look into a simple example of the wall clock market of West Lafayette.

Suppose Boilermaker Clockery is the only player in the market, denoted by $ BC $. Wall clocks are the only type of product the firm produces and retails. The rule of the game is to maximize the total profit. In this case, the strategy space for a player is all possible quantities of wall clocks they could produce (we will consider price as a function of quantity), so we will use $ q \in S_{BC}=\mathbb{Z}_+ $ for quantity.

BoilermakerClockery.jpg

Suppose that the potential demand of wall clocks in West Lafayette is $ 20{,}000 $, and we will use a naive model

$ D(p)=20{,}000-200p $

for the demand, where $ p $ is the price of a clock and each dollar of increase in price leads to $ 200 $ fewer clocks in demand. Suppose the total production cost is

$ C(q)=5q $

i.e., $ c=5 $ dollars for each clock produced, and Boilermaker Clockery will always produce enough clocks to match the demand as long as it's profitable,

$ \begin{align} q=&D(p)\\ =&20{,}000-200p \end{align} $

we will have the following function for the market unit price for each clock,

$ p(q)=\frac{20{,}000-q}{200} $

and the total revenue would be

$ \begin{align} R(q)=&p(q) \cdot q\\ =&\frac{(20{,}000-q) \cdot q}{200} \end{align} $

Since this is a monopoly, Boilermaker Clockery can pick the best strategy by maximizing the total profit

$ \begin{align} P(q)=&R(q)-C(q)\\ =&\frac{20{,}000q}{200}-\frac{q^2}{200}-5q \end{align} $

To do so, we need to know how the total profit responds to changes in the quantity of clocks produced, a.k.a. the marginal profit,

$ \begin{align} MP(q)=&\frac{d}{dq}P(q)\\ =&\frac{20{,}000}{200}-\frac{2q}{200}-5\\ =&95-\frac{q}{100} \end{align} $

The total profit is maximized when

$ \begin{align} MP(q)=&95-\frac{q}{{100}}=0\\ q=&9500 \end{align} $

i.e., the point past which increasing production no longer generates more profit. It is obvious that the extremum is a maximum in this case, but in general we can confirm this by computing the second derivative and showing that it is negative,

$ \frac{d^2}{dq^2}P(q)=-\frac2{200} $

The unit price would be

$ \begin{align} p(9500)=&\frac{20{,}000-9500}{200}\\ =&52.5 \end{align} $

The maximum total profit would be

$ \begin{align} P(9500)=&9500\cdot\left(p(9500)-c\right)\\ =&9500\cdot(52.5-5)\\ =&451{,}250 \end{align} $
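The monopoly optimum derived above can be double-checked numerically with a few lines of Python, scanning every integer quantity using the same demand and cost functions:

```python
# Numerical check of the monopoly optimum for Boilermaker Clockery,
# using the demand-derived price and the cost of c = 5 per clock.
def price(q):
    return (20_000 - q) / 200

def profit(q):
    return price(q) * q - 5 * q  # revenue minus cost

# Profit is concave in q, so a scan over all feasible integer
# quantities finds the unique peak.
q_star = max(range(20_001), key=profit)
print(q_star, price(q_star), profit(q_star))  # 9500 52.5 451250.0
```

The scan recovers exactly the quantity, unit price, and maximum profit computed by hand above.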

However, the monopoly is no longer possible, as the infamous IU Horology Inc. decides to enter the West Lafayette market.

There are now two clock producers in the market, denoted by the set $ \{BC, IU\} $, and $ \forall i \in \{BC, IU\} $, $ q_i \in S_i=\mathbb{Z}_+ $ denotes the quantity produced by player $ i $.

IUHorologyInc.jpg

This time, the two firms cannot pick the ideal strategy (acting as if they were a single firm to capture the aforementioned maximum total profit). Since they are by no means forming a cartel, they have no choice but to respond to the opponent's strategy, and are therefore destined to reach a Nash equilibrium.

To understand how the two players interact with each other, we now have

$ P_i(q_i, q_j)=q_i\cdot\left(p(q_i+q_j)-c\right) $

the payoff function of player $ i $, which takes the strategies of both players as parameters. A Nash equilibrium is reached when the payoff functions of both players are maximized by picking the best strategy $ s^* $. This is achieved by finding the best response function of a player to the opponent's strategy, $ BR_i(q_j) $, and vice versa.

Let's first find the maximum of Boilermaker Clockery's payoff,

$ \begin{align} P_{BC}(q_{BC}, q_{IU}) =&q_{BC}\cdot\left(\frac{20{,}000-(q_{BC}+q_{IU})}{200}-5\right)\\ =&\frac{20{,}000q_{BC}}{200}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200}-5q_{BC}\\ =&95q_{BC}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200} \end{align} $

By computing the partial derivatives of it,

$ \begin{align} \frac{\partial}{\partial q_{BC}}P_{BC}(q_{BC}, q_{IU}) =&95-\frac{q_{BC}}{100}-\frac{q_{IU}}{200}\\ \frac{\partial^2}{\partial q_{BC}^2}P_{BC}(q_{BC}, q_{IU}) =&-\frac1{100}<0 \end{align} $

and showing the second partial derivative is negative, we know the maximum is at the point where the first partial derivative equals $ 0 $. Simply rewriting this equation gives us the best response function,

$ \begin{align} \frac{\partial}{\partial q_{BC}}P_{BC}\left(BR_{BC}(q_{IU}), q_{IU}\right) =&95-\frac{BR_{BC}(q_{IU})}{100}-\frac{q_{IU}}{200} =0\\ BR_{BC}(q_{IU}) =&\begin{cases} \left.100(95-\frac{q_{IU}}{200}) =9500-\frac{q_{IU}}2\right|_{q_{IU}\leq19{,}000}\\ \left.0\right|_{q_{IU}>19{,}000}\because q_{BC}\in\mathbb{Z_+} \end{cases} \end{align} $

From exactly the same process, we have the best response function for IU Horology Inc.,

$ BR_{IU}(q_{BC}) =\begin{cases} \left.9500-\frac{q_{BC}}2\right|_{q_{BC}\leq19{,}000}\\ \left.0\right|_{q_{BC}>19{,}000} \end{cases} $

Finally, we compute the Nash Equilibrium, where the strategies of both firms are the best response to each other. We only care about the quadrant where both $ q_{BC} $ and $ q_{IU} $ are positive.

$ \begin{align} q^*_{BC} =&BR_{BC}(q^*_{IU})\\ =&BR_{BC}\left(BR_{IU}(q^*_{BC})\right)\\ =&9500-\frac{BR_{IU}(q^*_{BC})}2\\ =&9500-\frac{9500-\frac{q^*_{BC}}2}2\\ =&\frac{9500}2+\frac{q^*_{BC}}4\\ \frac{3q^*_{BC}}4 =&\frac{9500}2\\ q^*_{BC} =&\frac{19{,}000}3\\ \approx&6333\\ \end{align} $

Similarly, we have

$ q^*_{IU} \approx6333 $

Both firms ended up producing more clocks than half the monopoly quantity ($ 9500/2=4750 $ each), which is what each would produce if they could coordinate to act as a single monopolist. The profit is less because the market unit price is lowered so that all the extra clocks produced can be sold.

$ \begin{align} p(q_{total}) =&p(q^*_{BC}+q^*_{IU})\\ =&\frac{20{,}000-q^*_{BC}-q^*_{IU}}{200}\\ =&\frac{20{,}000-2\times6333}{200}\\ =&36.67\\ \end{align} $

The revenue for each of the two firms would be

$ \begin{align} R(q^*_{BC}) =R(q^*_{IU}) =&R(6333)\\ =&36.67 \cdot 6333\\ =&232231.11\\ \end{align} $

and the profit would be

$ \begin{align} P(q^*_{BC}) =P(q^*_{IU}) =&P(6333)\\ =&R(6333)-C(6333)\\ =&232231.11-5\times6333\\ =&200566.11\\ \end{align} $

Obviously, Boilermaker Clockery cannot make nearly as much as the $ 451{,}250 $ dollars it made during its monopoly. More importantly, the total profit made by the two firms is still less than the profit made by Boilermaker Clockery before IU Horology Inc. entered the competition.

$ P(q^*_{BC})+P(q^*_{IU})=401{,}132.22<451{,}250 $
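The same equilibrium can be reached computationally by iterating the two best response functions until the quantities settle, which also makes Cournot's "brought back to it" remark concrete. Note that the profit below uses the exact equilibrium quantity $ 19{,}000/3 $, so it differs slightly from the rounded figures above:

```python
# Best-response dynamics for the clock duopoly: each firm repeatedly
# plays its best response to the opponent's last quantity. The map is
# a contraction here, so it converges to the Nash equilibrium.
def best_response(q_opponent):
    return max(0.0, 9500 - q_opponent / 2)

q_bc = q_iu = 0.0
for _ in range(200):
    q_bc, q_iu = best_response(q_iu), best_response(q_bc)

def price(q_total):
    return (20_000 - q_total) / 200

p = price(q_bc + q_iu)
profit_each = q_bc * (p - 5)  # profit per firm at equilibrium
print(round(q_bc, 2), round(p, 2), round(profit_each, 2))
# 6333.33 36.67 200555.56
print(2 * profit_each < 451_250)  # True: duopoly total < monopoly profit
```

Starting from any initial quantities, the iteration is pulled back to $ q^* = 19{,}000/3 \approx 6333.33 $ for both firms, and the combined duopoly profit stays below the monopoly profit.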


Part 3: Game Theory and Evolution

Game theory has had surprisingly many applications in the field of biology, especially concerning evolution and the strategies which species employ. While species do not necessarily choose which strategies they adopt, the process of evolution results in those species adopting varied strategies through mutation. The strategy space of a population is all possible phenotypes that could result from mutation. The game is symmetric since the same strategies are theoretically available to every population. In the game of evolution, the fitness of a phenotype determines its payoffs (the benefits of a strategy minus its costs), and the goal is to slowly optimize a strategy to the best fitness possible.

To demonstrate the principles of evolutionary game theory, we will use lions as an example. The fitness of a lion is dependent on its efficiency of obtaining food and the number of offspring produced. When hunting, lions exert energy, which is a cost, in order to obtain food to regain energy, which is a benefit. Subtracting the cost of hunting from the benefit of food provides the profit of hunting. Maximizing this profit allows lions to exert their energy in other ways, such as defending from attackers or supporting offspring.

Lions are an example of an evolutionarily stable strategy, meaning that any small change in strategy will reduce fitness in some way. A group of lions may benefit from becoming larger by increasing strength relative to other lions, allowing for resolution of territorial disputes, but larger animals consume more energy and this strategy may increase the cost more than increasing the benefit, meaning this change in strategy reduces profit. Another group may benefit from becoming smaller by decreasing the energy spent relative to other lions, allowing for greater energy efficiency, but smaller lions may be less successful at hunting and this strategy may decrease the benefit more than decreasing the cost, again reducing profit. So since no competing population's phenotype can invade, or take over, the population of regular lions, they represent the evolutionarily stable strategy.

Potential Profits of Different Sized Species

The employment of an evolutionarily stable phenotype is a Nash equilibrium, since any change in strategy for any one player (population) will be purely detrimental, so changes in strategy do not stick around. Changes in strategy do still occur, since mutation is random, but these changes lose out in competition with the evolutionarily stable strategy. Stable does not necessarily mean the strategy is permanent, since changes in the environment can necessitate evolution, but so long as everything else around an evolutionarily stable species stays the same, that species will not adjust its strategy (mutations will not win out).

The Hawk-Dove Game, or Game of Chicken, is closely related to the prisoner's dilemma. In this game, both players want to take some resource V. If a player chooses to be a Hawk, they fight for control of the resource; if a player chooses to be a Dove, they aim for a peaceful resolution. If two doves meet, they share equally, each receiving V/2. If a hawk and a dove meet, the hawk scares away the dove, so the hawk gets V while the dove gets 0. If two hawks meet, they fight over the resource, so their gains are reduced by some value C corresponding to the cost of fighting; therefore, they each get (V-C)/2 on average.

Outcome Space for Hawk-Dove Game

This results in the emergence of territorial behavior in certain species, due to territorial populations gaining a higher payoff over thousands of years. If two populations A and B both engage in Hawk-Dove games for territorial control, both A and B seek to maximize their own gains whenever possible. We see that the Nash equilibrium is at Hawk-Hawk, since regardless of what player B picks, player A should always pick Hawk: if B picked Dove, then A gets V as opposed to V/2, and if B picked Hawk, then A has to pick Hawk, otherwise they are left with nothing. So clearly neither player can benefit from switching off of Hawk, and they are not allowed to cooperate and pick Dove-Dove. Hence we see the equilibrium is not Pareto efficient, just as in the Prisoner's Dilemma. This holds as long as the resource benefit V is strictly greater than the cost of fighting, C. If C exceeds V, Hawk-Hawk is no longer an equilibrium, because A should pick Hawk if B picked Dove but Dove if B picked Hawk; the pure strategy equilibria then become the asymmetric profiles Hawk-Dove and Dove-Hawk, and any symmetric solution must be a mixed strategy.
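The Hawk-Dove payoff analysis can be checked by enumeration for any values of V and C. The sketch below uses arbitrary example values; it shows that when the cost of fighting exceeds the resource value, the symmetric Hawk-Hawk profile drops out and the asymmetric Hawk-Dove profiles become the pure equilibria:

```python
# Payoff matrix of the Hawk-Dove game as a function of resource value V
# and fighting cost C, keyed by (row strategy, column strategy).
def hawk_dove(V, C):
    return {
        ('Hawk', 'Hawk'): ((V - C) / 2, (V - C) / 2),
        ('Hawk', 'Dove'): (V, 0),
        ('Dove', 'Hawk'): (0, V),
        ('Dove', 'Dove'): (V / 2, V / 2),
    }

def pure_equilibria(payoffs):
    """All strategy profiles with no profitable unilateral deviation."""
    strategies = ['Hawk', 'Dove']
    found = []
    for a in strategies:
        for b in strategies:
            pa, pb = payoffs[(a, b)]
            if (all(payoffs[(a2, b)][0] <= pa for a2 in strategies)
                    and all(payoffs[(a, b2)][1] <= pb for b2 in strategies)):
                found.append((a, b))
    return found

print(pure_equilibria(hawk_dove(V=4, C=2)))
# [('Hawk', 'Hawk')]
print(pure_equilibria(hawk_dove(V=2, C=4)))
# [('Hawk', 'Dove'), ('Dove', 'Hawk')]
```

The same `pure_equilibria` helper works for the prisoner's dilemma or any other two-player matrix game, since it only relies on the deviation check in the definition of a Nash equilibrium.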


Part 4: Societal Implications and Summary

We have seen just two of the many useful applications of game theory in the real world, and shown a surprising variety of fields spanning the sciences, economics, and sociology/political science that can make use of facts from a theory that presents as an unassuming economic tool. This helps show that game theory can be, and is, used as more than a tool for businesses to maximize their profits; rather, it is a general axiomatic system for decision making. At its core, human decision making is nothing more than analysis of tradeoffs and outcomes, a process that can be formalized by leveraging game theory.

Despite its extreme usefulness, we must remember that whenever we simplify complex societal problems down to a form which we can neatly define into outcome spaces, payoff matrices, and equilibria, we make choices about what information to cut out. The very basis of game theory assumes that all players are rational actors, and we often ignore the irrational biases, opinions, and feelings that play a role in human decision making. As mathematicians, economists, and political scientists we must remember that societal problems are often far more complex than they seem, and take care to ensure we don't marginalize groups of real world actors who don't fit nicely into our models. Take, for example, the Downsian model of electoral competition, where we assume voters to be rational and to care about important facets of the candidates' campaigns. Real world studies have documented that voters often care about more than just what is considered "rational": the colors candidates wear, their age, their family life, their speaking cadence, and so on can all severely impact certain voters' opinions, and since these factors are not included in our analysis, they can lead to skewed results. This does not mean that our modeling is unimportant, as it leads us to vital conclusions about the importance of reducing systemic barriers to voting and political information. However, as with any modeling, it is necessary to take its limitations into account when we use it as a tool.


References

Adam Brown, Summary of Downs: https://adambrown.info/p/notes/downs_an_economic_theory_of_democracy

Downs, Anthony. “An Economic Theory of Political Action in a Democracy.” Journal of Political Economy, vol. 65, no. 2, 1957, pp. 135–50. JSTOR, http://www.jstor.org/stable/1827369. Accessed 28 Nov. 2022.

Easley, Kleinberg, Networks, Crowds, and Markets: http://www.cs.cornell.edu/home/kleinber/networks-book/

European Economic Review, A Theory of Dynamic Oligopoly (Cournot Competition): https://scholar.harvard.edu/files/maskin/files/corrigendum_oligopoly_iii_eer.pdf

Hansen, Stephen, et al. “The Downsian Model of Electoral Participation: Formal Theory and Empirical Analysis of the Constituency Size Effect.” Public Choice, vol. 52, no. 1, 1987, pp. 15–33. JSTOR, http://www.jstor.org/stable/30024703. Accessed 28 Nov. 2022.

Khan Academy, Prisoner's Dilemma: https://www.khanacademy.org/economics-finance-domain/ap-microeconomics/imperfect-competition/oligopoly-and-game-theory/v/more-on-nash-equilibrium

MBA, Downsian Model: https://wiki.mbalib.com/wiki/%E5%94%90%E6%96%AF%E6%A8%A1%E5%9E%8B

Mixed strategy in penalty shootouts: https://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf

Nash, John. “Non-Cooperative Games.” Annals of Mathematics, vol. 54, no. 2, 1951, pp. 286–95. JSTOR, https://doi.org/10.2307/1969529. Accessed 26 Nov. 2022.

Organization for Economic Co-operation and Development, Nash Equilibrium: https://stats.oecd.org/glossary/detail.asp?ID=3151

Stanford Encyclopedia of Philosophy, Game Theory: https://plato.stanford.edu/entries/game-theory/#Mot

Stanford Encyclopedia of Philosophy, Evolutionary Game Theory: https://plato.stanford.edu/entries/game-evolutionary/

University of Illinois's CS440 Artificial Intelligence, Prisoner's Dilemma: https://courses.engr.illinois.edu/ece448/sp2020/slides/lec35.pdf

University of Maryland's ECON414 Game Theory, Simultaneous Games: https://terpconnect.umd.edu/~dvincent/econ414/lec04.pdf

Wikipedia, Prisoner's Dilemma Table: https://en.wikipedia.org/wiki/Prisoner%27s_dilemma
