Revision as of 01:47, 28 November 2022 by Student23MA279F22

Part 1: Introduction and Terms

What is game theory and where did it come from?

Game theory is a discipline of mathematics that gives an axiomatic way to represent and examine systems of rational actors. It uses the suspected preferences of the actors to predict the outcomes of games with certain rules and/or conditions. For example, we could use game theory to determine the prices that two competing firms in a local duopoly should set in order to maximize each of their profits.

While some situations studied in modern game theory can trace their roots all the way back to ancient Greece, game theory first emerged as a formal mathematical sub-discipline from the works of mathematician John von Neumann and economist Oskar Morgenstern in the 1940s. In their work, game theory was extremely limited in scope and dealt almost entirely with two-player games of the “zero sum” variety, meaning one player's losses are always the other player's winnings. Since then, game theory has been applied to a huge range of other fields, most notably economics but also political science, evolutionary biology, computer science, management, philosophy, epidemiology, and any other discipline that involves competition among self-interested agents.

Key Definitions

There are many types of games, conditions, and actors that can be studied in game theory. For completeness, most if not all of those terms are described here even if they are not needed for the applications below, as they are important from a pedagogical point of view and help illustrate the variety of settings in which game theory can be useful.

Player - a rational actor looking to maximize their own outcome in a game (agent, actor, and player are used interchangeably and mean the same thing)

Game - a set of strategies that actors/players can undertake whose combinations can lead to different possible outcomes

Strategy Space - the total space of all possible pure strategies each player can play the game with, the ways a single player X can play the game is player X’s strategy space

Pure Strategy - a strategy is pure if it provides a definitive move to make in every possible situation

Mixed Strategy - a strategy is mixed when a player assigns a probability distribution over the pure strategies in their strategy space. Mixed strategies may be difficult to understand, so consider the case of a penalty shootout. The kicker has a dominant foot, so a pure strategy may be to kick with the dominant foot every time; but the goalie may then defend that side more heavily, leading to a worse outcome. By switching from foot to foot, the kicker employs a mixed strategy that creates a better outcome. [Switching was observed to be the better strategy in real life by Chiappori, Levitt, and Groseclose (2002)]

Dominant Strategy - a strategy a player should always undertake to reach their optimal outcome, regardless of other players’ actions

Strategy Profile - a group formed by choosing one strategy for every player

Outcome Space - the total space of all possible outcomes that can be reached in a single game

Simple Game - a game where each player has only two outcomes, winning or losing

Cooperative Games - a game is cooperative when agents can form alliances/coalitions that can not be violated, a game is noncooperative otherwise

Simultaneous Game - a game is simultaneous if players all move at the same time, or they lack knowledge of previous moves making them effectively simultaneous. If a game is not simultaneous it is considered sequential.

Symmetric Game - a symmetric game is a game in which the outcomes are determined solely by the strategies employed, not by who is playing them. In other words, all the players are identical. An example of this type is the prisoner's dilemma, which we will showcase as our prime introductory example

(In)finite Game - games that can be finished in finitely many moves are considered finite; games that can go on forever with no player able to achieve a winning or losing strategy are considered infinite. The games we will discuss are finite, as there is a lack of mathematically rigorous infinite games to study; however, an interested reader may want to examine a potential, not so rigorous, foreign policy application of the infinite game here (https://youtu.be/0bFs6ZiynSU)

Best Response Correspondence - the choice, or response, a player makes to maximize their own outcome given the other players’ actions. Notation: in this project we’ll represent the best response of player i to an opposing strategy X with BRi(X)

Nash Equilibrium - a strategy profile where no player benefits from unilaterally altering their chosen strategy; essentially, it is a “solution” to a noncooperative game. John Nash proved in 1951 that at least one such equilibrium (possibly in mixed strategies) must exist for every game with finitely many players and actions.

We can represent games with tables and matrices where each dimension represents a player and the corresponding cells represent the expected outcomes for each combination of individual players’ strategies. We can use these tables to quickly find equilibria and make best response calculations/curves. We will use representations such as these in the coming examples. Now that we've covered all the background, we'll start to make the block of definitions more concrete and interesting with the classic example of game theory, the prisoner's dilemma.

Nash Equilibrium and the Prisoner's Dilemma

A strong and popular example of game theory applied to decision making is the age-old “Prisoner’s Dilemma.” In this thought experiment, the authorities have arrested two criminals, whom we’ll call prisoners A and B, and are holding them in separate rooms where they cannot speak to each other or receive any indication of each other’s intended decisions. From this description and our definitions above, we note this game is simultaneous, as neither player is informed of the other’s actions. The authorities realize they don’t have enough hard evidence to convict either prisoner, so they must rely on a confession. The prosecutor offers A a deal: if he snitches on B, he will go free if B stays silent, or serve 5 years if B also snitches on A. The same deal is offered to B. If both stay quiet, they each get 2 years on a lowered charge the prosecutors can prove. Note that in this game we impose the condition that staying quiet on the basis of honor or loyalty is irrational. The outcomes are represented in the following table.

Outcome Space for Prisoner's Dilemma (years served by A, B):

                    B stays quiet    B snitches
  A stays quiet     2, 2             10, 0
  A snitches        0, 10            5, 5

Trivially, we can see that both prisoners staying quiet is the mutually ideal solution. However, we can show that the Nash equilibrium is when both prisoners snitch. If A stays quiet, he runs the risk of being imprisoned for 10 years, so it benefits him to switch to snitching, where he will either go free or serve 5 years. The same goes for B, so neither benefits from staying quiet. So the equilibrium is not the ideal solution: they could both serve only 2 years, but without knowledge of the other prisoner’s intentions they can never choose to stay quiet. In game theory we say this situation is not Pareto efficient. An outcome is Pareto efficient (or Pareto optimal) when no player can be made better off without making another player worse off; here the equilibrium fails this test because both prisoners would be better off staying quiet.
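The reasoning above can be sketched in code. The following is a minimal Python sketch, assuming the sentences described earlier (2, 5, or 10 years, or going free); it enumerates all four strategy profiles and checks which are Nash equilibria, where each prisoner prefers fewer years:

```python
# A minimal sketch: enumerate all strategy profiles of the prisoner's
# dilemma and check which ones are Nash equilibria. Payoffs are years
# in prison, so each player prefers the SMALLER number.
from itertools import product

STRATEGIES = ("quiet", "snitch")

# years[(a_strategy, b_strategy)] = (years for A, years for B)
years = {
    ("quiet",  "quiet"):  (2, 2),
    ("quiet",  "snitch"): (10, 0),
    ("snitch", "quiet"):  (0, 10),
    ("snitch", "snitch"): (5, 5),
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither prisoner can reduce
    their own sentence by unilaterally switching strategies."""
    a_ok = all(years[(a, b)][0] <= years[(alt, b)][0] for alt in STRATEGIES)
    b_ok = all(years[(a, b)][1] <= years[(a, alt)][1] for alt in STRATEGIES)
    return a_ok and b_ok

equilibria = [(a, b) for a, b in product(STRATEGIES, STRATEGIES) if is_nash(a, b)]
print(equilibria)  # [('snitch', 'snitch')]
```

Only the snitch/snitch profile survives the check, matching the analysis above.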

Now that we’ve seen a trivial example to familiarize ourselves with the concepts, the following non-trivial applications should become more clear.


Part 2: Cournot Competition

Nash Equilibrium in Economics

In economics, we are interested in the strategic interactions between players in a market or industry, and in how one player's strategy is affected by the existence of another. In 1838, before John Nash formalized the proof of the Nash equilibrium, indeed before his birth, Antoine Augustin Cournot showed that a stable equilibrium exists in his model of competition between two producers: "If either of the producers, misled as to his true interest, leaves it temporarily, he will be brought back to it."

To demonstrate Cournot's model of competition, we will look into a simple example of the wall clock market of West Lafayette. Suppose Boilermaker Clockery is the only player in the market, denoted by $ BC $, and $ S_{BC} $ is the set of its possible strategies. In this case, the quantity of wall clocks produced is the only independent variable we take into consideration (we will consider price as a function of quantity), so we will use $ q \in S_{BC}=\mathbb{R}_+ $ for it. Suppose that the potential demand of wall clocks in West Lafayette is $ 20{,}000 $, and we will use a naive model

$ D(p)=20{,}000-200p $

for the demand, where $ p $ is the price of a clock and each dollar of increment in price leads to $ 200 $ fewer clocks in demand. Suppose the total production cost is

$ C(q)=5q $

i.e., $ c=5 $ dollars for each clock produced, and Boilermaker Clockery will always produce enough clocks to match the demand as long as it's profitable,

$ \begin{align} q=&D(p)\\ =&20{,}000-200p \end{align} $

we will have the following function for the market unit price for each clock,

$ p(q)=\frac{20{,}000-q}{200} $

and the total revenue would be

$ \begin{align} R(q)=&p(q) \cdot q\\ =&\frac{(20{,}000-q) \cdot q}{200} \end{align} $

Since this is a monopoly, Boilermaker Clockery can pick the best strategy by maximizing the total profit

$ \begin{align} P(q)=&R(q)-C(q)\\ =&\frac{20{,}000q}{200}-\frac{q^2}{200}-5q \end{align} $

To do so, we need to know how total profit responds to changes in the quantity of clocks produced, i.e., the marginal profit,

$ \begin{align} MP(q)=&\frac{d}{dq}P(q)\\ =&\frac{20{,}000}{200}-\frac{2q}{200}-5\\ =&95-\frac{q}{100} \end{align} $

The total profit is maximized when

$ \begin{align} MP(q)=&95-\frac{q}{{100}}=0\\ q=&9500 \end{align} $

i.e., the point where increasing production no longer generates more profit. It is obvious that the extremum is a maximum in this case, but in general we can confirm this by computing and showing that the second derivative is negative,

$ \frac{d^2}{dq^2}P(q)=-\frac{2}{200}=-\frac{1}{100}<0 $

The unit price would be

$ \begin{align} p(9500)=&\frac{20{,}000-9500}{200}\\ =&52.5 \end{align} $

The maximum total profit would be

$ \begin{align} P(9500)=&9500\cdot\left(p(9500)-c\right)\\ =&9500\cdot(52.5-5)\\ =&451{,}250 \end{align} $
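The monopoly calculation above can be sanity-checked numerically. This is a minimal sketch, assuming the same demand and cost parameters; it grid-searches integer quantities instead of using the derivative:

```python
# Minimal numeric check of the monopoly solution derived above:
# demand gives p(q) = (20_000 - q) / 200, unit cost c = 5.
def price(q):
    return (20_000 - q) / 200

def profit(q, c=5):
    return q * (price(q) - c)

# Search integer quantities for the profit-maximizing output.
best_q = max(range(0, 20_001), key=profit)
print(best_q, price(best_q), profit(best_q))  # 9500 52.5 451250.0
```

The grid search recovers the same quantity, price, and maximum profit as the marginal-profit derivation.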

However, the monopoly is no longer possible, as the infamous IU Horology Inc. decides to enter the West Lafayette market. There are now two clock producers in the market, denoted by the set $ \{BC, IU\} $, and $ \forall i \in \{BC, IU\} $, $ q_i \in S_i=\mathbb{R}_+ $ denotes the quantity produced by player $ i $.

This time, the two firms cannot simply pick the jointly ideal strategy (acting as if they were a single firm to earn the aforementioned maximum total profit). Since they are by no means forming a cartel, they have no choice but to respond to the opponent's strategy and are therefore destined to reach a Nash equilibrium.

To understand how the two players interact with each other, we now have

$ u_i(s_i, s_j)=s_i\cdot\left(p(s_i+s_j)-c\right) $

the payoff function of player $ i $, which takes the strategies of both players as parameters. A Nash equilibrium is reached when the payoff functions of both players are maximized by picking the best strategies $ s^* $. This is achieved by finding the best response function of each player to the opponent's strategy, $ BR_i(s_j) $, and vice versa.

Let's first find the maximum of Boilermaker Clockery's payoff,

$ \begin{align} u_{BC}(q_{BC}, q_{IU}) =&q_{BC}\cdot\left(\frac{20{,}000-(q_{BC}+q_{IU})}{200}-5\right)\\ =&\frac{20{,}000q_{BC}}{200}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200}-5q_{BC}\\ =&95q_{BC}-\frac{q_{BC}^2}{200}-\frac{q_{BC}q_{IU}}{200} \end{align} $

By computing the partial derivatives of it,

$ \begin{align} \frac{\partial}{\partial q_{BC}}u_{BC}(q_{BC}, q_{IU}) =&95-\frac{q_{BC}}{100}-\frac{q_{IU}}{200}\\ \frac{\partial^2}{\partial q_{BC}^2}u_{BC}(q_{BC}, q_{IU}) =&-\frac1{100}<0 \end{align} $

and showing the second partial derivative is negative, we know the maximum is at the point where the first partial derivative equals $ 0 $. Simply rewriting this equation gives us the best response function,

$ \begin{align} \frac{\partial}{\partial q_{BC}}u_{BC}\left(BR_{BC}(q_{IU}), q_{IU}\right) =&95-\frac{BR_{BC}(q_{IU})}{100}-\frac{q_{IU}}{200} =0\\ BR_{BC}(q_{IU}) =&\begin{cases} \left.100(95-\frac{q_{IU}}{200}) =9500-\frac{q_{IU}}2\right|_{q_{IU}\leq19{,}000}\\ \left.0\right|_{q_{IU}>19{,}000}\because q_{BC}\in\mathbb{R_+} \end{cases} \end{align} $
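The best response function can also be verified without calculus. A minimal sketch, assuming the same payoff function, that compares a brute-force argmax over integer quantities with the closed form $ 9500-\frac{q_{IU}}2 $:

```python
# Minimal numeric check of the best-response function derived above:
# for several fixed q_IU, maximize Boilermaker Clockery's payoff over
# integer q_BC and compare with the closed form 9500 - q_IU / 2.
def u_bc(q_bc, q_iu, c=5):
    return q_bc * ((20_000 - (q_bc + q_iu)) / 200 - c)

def numeric_br(q_iu):
    # Brute-force argmax over the integer strategy space.
    return max(range(0, 20_001), key=lambda q: u_bc(q, q_iu))

def analytic_br(q_iu):
    # Closed-form best response, truncated at zero.
    return max(0, 9500 - q_iu / 2)

for q_iu in (0, 1000, 9500, 19_000):
    assert numeric_br(q_iu) == analytic_br(q_iu)
print("best responses match")
```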

From exactly the same process, we have the best response function for IU Horology Inc.,

$ BR_{IU}(q_{BC}) =\begin{cases} \left.9500-\frac{q_{BC}}2\right|_{q_{BC}\leq19{,}000}\\ \left.0\right|_{q_{BC}>19{,}000} \end{cases} $

Finally, we compute the Nash Equilibrium, where the strategies of both firms are the best response to each other. We only care about the quadrant where both $ q_{BC} $ and $ q_{IU} $ are positive.

$ \begin{align} q^*_{BC} =&BR_{BC}(q^*_{IU})\\ =&BR_{BC}\left(BR_{IU}(q^*_{BC})\right)\\ =&9500-\frac{BR_{IU}(q^*_{BC})}2\\ =&9500-\frac{9500-\frac{q^*_{BC}}2}2\\ =&\frac{9500}2+\frac{q^*_{BC}}4\\ \frac{3q^*_{BC}}4 =&\frac{9500}2\\ q^*_{BC} =&\frac{19{,}000}3\\ \end{align} $

Similarly, we have

$ q^*_{IU} =\frac{19{,}000}3 $

Together, the two firms end up producing more clocks than the quantity that would maximize their joint profit (the monopoly quantity of 9500). As a result, the total profit made by the two firms is less than the profit Boilermaker Clockery made before IU Horology Inc. entered the competition:

$ P(q^*_{BC}+q^*_{IU})\approx401{,}111.11<451{,}250 $
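Cournot's remark that a producer who leaves the equilibrium "will be brought back to it" can be illustrated by iterating the two best response functions. A minimal sketch, assuming the closed forms derived above:

```python
# Minimal sketch: iterate the two best-response functions from the
# text and watch both quantities converge to the Cournot-Nash
# equilibrium q* = 19_000 / 3, regardless of the starting point.
def br(q_other):
    # BR_i(q_j) = 9500 - q_j / 2, truncated at zero.
    return max(0.0, 9500 - q_other / 2)

q_bc = q_iu = 0.0
for _ in range(100):
    q_bc, q_iu = br(q_iu), br(q_bc)

# Aggregate profit at the equilibrium, using P(q) = q * (p(q) - c).
total_q = q_bc + q_iu
total_profit = total_q * ((20_000 - total_q) / 200 - 5)
print(round(q_bc, 2), round(total_profit, 2))  # 6333.33 401111.11
```

Each update is a contraction (the slope of each best response is 1/2), which is why a firm that deviates is pulled back toward the equilibrium, and why the duopoly's total profit falls short of the monopoly's 451,250.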


Part 3: Game Theory and Evolution

Game theory has found surprisingly many applications in biology, especially concerning evolution and the strategies species employ. While species do not consciously choose which strategies they adopt, the process of evolution results in species adopting varied strategies. In the game of evolution, the fitness of a species determines its payoffs (the benefits of a strategy minus its costs), and the goal is to slowly optimize a strategy toward the best fitness possible.

To demonstrate the principles of evolutionary game theory, we will use lions as an example. The fitness of a lion depends on its efficiency in obtaining food and the number of offspring produced. When hunting, lions exert energy, which is a cost, in order to obtain food that restores energy, which is a benefit. Subtracting the cost of hunting from the benefit of food gives the profit of hunting. Maximizing this profit allows lions to spend their energy in other ways, such as defending against attackers or supporting offspring.

Lions are an example of an evolutionarily stable strategy, meaning that any small change in strategy will reduce fitness in some way. A group of lions may benefit from becoming larger by increasing strength relative to other lions, allowing for resolution of territorial disputes, but larger animals consume more energy and this strategy may increase the cost more than increasing the benefit, meaning this change in strategy reduces profit. Another group may benefit from becoming smaller by decreasing the energy spent relative to other lions, allowing for greater energy efficiency, but smaller lions may be less successful at hunting and this strategy may decrease the benefit more than decreasing the cost, again reducing profit.

Potential Profits of Different Sized Species

When all species in an environment possess an evolutionarily stable strategy, it forms a Nash equilibrium, since any change in strategy by any one player (species) will be purely detrimental, so changes in strategy do not stick around. Changes in strategy do still occur, since mutation is random rather than chosen, but these changes lose out in competition with the evolutionarily stable strategy. Stable does not mean the strategy is permanent: changes in the environment can necessitate evolution, but so long as everything else around an evolutionarily stable species stays the same, that species will not adjust its strategy.

The Hawk-Dove Game, or Game of Chicken, is a variant of the Prisoner's Dilemma with slightly different flavoring. In this game, both players want to take some resource V. If a player chooses to be a Hawk, they fight for control of the resource; if a player chooses to be a Dove, they aim for a peaceful resolution. If two doves meet, they share equally, each receiving V/2. If a hawk and a dove meet, the hawk scares away the dove, so the hawk gets V while the dove gets 0. If two hawks meet, they fight over the resource, so their gains are reduced by some value C corresponding to the cost of fighting; therefore, they each get (V-C)/2 on average.

Territorial behaviors can be explained using the hawk-dove game. Assume we have two male lions, Alex and Bob. If Alex and Bob both act as doves, they share food peacefully. If Alex changes his strategy to become a hawk, Alex now gains all the food in the area, and Bob must find some other area to hunt in. Alex's change in strategy is strictly better than before, meaning his strategy is stable for now. Bob can increase his individual gain by also becoming a hawk, if (V-C)/2 > 0. However, if Bob stayed a dove it would increase the total resources for the lion species, since becoming a hawk decreases the total gain to V - C.

Outcome Space for Hawk-Dove Game (gains for player 1, player 2):

                Hawk                  Dove
  Hawk          (V-C)/2, (V-C)/2      V, 0
  Dove          0, V                  V/2, V/2

This results in the emergence of territorial behavior in more aggressive species, because territorial species gain a higher payoff. Suppose two species A and B both engage in hawk-dove games for territorial control, ending up with strategies S(A) and S(B), and both seek to maximize their own gains whenever possible. Species A engages in territorial behavior, so its equilibrium settles at one hawk and one dove, and S(A) yields a species-wide profit of V. Species B engages in selfish behavior, so its equilibrium is two hawks, as both members try to avoid receiving nothing, and S(B) yields a species-wide profit of V - C. Because a higher profit means more energy can be spent on reproduction, S(A) leads to more offspring than S(B), and thus S(A) is the better strategy. Over time, species A outperforms species B, leading more dominant species to adopt territorial behaviors to increase their efficiency.
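The hawk-dove payoffs can be explored with a short sketch. The values of V and C below are illustrative assumptions, not from the text; the code finds the pure-strategy equilibria when fights are cheap (V > C) and when they are costly (V < C):

```python
# Minimal sketch: pure-strategy Nash equilibria of the hawk-dove game
# for a given resource value V and fighting cost C.
def payoffs(s1, s2, V, C):
    """Return (player 1 payoff, player 2 payoff)."""
    if s1 == "dove" and s2 == "dove":
        return V / 2, V / 2          # doves share peacefully
    if s1 == "hawk" and s2 == "dove":
        return V, 0                  # hawk scares away the dove
    if s1 == "dove" and s2 == "hawk":
        return 0, V
    return (V - C) / 2, (V - C) / 2  # hawks fight, paying cost C

def pure_equilibria(V, C):
    strats = ("hawk", "dove")
    eq = []
    for s1 in strats:
        for s2 in strats:
            u1, u2 = payoffs(s1, s2, V, C)
            best1 = all(u1 >= payoffs(a, s2, V, C)[0] for a in strats)
            best2 = all(u2 >= payoffs(s1, a, V, C)[1] for a in strats)
            if best1 and best2:
                eq.append((s1, s2))
    return eq

print(pure_equilibria(V=10, C=4))   # cheap fights: both play hawk
print(pure_equilibria(V=10, C=30))  # costly fights: one hawk, one dove
```

When fighting is cheap, two hawks form the equilibrium just as in Bob and Alex's standoff; when fighting is costly, the equilibria are the asymmetric hawk/dove pairings that correspond to territorial behavior.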


Part 4: Game Theory and Elections

Game theory is widely used in the study of human society. One example is the Downsian model of electoral competition.

Basic Logic

In the Downsian model, all players are set to be rational, meaning each player's decision in an election is determined by which expected outcome would benefit them more.

For example, suppose a player must choose between candidates A (the incumbent) and B when voting:

Before voting, a decision needs to be made.

Case 1. Voting costs effort: learning about the candidates, gathering information about their positions, getting to the polling place, and perhaps taking vacation days to vote. The player weighs these costs against what voting gives in return. If the benefits outweigh the costs, the player is more willing to vote, and vice versa.

Suppose the player decides to vote.

Case 2. If the player likes A, he votes for A. If the player likes neither, he votes for the one he dislikes less.

Case 3. Under otherwise identical circumstances, if the player thinks he would gain more under B's rule, he votes for B; if he thinks he would gain more under A's, he votes for A. (Adam)

Case 4. If the player thinks he would gain the same under A and B, he compares their strategies: he forecasts based on what each party has done and votes for whichever he expects to produce the better outcome. (Adam)
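The cost-benefit decision in Case 1 is often formalized as the "calculus of voting" R = pB - C + D (Riker and Ordeshook's extension of Downs), which the cases above do not spell out. A minimal sketch with purely illustrative numbers:

```python
# A minimal sketch of the Case 1 cost-benefit decision, using the
# standard calculus of voting R = p*B - C + D (Riker-Ordeshook's
# formalization of Downs). All numbers below are illustrative.
def should_vote(p, B, C, D):
    """p: probability the vote is decisive, B: benefit if the preferred
    candidate wins, C: cost of voting (time, information-gathering),
    D: sense of civic duty. Vote when the expected reward is positive."""
    return p * B - C + D > 0

# With a tiny chance of being decisive, duty D becomes the deciding factor.
print(should_vote(p=1e-6, B=1000, C=2, D=0))  # False
print(should_vote(p=1e-6, B=1000, C=2, D=5))  # True
```

This is consistent with prediction 6 below: a stronger sense of responsibility (a larger D) raises turnout even when the instrumental term pB is negligible.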

Voter Turnout

Based on the Downsian model, six predictions can be made about voter turnout. (MBA)

1. The more voters there are, the lower the turnout.

2. The more voters on one side, the lower the turnout.

3. The less important the election, the lower the turnout.

4. The closer the candidates' views, the lower the turnout.

5. The higher the cost of gathering information, the lower the turnout.

6. The greater the players' participation and sense of responsibility, the higher the turnout.


References

Easley, Kleinberg, Networks, Crowds, and Markets: http://www.cs.cornell.edu/home/kleinber/networks-book/

Khan Academy, Prisoner's Dilemma: https://www.khanacademy.org/economics-finance-domain/ap-microeconomics/imperfect-competition/oligopoly-and-game-theory/v/more-on-nash-equilibrium

Mixed strategy in penalty shootouts: https://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf

Nash, John. “Non-Cooperative Games.” Annals of Mathematics, vol. 54, no. 2, 1951, pp. 286–95. JSTOR, https://doi.org/10.2307/1969529. Accessed 26 Nov. 2022.

Stanford Encyclopedia of Philosophy, Game Theory: https://plato.stanford.edu/entries/game-theory/#Mot

Stanford Encyclopedia of Philosophy, Evolutionary Game Theory: https://plato.stanford.edu/entries/game-evolutionary/

University of Illinois's CS440 Artificial Intelligence, Prisoner's Dilemma: https://courses.engr.illinois.edu/ece448/sp2020/slides/lec35.pdf

University of Maryland's ECON414 Game Theory, Simultaneous Games: https://terpconnect.umd.edu/~dvincent/econ414/lec04.pdf

Wikipedia, Prisoner's Dilemma Table: https://en.wikipedia.org/wiki/Prisoner%27s_dilemma

MBA, Downsian Model: https://wiki.mbalib.com/wiki/%E5%94%90%E6%96%AF%E6%A8%A1%E5%9E%8B

Adam Brown, Summary of Downs: https://adambrown.info/p/notes/downs_an_economic_theory_of_democracy
