Part 1: Introduction and Terms

What is game theory and where did it come from?

Game theory is a discipline of mathematics that gives an axiomatic way to represent and examine systems of rational actors. It uses the assumed preferences of the actors to predict the outcomes of games under given rules and conditions. For example, we could use game theory to determine the prices that the two competing firms in a local duopoly should set in order to maximize their respective profits.

While some situations studied in modern game theory can trace their roots all the way back to ancient Greece, game theory first emerged as a formal mathematical sub-discipline from the works of mathematician John von Neumann and economist Oskar Morgenstern in the 1940s. Their work was limited in scope, dealing almost entirely with two-player games of the “zero-sum” variety, meaning one player's losses are always the other player's winnings. Since then, game theory has been applied to a huge range of other fields, most notably economics, but also political science, evolutionary biology, computer science, management, philosophy, epidemiology, and any other discipline that involves competition among self-interested agents.

Definitions

There are many types of games, conditions, and actors that can be studied in game theory. For completeness, most of these terms are defined here even when they are not strictly needed for the applications below, since they are pedagogically useful and help illustrate the variety of settings in which game theory can be applied.

Player - a rational actor looking to maximize their own outcome in a game (the terms agent, actor, and player are used interchangeably)

Game - a set of strategies that actors/players can undertake whose combinations can lead to different possible outcomes

Strategy Space - the set of all pure strategies with which a player can play the game; the ways a single player X can play the game form player X’s strategy space

Pure Strategy - a strategy is pure if it provides a definitive move to make in every possible situation

Mixed Strategy - a strategy is mixed when a player assigns a probability distribution over the pure strategies in their strategy space. To see why mixing can help, consider a penalty shootout: the kicker has a dominant foot, so a pure strategy might be to always kick toward their natural side, but the goalie would then defend that side more heavily, leading to a worse outcome for the kicker. By switching sides unpredictably, the kicker employs a mixed strategy that yields a better outcome; a small worked example follows after these definitions. [Switching was observed to be the better strategy in real shootouts by Chiappori, Levitt, and Groseclose (2002)]

Dominant Strategy - a strategy that is optimal for a player regardless of the other players’ actions; when a player has one, they should always play it

Strategy Profile - a group formed by choosing one strategy for every player

Outcome Space - the total space of all possible outcomes that can be reached in a single game

Simple Game - a game where each player has only two outcomes, winning or losing

Cooperative Game - a game is cooperative when agents can form binding alliances/coalitions that cannot be violated; a game is noncooperative otherwise

Simultaneous Game - a game is simultaneous if all players move at the same time, or if they lack knowledge of earlier moves, which makes the game effectively simultaneous. If a game is not simultaneous, it is considered sequential.

Symmetric Game - a game in which the outcomes are determined solely by the strategies employed, not by who is playing them. In other words, all the players are identical. An example of this type is the prisoner's dilemma, which we will showcase as our prime introductory example.

(In)finite Game - games that can be finished in finitely many moves are considered finite; games that can go on forever, with no player able to force a win or a loss, are considered infinite. The games we will discuss are finite, as there is a lack of mathematically rigorous infinite games to study; however, an interested reader may want to examine a potential, not-so-rigorous, foreign-policy application of the infinite game here (https://youtu.be/0bFs6ZiynSU)

Best Response Correspondence - the choice, or response, a player makes to maximize their own outcome given the other players’ actions. Notation: in this project we will denote the best response of player $ i $ to an opposing strategy $ X $ by $ BR_i(X) $

Nash Equilibrium - a strategy profile in which no player benefits from unilaterally changing their chosen strategy; essentially it is a “best case” or “solution” of a noncooperative game. John Nash proved in 1951 that every finite game (finitely many players, each with a finite strategy space) has at least one such equilibrium, possibly in mixed strategies.
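
To make the mixed-strategy definition concrete, here is a small worked version of the penalty-kick example with hypothetical scoring probabilities (the numbers are chosen purely for illustration and are not taken from the cited study). Suppose a kick toward the kicker's dominant side scores with probability $ 0.8 $ if the goalie dives the other way and $ 0.6 $ if the goalie dives toward it, while a kick toward the weak side scores with probability $ 0.7 $ and $ 0.4 $ in the corresponding cases. Neither side is dominant for the kicker: the better side depends on where the goalie dives. If the kicker aims at the dominant side with probability $ k $, then in a mixed equilibrium the goalie must be indifferent between diving to either side,

$ \begin{align} 0.6k+0.7(1-k)=&0.8k+0.4(1-k)\\ 0.7-0.1k=&0.4+0.4k\\ k=&0.6 \end{align} $

and the same indifference argument applied to the kicker gives a probability of $ 0.8 $ that the goalie dives toward the dominant side. This pair of mixed strategies is a Nash equilibrium: the kicker favors the dominant side without using it every time, and neither player can improve the resulting scoring probability of $ 0.64 $ by deviating alone.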

We can represent games with tables and matrices in which each dimension corresponds to a player and each cell contains the expected outcomes for that combination of the players’ strategies. We can use these tables to quickly find equilibria and to make best-response calculations and curves. We will use representations like these in the coming examples. Now that we have covered the background, we can make the definitions more concrete and interesting with the classic example of game theory, the prisoner's dilemma.
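
As a small preview of that example, the following Python sketch (not part of the original article; the payoffs are hypothetical prisoner's-dilemma-style values chosen only for illustration, with higher numbers better for that player) stores a two-player game as a payoff table and brute-forces the best responses and the pure-strategy Nash equilibria.

# A minimal sketch: search a two-player payoff table for best responses
# and pure-strategy Nash equilibria by brute force.
# Assumption: hypothetical prisoner's-dilemma-style payoffs, for illustration only.

row_strategies = ["Cooperate", "Defect"]   # strategy space of player 1 (row)
col_strategies = ["Cooperate", "Defect"]   # strategy space of player 2 (column)

# payoffs[(r, c)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-3,  0),
    ("Defect",    "Cooperate"): ( 0, -3),
    ("Defect",    "Defect"):    (-2, -2),
}

def best_response_row(c):
    """Row player's best response(s) to a fixed column strategy c."""
    best = max(payoffs[(r, c)][0] for r in row_strategies)
    return [r for r in row_strategies if payoffs[(r, c)][0] == best]

def best_response_col(r):
    """Column player's best response(s) to a fixed row strategy r."""
    best = max(payoffs[(r, c)][1] for c in col_strategies)
    return [c for c in col_strategies if payoffs[(r, c)][1] == best]

# A strategy profile (r, c) is a pure-strategy Nash equilibrium exactly when
# each strategy is a best response to the other.
equilibria = [(r, c)
              for r in row_strategies
              for c in col_strategies
              if r in best_response_row(c) and c in best_response_col(r)]

print(equilibria)  # [('Defect', 'Defect')] for these example payoffs

For these particular payoffs, "Defect" is a dominant strategy for both players, which is why mutual defection is the only equilibrium found; the same brute-force check works for any finite strategy spaces, though it only detects pure-strategy equilibria.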

Nash Equilibrium and the Prisoner's Dilemma


Part 2: Cournot Competition

Nash Equilibrium in Economics

In the "game" of the economy, we are interested in the strategic interactions between players in a market or industry, and in how the strategy of one player is affected by the existence of another. In 1838, before John Nash formalized the proof of Nash equilibrium, and indeed before his birth, Antoine Augustin Cournot showed that a stable equilibrium exists in his model of competition between two producers: "If either of the producers, misled as to his true interest, leaves it temporarily, he will be brought back to it."

To demonstrate Cournot's model of competition, we will look at a simple example: the wall clock market of West Lafayette. Suppose Boilermaker Clockery, denoted by $ BC $, is the only player in the market, and $ S_{BC} $ is the set of its possible strategies. In this case, the quantity of wall clocks produced is the only independent variable we take into consideration (we will treat price as a function of quantity), so we will use $ q \in S_{BC} $ for it. Suppose that the potential demand for wall clocks in West Lafayette is $ 20{,}000 $, and that we use the naive model

$ D(p)=20{,}000-200p $

for the demand, where $ p $ is the price of a clock and each one-dollar increase in price leads to $ 200 $ fewer clocks being demanded. Suppose the total production cost is

$ C(q)=5q $

i.e., $ c=5 $ dollars for each clock produced, and Boilermaker Clockery will always produce enough clocks to match the demand as long as it's profitable,

$ \begin{align} q=&D(p)\\ =&20{,}000-200p \end{align} $

Solving for $ p $ in terms of $ q $ gives the market unit price of each clock,

$ p(q)=\frac{20{,}000-q}{200} $

and the total revenue would be

$ \begin{align} R(q)=&p(q) \cdot q\\ =&\frac{(20{,}000-q) \cdot q}{200} \end{align} $

Since this is a monopoly, Boilermaker Clockery can pick the best strategy by maximizing the total profit

$ \begin{align} P(q)=&R(q)-C(q)\\ =&\frac{20{,}000q}{200}-\frac{q^2}{200}-5q \end{align} $

To do so, we need to know how the total profit responds to a change in the quantity of clocks produced, i.e., the marginal profit,

$ \begin{align} MP(q)=&\frac{d}{dq}P(q)\\ =&\frac{20{,}000}{200}-\frac{2q}{200}-5\\ =&95-\frac{q}{100} \end{align} $

The total profit is maximized when

$ \begin{align} MP(q)=&95-\frac{q}{{100}}=0\\ q=&9500 \end{align} $

i.e., the point at which increasing production no longer generates additional profit. It is obvious that this extremum is a maximum in this case, but in general we can verify it by computing the second derivative and showing that it is negative,

$ \frac{d^2}{dq^2}P(q)=-\frac{2}{200}=-\frac{1}{100}<0 $

The unit price would be

$ \begin{align} p(9500)=&\frac{20{,}000-9500}{200}\\ =&52.5 \end{align} $

The maximum total profit would be

$ \begin{align} P(9500)=&9500\cdot\left(p(9500)-c\right)\\ =&9500\cdot(52.5-5)\\ =&451{,}250 \end{align} $
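
As a sanity check on the algebra above, here is a short Python sketch (a numerical re-derivation, not part of the original text) that encodes the same demand, cost, and profit functions and recovers the optimal quantity, price, and profit.

# Numerical check of the monopoly example: demand D(p) = 20,000 - 200p and
# unit production cost c = 5, exactly as defined above.

c = 5                              # dollars of production cost per clock

def price(q):
    """Inverse demand p(q): the market price at which exactly q clocks sell."""
    return (20_000 - q) / 200

def profit(q):
    """Total profit P(q) = R(q) - C(q) = p(q)*q - c*q."""
    return price(q) * q - c * q

# The marginal profit MP(q) = 95 - q/100 vanishes at q = 9500; since the
# profit is a concave parabola in q, a scan over integer quantities finds
# the same maximizer.
q_star = max(range(0, 20_001), key=profit)

print(q_star)           # 9500
print(price(q_star))    # 52.5
print(profit(q_star))   # 451250.0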

However, the monopoly is no longer possible, as the infamous IU Horology Inc. decides to enter the West Lafayette market. There are now two clock producers in the market, denoted by the set $ \{BC, IU\} $, and $ \forall i \in \{BC, IU\} $, let $ S_i $ be the set of strategies that

......


References

Chiappori, P.-A., S. Levitt, and T. Groseclose (2002), mixed strategies in penalty kicks: https://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf

Nash, John. “Non-Cooperative Games.” Annals of Mathematics, vol. 54, no. 2, 1951, pp. 286–95. JSTOR, https://doi.org/10.2307/1969529. Accessed 26 Nov. 2022.

Stanford Encyclopedia of Philosophy, “Game Theory”: https://plato.stanford.edu/entries/game-theory/#Mot

University of Maryland, ECON414 Game Theory, “Simultaneous Games”: https://terpconnect.umd.edu/~dvincent/econ414/lec04.pdf
