Last time I wrote about Game Theory, I explained the basic idea of zero-sum games. In their simplest form, a game can be described by a payoff matrix, where each dimension of the matrix is the set of *strategies* which can be selected by one player, and each entry in the matrix describes the payoffs that result when all players select their strategies. Informally, a game is zero-sum when the total payoff is zero - that is, when any positive payout to one player is balanced by payouts from other players.

We can formalize some of the basic ideas more precisely. Each move results in a *state*; and for each player p, there is a function, u_{p}, called a utility function, which maps from a state to a utility value. Taking the player utility functions as components, we can define the game's utility function in terms of tuples: for players 1..n, the utility function maps from states to tuples of utility values: u(state) = (u_{1}(state), ..., u_{n}(state)).

With the idea of the utility function in place, we can formally describe what it means to be a zero-sum game. A game is zero-sum if and only if for all game-ending states s, Σ_{p∈Players}u_{p}(s) = 0.
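The zero-sum condition is easy to check mechanically. As a minimal sketch (the function name and the example payoff tuples are my own, drawing on the soccer example below), a game is zero-sum when every tuple of end-state utilities sums to zero:

```python
def is_zero_sum(end_state_utilities):
    """end_state_utilities: a list of payoff tuples, one per game-ending state,
    with one entry per player. The game is zero-sum iff every tuple sums to zero."""
    return all(sum(u) == 0 for u in end_state_utilities)

# Win/lose/tie utilities for (team_a, team_b): this game is zero-sum.
print(is_zero_sum([(1, -1), (-1, 1), (0, 0)]))  # True

# Goals-scored utilities for the same matches: not zero-sum.
print(is_zero_sum([(2, 1), (0, 0), (3, 3)]))    # False
```

The same matches give a zero-sum game under one utility function and a non-zero-sum game under another, which is exactly the point made below.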

One of the tricky things to understand is that for what we informally think of as a game, there are multiple utility functions that can be used to study the game, and using different utility functions produces *different games* according to game theory.

For an example that came up in comments to the last post, think of a soccer game. One possible utility function for looking at a game of soccer is simple win/lose. That is, for team t, the utility function assigns -1 to losing, 0 to a tie, and 1 to a win. That's a perfectly reasonable utility function - and with it, soccer is a zero-sum game. But we can also measure goals: the utility of a game for a team is the number of goals that that team scored. We could also create an economic measure, where the utility of the game is the price-per-goal - the total cost of all of the players, divided by the number of goals. Or we could analyze the game in terms of individual players: for a goalie, the utility could be the ratio of scored goals to total shots on goal.

To "solve" a game, you need to find the strategy that maximizes your utility. In a single-move game, there is often no single clear solution, but there are some standard approaches. One of the most common is the maximin solution. Maximin is good at producing a reasonable result in the face of absolutely no knowledge of what your opponent is going to do: you assume that no matter what strategy you choose, your opponent will choose the strategy with the worst possible result for you. So you want to select the strategy with the best possible worst outcome - the strategy with the maximum minimum utility.

For example, consider the following game:

| | A | B | C |
|---|---|---|---|
| 1 | 10 | -5 | 2 |
| 2 | 2 | -3 | 4 |
| 3 | -10 | 4 | -2 |

In this, player 1 selects from strategies 1, 2, and 3; player 2 selects from A, B, and C. Positive utilities are payoffs to player 1, and negative utilities are payoffs to player 2.

Looking at this, what's the best strategy for player 1? With strategy 1, his worst outcome is losing 5. With strategy 2, his worst outcome is losing 3. And with strategy 3, his worst outcome is losing 10. So following the maximin solution, his best strategy is number 2, and the value of the maximin is -3.
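The computation above is simple enough to sketch in a few lines of Python (the payoff matrix is the one from the example; the function name is my own):

```python
# Payoff matrix from the example: rows are player 1's strategies (1, 2, 3),
# columns are player 2's strategies (A, B, C). Entries are payoffs to player 1.
payoffs = [
    [10, -5, 2],
    [2, -3, 4],
    [-10, 4, -2],
]

def maximin(rows):
    """Return (row_index, value): the row whose worst-case payoff is largest."""
    worst_cases = [min(row) for row in rows]          # worst outcome per strategy
    best = max(range(len(rows)), key=lambda i: worst_cases[i])
    return best, worst_cases[best]

strategy, value = maximin(payoffs)
print(strategy + 1, value)  # strategy 2, value -3
```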

What about player 2? For her, the best outcome is the worst outcome for player 1. So she plays the *minimax* solution: she wants to find the strategy that produces the *minimum maximum* outcome. The minimax is the same basic idea as the maximin; in fact, if you just reverse the sign of all of the utility values, then the maximin of the sign-reversed utility is the minimax of the original.

So, player two looks for her own maximin. For strategy A, her worst outcome is losing 10. For strategy B, her worst outcome is losing 4. And for strategy C, her worst outcome is also losing 4. So a first look says that B and C are tied - so she falls back to the second-worst outcome. In B, her second-worst outcome is winning 3; in C, her second-worst outcome is losing 2. So her best minimax strategy is B, and the minimax is 4.
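Player 2's side of the computation can be sketched the same way (again, the matrix is the one from the example and the function name is my own). Note that this simple version breaks ties by taking the first tied column, which in this matrix happens to be B, the same strategy the second-worst-outcome comparison selects:

```python
# Same payoff matrix: entries are payoffs to player 1, so player 2 wants
# the column whose largest entry (her worst loss) is smallest.
payoffs = [
    [10, -5, 2],
    [2, -3, 4],
    [-10, 4, -2],
]

def minimax(rows):
    """Return (column_index, value): the column whose maximum payoff
    to player 1 is smallest."""
    n_cols = len(rows[0])
    col_max = [max(row[j] for row in rows) for j in range(n_cols)]
    best = min(range(n_cols), key=lambda j: col_max[j])  # first tied column wins
    return best, col_max[best]

strategy, value = minimax(payoffs)
print("ABC"[strategy], value)  # B 4
```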

Very rarely, there's a kind of optimal solution to a game. It's called a *saddle point*. In a game with a saddle point, the minimax and the maximin are equal. This is optimal, because it's a solution where both players agree that the specific outcome is the best outcome that they could count on.

For example, look at the game below:

| | A | B | C |
|---|---|---|---|
| 1 | 10 | 5 | 2 |
| 2 | -2 | 0 | 1 |
| 3 | 8 | 3 | -4 |

For player 1, strategy 1 has minimum 2; strategy 2 has minimum -2; and strategy 3 has minimum -4. So the maximin is strategy 1, with a value of 2. For player 2, strategy A has a maximum of 10; strategy B has a maximum of 5; and strategy C has a maximum of 2. So the minimax strategy is strategy C, with a value of 2.

So (1,C) is a saddle point, and players one and two are in agreement about what the best possible outcome of the game is.
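A saddle point is an entry that is simultaneously the minimum of its row and the maximum of its column, so it can be found by a direct scan. A minimal sketch (matrix from the example above; the function name is my own):

```python
# Payoff matrix from the saddle-point example.
payoffs = [
    [10, 5, 2],
    [-2, 0, 1],
    [8, 3, -4],
]

def saddle_points(rows):
    """Return all (row, col, value) entries that are the minimum of their
    row and the maximum of their column."""
    points = []
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            column = [r[j] for r in rows]
            if v == min(row) and v == max(column):
                points.append((i, j, v))
    return points

print(saddle_points(payoffs))  # [(0, 2, 2)] - strategy 1 vs. strategy C, value 2
```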
