Tuesday, May 10, 2016


How Often Should The Better Team Win A 'Best Of N Games' Series?

Is 7 games really enough to determine the best team?

The way these series work is that the same two teams play each other until one of them wins more than half of the maximum number of games: in a 'best of 3', the first team with two wins takes the series; in a 'best of 7', the first team with four wins; and so on. Where do we start?

We have to translate the problem into something that we can calculate. Let's start with a simple use case.

Best Of 1

If team A has a per-game win probability of 60%, then they win the series 60% of the time. This was a stupid use case.

Best Of 3

There are 6 possible outcomes with two teams (A means team A wins and B means team B wins):
• AA
• ABA
• ABB
• BAB
• BAA
• BB
It's a bit trickier than you might have naively assumed (e.g., a first guess would be that there were 8 outcomes since that's 2^3, but since the series stops with AA or BB, you don't continue beyond those). How can we model this?

Say that team A is the better team. There are three possible outcomes where team A is the overall winner: AA, ABA, and BAA. Is the result then that they'd win 50% of the time (3 outcomes out of 6)? Of course not, because the odds of these outcomes are different.

The binomial probability mass function ends up being appropriate here. However, we don't know the exact game in which the series stops, so we need to sum multiple terms (this is sometimes called the cumulative probability).

Let N be the maximum number of games (7 in a 'best of 7'), Wins be the number of required wins (4 in a 'best of 7'), and p be the per-game win probability. The trick is that nothing changes if we imagine the teams playing out all N games even after the series is decided: team A wins the series exactly when it wins at least Wins of the N games. That gives

P(series win) = sum from k = Wins to N of C(N, k) * p^k * (1 - p)^(N - k)
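This cumulative sum is easy to code up. Here's a minimal sketch in Python (the name `series_win_prob` is mine, not the post's), using the standard library's `math.comb` for the binomial coefficients:

```python
from math import comb

def series_win_prob(p, n):
    """Probability that a team with per-game win probability p wins a best-of-n series.

    Equivalent to playing all n games regardless of the score and winning at least
    n // 2 + 1 of them: sum of C(n, k) * p^k * (1-p)^(n-k) for k = wins_needed .. n.
    """
    wins_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(wins_needed, n + 1))

print(round(series_win_prob(0.6, 3), 3))  # 0.648, matching the line-by-line total below
```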

Let's compare against the best-of-3 case, calculating it line by line:
• AA = p^2 = (0.6)^2 = 0.36
• ABA = p^2*(1 - p) = 0.144
• BAA = p^2*(1 - p) = 0.144
For a total of 0.648, or 64.8%. This is exactly what our equation defined above yields.

Running a quick simulation of 1,000,000 of these series, I get 648,274 series won by team A, which matches our result.
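A simulation along those lines can be sketched like this (my code, not the author's; the function name and the fixed seed are assumptions, chosen for reproducibility):

```python
import random

def simulate_series(p, n, trials=1_000_000, seed=42):
    """Estimate team A's series win rate by playing out 'trials' best-of-n series."""
    rng = random.Random(seed)
    wins_needed = n // 2 + 1
    a_series_wins = 0
    for _ in range(trials):
        a = b = 0
        while a < wins_needed and b < wins_needed:  # stop once the series is decided
            if rng.random() < p:
                a += 1
            else:
                b += 1
        if a == wins_needed:
            a_series_wins += 1
    return a_series_wins / trials

print(simulate_series(0.6, 3))  # ≈ 0.648
```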

Scaling It Up

Now that we have a general solution that's easy to automate, we can test out a bunch of stuff. If team A has a per-game win chance of 60%, how long does the series need to be before team A wins it at least 90% of the time? Running the numbers, it needs to be a 'best of 41' series. What if the per-game win chance is 80%? A 'best of 5' series is sufficient. The plot below shows these tests for a number of win probabilities (the value for the 1-game series tells you what the per-game probability is):
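That search is just a small loop over odd series lengths. A sketch (the names are mine, and the cumulative-binomial helper is repeated so the snippet is self-contained):

```python
from math import comb

def series_win_prob(p, n):
    """Probability of winning a best-of-n series given per-game win probability p."""
    wins_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(wins_needed, n + 1))

def shortest_series(p, target, max_n=201):
    """Smallest odd n where the better team's series win probability reaches 'target'."""
    for n in range(1, max_n + 1, 2):
        if series_win_prob(p, n) >= target:
            return n
    return None  # target not reachable within max_n games

print(shortest_series(0.6, 0.9))  # 41
print(shortest_series(0.8, 0.9))  # 5
```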

Note that if you want the opposite (i.e., how often the worse team wins the series), you can either take '1 - probability from above', or rerun the calculation with the worse team's per-game win probability (e.g., if the better team wins 60% of the time, plug in 40%).
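Both routes give the same answer, which makes a nice sanity check (hypothetical helper name again; values shown for a best-of-7 with a 60/40 per-game split):

```python
from math import comb

def series_win_prob(p, n):
    """Probability of winning a best-of-n series given per-game win probability p."""
    wins_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(wins_needed, n + 1))

# Worse team's chance of taking a best-of-7, computed two ways:
via_complement = 1 - series_win_prob(0.6, 7)  # 1 minus the better team's probability
via_swap = series_win_prob(0.4, 7)            # plug in the worse team's 40% directly
print(round(via_complement, 4), round(via_swap, 4))  # 0.2898 0.2898
```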

Where Does This Fall Apart?

In a number of places. The most obvious is that the per-game win chances vary from game to game according to an endless number of factors (which team is at home, roster depth versus fatigue, etc.). Fun exercise nonetheless.