Tournament Math: ICM, cEV, $EV, and Bubbles

By | Mathematics, Tournaments | 2 Comments

So you want to be a proficient tournament poker player? One of the most important things to learn is how poker tournaments differ from cash games. The game of poker is the same no matter where you're playing it, but the fixed payout structure of tournaments changes the mathematics behind correct decisions.

cEV and $EV in Tournaments

The first concept we have to address in a lecture about poker tournament math is that of Expected Value. In cash game poker, your EV is the amount of money that you make or lose on any action. For example, if you make a really awful call, you could lose $50 in EV, whether you end up winning or losing the hand (if you don't follow me so far, go back to our Strategy lecture on Expected Value). In tournament poker, however, you're working with two kinds of EV: Chip EV (cEV) and Dollar EV ($EV). cEV represents the amount of chips you stand to make or lose on a particular action, while $EV represents the amount of money you stand to make or lose on it.

The important thing to note is that these are different. Why is this important? In cash games, these two things are identical. At any point, you can stand up with your entire chip stack and turn it into dollars. Therefore, if you win $1 in chips, you also win $1 in dollars. In tournaments, you don't have the luxury of cashing in your chips. Having more chips makes it more likely that you are going to win money in the tournament, but these two variables are not aligned in a 1-to-1 ratio like they are in a cash game. Instead they begin to diverge from the first hand.

When you first sit down at a poker table, you have exactly the same number of chips as everyone else at the table and, therefore, exactly the same chance as anyone else of winning money in the tournament (skill aside). This means that your cEV is equal to your $EV. After you play your first hand, if you win 100 chips (a very small amount), you have gained a cEV of +100 and slightly increased your $EV. The two values have not yet diverged significantly.


The bubble is defined as the last position in the tournament that doesn't pay out. As tournament play progresses toward the bubble, significant changes happen. Imagine that there are seven players remaining, the top six places pay out, and you are dealt a hand like JJ. A player (who has you covered) goes all in. If you call, you are probably about 65% to double your chips and lock up at least the minimum payout, but you are also 35% to go home with 0 chips and $0. So your cEV is positive: in a cash game, if you were 65% to win, you'd want to take this gamble. But in a tournament, it's not quite so simple. Doubling up will NOT actually double the amount of DOLLARS you expect to win, only the amount of CHIPS you hold. Your dollar gain is, roughly, the 6th place prize plus a share of the remaining prize pool, and that is worth much less than the same double-up would be in a cash game.

It's a complicated concept, but in theory, the thing to remember is this: every chip you gain is worth slightly less than the last chip you gained. Going from 1,000 chips to 1,300 chips on the first hand of a tournament is a solid win. Going from 100,000 chips to 100,300 chips late in the tournament is less valuable in terms of dollars taken home at the end of the night. In practice, this means you should be hesitant to flip coins for very small cEV gains when close to the bubble, as what appears to be a profitable bet is actually (in $EV) a losing play. Winning tournament poker players only care about the dollars they take home at the end of the night, not the quantity of chips they have.

The cardinal rule of bubble play is: "get in the money first, then get to 1st." You should be doing whatever it takes to beat the bubble, whether that's playing fewer hands, stalling your table (within reason) in order to let the blinds go up just as they pass you, or folding when a player puts you to a hard decision. At this point in a tournament, bubble dynamics trump any existing poker strategy you may have.

Due to the non-linear value of tournament chips, the chips you stand to gain are worth less than the chips you risk to win them.

Independent Chip Model (ICM)

The concept that "every chip you gain is worth less than the last chip you gained" is called the Independent Chip Model. It is a mathematical way to convert the number of chips you have into your tournament "equity" -- roughly, the amount of dollars your stack is "worth."

ICM is relatively self-explanatory if you understand the differences between cEV and $EV, and in practice we really only use ICM at the end of a tournament, when players are discussing a deal to split the remaining prize money.
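Under common simplifying assumptions, ICM can be computed with the Malmuth-Harville method: each player's chance of finishing first is proportional to their stack, and later finishes are computed the same way with the earlier finishers removed. A minimal sketch (the function name and example stacks are mine, not from any particular tool):

```python
from itertools import permutations

def icm_equity(stacks, payouts):
    """Malmuth-Harville ICM: a player's chance of finishing first is
    stack / total chips; later places are computed the same way after
    removing the earlier finishers from the pool."""
    total = sum(stacks)
    equities = [0.0] * len(stacks)
    for order in permutations(range(len(stacks))):
        # probability of this exact finishing order
        prob, remaining = 1.0, total
        for player in order:
            prob *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            if place < len(payouts):
                equities[player] += prob * payouts[place]
    return equities

# Three players left, stacks 2,000 / 1,000 / 1,000, payouts 50/30/20:
print(icm_equity([2000, 1000, 1000], [50, 30, 20]))  # roughly [38.3, 30.8, 30.8]
```

Note how the big stack holds 50% of the chips but only about 38% of the prize money: exactly the cEV/$EV divergence described above.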

After the bubble bursts

Play for first. After the bubble bursts, all considerations surrounding the bubble become much less important. The cEV/$EV disparity, which has grown throughout the tournament to this point, has largely disappeared, and your chip EV and your $EV line up once again. First place takes home the most money, and the only way to get 1st in a tournament is to gather every chip on the table, so begin playing according to cEV (cash game) strategies once again.

Basic Poker Math

By | Beginner, Mathematics | 2 Comments

Poker is a game of skill buried deep within a game of luck. In this article we're going to cover expected value, equity, risk, odds, outs, and all of the other "luck" factors about a poker hand.

Expected Value

Expected value is the amount of money you stand to win or lose, on average, when you make a bet, and it applies in any gambling situation. It's basic probability: if you're flipping a coin and you wager $5 to win $10, your possible outcomes are {0, 10}; their average, $5, is your EV, which is a $0 gain on the $5 you staked. If you wager $5 to win $20, your outcomes are {0, 20}, for an EV of $10 and a gain of $5. If you wager $5 to win $2, your outcomes are {0, 2}, for an EV of $1 and a loss of $4.

Now let's talk in terms of dice. If you wager $5 on a six to win $30, your outcomes are {0, 0, 0, 0, 0, 30}, for an EV of $5 and a gain of $0. If you wager $5 on a six to win $40, your outcomes are {0, 0, 0, 0, 0, 40}, for an EV of $6.67 and a gain of $1.67. If you wager $5 on a six to win $6, your outcomes are {0, 0, 0, 0, 0, 6}, for an EV of $1 and a loss of $4.

You want to make bets where your EV gain is GREATER THAN $0. If you make a bet with an EV of $0, you're gambling for no gain. If you make a bet where your EV is less than $0, you're gambling for a loss.
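The arithmetic above is easy to sanity-check in a few lines of Python (a sketch; the function name is mine):

```python
def expected_value(outcomes):
    """EV is just the average payoff over equally likely outcomes."""
    return sum(outcomes) / len(outcomes)

# Coin flip: wager $5 to win $10 -> outcomes {0, 10}, EV $5, a $0 gain
assert expected_value([0, 10]) == 5

# Dice: wager $5 on a six to win $40 -> EV $6.67, a $1.67 gain on the stake
assert round(expected_value([0, 0, 0, 0, 0, 40]), 2) == 6.67
```

The same two-step habit, list the outcomes, average them, compare to the stake, works for any of the bets in this section.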

Counting Your Outs in Poker

Outs are defined as any card that might come that would give you the best hand, assuming you don't already have the best hand. You usually count outs when you have something like a draw or when you have a good-but-not-great hand and think you may need to improve to beat your opponent. If you think you have the best hand, you don't have to count your outs, you're just going to try to put as much money in as possible.

Hand example #1

Let's talk about the most basic "outs" situation: a flush draw, with two spades in your hand and two more on the board. In this hand, you have to assume you don't have the best hand. If your opponent goes all in, he probably has a pair. He could just have a draw himself, in which case you're ahead, but we're not going to consider those situations because we want to talk about counting outs.

First, count how many specific cards, out of 52, would make you the best hand. There are 13 cards of every suit in a deck, and between your hand and this board we already see four spades. That means there are nine spades unaccounted for, and you can assume that any time a spade comes, you're going to have the best hand. Of course, if a spade that pairs the board arrives and gives your opponent a full house, it might cost you the hand, but that's rare enough that we still count the card as an out. So we're going to assume that nine cards would give you the best hand.

That is, of course, if you don't think your ace is an out on its own. If pairing your ace would give you the best hand, like if your opponent just had top pair, then you have some extra outs here: the three remaining aces. On this board, top pair of aces is probably enough to win the pot.

With your spades plus the ace, you've got nine outs for the flush and three outs for top pair, but the overcard won't always be an out, so it's prudent to count maybe one and a half outs in this case. That would mean that 50% of the time, your ace is good. A fair assumption, especially since you have a weak kicker.

In this hand, you've got somewhere between 9 and 11 outs. In the next section of this article, we're going to discuss how to convert your outs to equity using the Rule of Four. Until then, I'll just tell you that you're 40% to win this hand. That's really good odds if there's any kind of overlay in the pot. You can tend to get all in if you don't have a really big stack here. You can also opt to raise all in and use your fold equity to add value to your hand.

Knowing how to count outs will keep you from overvaluing your hand but also ensure that you do get the proper value for the hand you do have.
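Counting flush outs is mechanical enough to script. A sketch, assuming cards are written as rank-plus-suit strings like "As" for the ace of spades (the specific ranks below are stand-ins, since the exact cards aren't what matters):

```python
def flush_outs(hole, board, suit="s"):
    """Outs to a flush: 13 cards of the suit minus those already visible."""
    seen = sum(1 for card in hole + board if card.endswith(suit))
    return 13 - seen

# Two spades in hand, two on the board: nine spades left in the deck
print(flush_outs(["As", "7s"], ["Ks", "2s", "9h"]))  # -> 9
```

The same subtraction works for any draw: total cards that help, minus the ones you can already see.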

Read 3 examples on counting outs.

Converting Outs to Equity in Poker

Your equity is defined as the percent of the time you're going to win the hand. If you've got a flush draw, you know your equity is about 35%, but how do you get that number? You can use software called PokerStove (watch the video).

Hand example #1

Let's say we have a flush draw with an ace in our hand: two spades in the hole and two more on the board.

We know from the above section on counting outs that we have nine outs for the flush draw and probably three outs for the ace, but since those are partial outs, we'll count two outs for the ace. That gives us a total of 11 outs.

Here's where we learn the Rule of Four: multiply your number of outs by four. The result is the equity of your hand, the percent of the time you would win if the hand were to get all in right now. (The rule applies on the flop, with two cards to come.)

The Rule of Four says that if we were to get all in right here, we would win the hand about 44% of the time, because we have 11 outs. In fact, when we do use PokerStove to calculate our chance to win, the actual chance is 45%.
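The Rule of Four is an approximation to an exact calculation: with 47 unseen cards after the flop, the chance that at least one of your outs arrives on the turn or river is one minus the chance that both cards miss. A sketch comparing the two (function names are mine):

```python
from math import comb

def rule_of_four(outs):
    """Quick table estimate: outs x 4 = equity % with two cards to come."""
    return 4 * outs

def exact_equity(outs, unseen=47):
    """Exact chance that at least one of `outs` cards arrives in the next
    two draws (hole cards + flop known leaves 47 unseen cards)."""
    miss_both = comb(unseen - outs, 2) / comb(unseen, 2)
    return 100 * (1 - miss_both)

print(rule_of_four(11))            # 44
print(round(exact_equity(11), 1))  # 41.7
```

The gap between 44% and about 42% is normal for the shortcut. Matchup calculators like PokerStove can report a bit more (the 45% quoted above), likely because they run the full hand-vs-hand matchup and credit things like backdoor draws that simple out-counting ignores.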

Hand example #2

Let's say we have king-queen on a board containing a ten and a nine, so if we catch a jack we make a straight. But say our opponent has a hand like T9 for two pair: even after hitting the jack, we'd have to survive a ten or a nine coming on the river, which would fill him up. Four outs for the jack times the rule of four is about 16%, but we're actually going to be a little less than 16%.

Simply put, drawing to a gutshot against two pair gives us fairly poor equity. On the other hand, if our opponent had something like T4 instead of T9, we'd have a lot more equity, about 40%, because now our kings and queens are also outs.

The rule of four is all you need to calculate your equity at the table, and knowing your equity is good because it tells you what pot odds you need to call a bet.

Pot Odds and Implied Odds

Pot odds are the odds that the pot is laying you to call a bet.

Example: There are 300 chips in the pot, and your opponent bets 100 chips.
If you call, you’ll be putting in 20% of the pot (100 chips in a 500 chip pot).
You can call if your pot equity is greater than 20%. Remember, your pot equity is the percentage that we calculated in the last section.
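The chip counts above reduce to one line of arithmetic: your call divided by the pot as it will stand after you call. A sketch (the function name is mine):

```python
def pot_odds(pot, bet):
    """Fraction of the final pot your call represents: the equity
    threshold you need to call profitably."""
    call = bet
    return call / (pot + bet + call)

# 300 in the pot, opponent bets 100: calling 100 into a 500-chip final pot
assert pot_odds(300, 100) == 0.2  # you need at least 20% equity
```

Compare the result directly against the equity number from the Rule of Four: equity above the threshold means a profitable call.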

Implied odds represent the money that will go into the pot after you catch your draw. Calculating your implied odds is a little more involved than calculating your pot odds, but it is crucial to understanding where you stand in the hand.

To calculate your implied odds:
Step 1: Multiply the size of the pot after calling times .6.
Step 2: Multiply this number by a number between .1 and .9, an educated, opponent-dependent guess that represents his likelihood to bet the next street or call a bet on the next street. A higher number represents a greater likelihood of putting in a bet on the next street.

Example: Consider a 600 chip pot versus a very aggressive opponent who bets 300 chips on the turn.
My pot odds dictate that my draw needs 25% equity to call.
My implied odds are worth 1200 * .6 * .7, or about 500 chips. Instead of risking 300 chips to win 1200, I am risking 300 chips to win 1700. Now, I only need 17.6% equity to call.

Several more of our hands are worth a call once we consider the implied odds! Let’s talk for a minute about the “opponent-dependent educated guess” number. I typically use the following calculations:

Tight Passive Player – 0.2
Loose Passive Player – 0.4
Aggressive Player – 0.6
Maniac – 0.8
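The two steps and the player-type table combine into one small helper. This is just the article's heuristic expressed in code; the names are mine:

```python
# The "opponent-dependent educated guess" from the table above
BET_LIKELIHOOD = {
    "tight_passive": 0.2,
    "loose_passive": 0.4,
    "aggressive": 0.6,
    "maniac": 0.8,
}

def required_equity(pot, bet, opponent=None):
    """Equity needed to call a bet, optionally crediting implied odds
    via the article's heuristic: (pot after calling) * 0.6 * likelihood."""
    pot_after_call = pot + 2 * bet
    implied = 0.0
    if opponent is not None:
        implied = pot_after_call * 0.6 * BET_LIKELIHOOD[opponent]
    return bet / (pot_after_call + implied)

print(required_equity(600, 300))                     # 0.25 (pot odds only)
print(round(required_equity(600, 300, "maniac"), 3))
```

With the table's 0.6 for an aggressive player you'd need about 18.4% equity; the worked example above used 0.7, which sits between "aggressive" and "maniac" and gives the quoted 17.6%.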

Breaking Down The Math of a Continuation Bet

By | Beginner, Mathematics | One Comment

Playing with a Small Bankroll

By | Beginner, Mathematics | No Comments

While poker theory is a relatively new field, modern economic theory has been around for decades. The question "How do I maximize my long-term profits in risky ventures?" has been answered with finality. You can use this calculator to find out just how much you should buy in for in a heads-up game.

Why choose a Kelly strategy? For starters, it completely negates the concept of risk-of-ruin. A Kelly strategy involves fluidly moving up and down between limits, as dictated by your bankroll. If you go on a tear, you will move up in limits. If you run bad, you are able to drop down.

The math used in this calculator comes from this article.

Bankrolling Finally Makes Sense

By | Mathematics | 2 Comments

While poker theory is a relatively new field, modern economic theory has been around for decades. The question "How do I maximize my long-term profits in risky ventures?" has been answered with finality. The Kelly Criterion, according to Chapter 24 of Chen and Ankenman's Mathematics of Poker, will do better than any essentially different strategy in the long run. Why choose a Kelly strategy? For starters, it completely negates the concept of risk-of-ruin. A Kelly strategy involves fluidly moving up and down between limits, as dictated by your bankroll. If you go on a tear, you will move up in limits. If you run bad, you are able to drop down. Risk-of-ruin calculations start from the premise, "Assuming I will play $20 games until I am broke or robusto, what is the likelihood that I will one day be broke?" I would contend that this assumption is complete horse shit on all counts, and Kelly simply outperforms it on every level.
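Chen and Ankenman's treatment is more general than a single formula, but the classic two-outcome Kelly bet gives the flavor of how the criterion sizes risk (a sketch, not the book's derivation):

```python
def kelly_fraction(p, b):
    """Classic two-outcome Kelly: stake this fraction of your bankroll
    on a bet won with probability p at net odds b-to-1.
    A result <= 0 means the bet should not be taken at all."""
    return p - (1 - p) / b

# A 55%-to-win, even-money proposition: stake 10% of the bankroll
print(round(kelly_fraction(0.55, 1.0), 4))  # 0.1
```

Because the stake is always a fraction of the *current* bankroll, the strategy automatically moves you down in stakes as you lose and up as you win, which is exactly why ruin is never reached.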


SplitSuit Poker Coupon Code

By | Mathematics, Theory | No Comments

Coupon Code: RISK
Value: 10% off your order total

My personal recommendation: James and I got our start together in the casinos of Upstate NY, where I thought I was the best player in the circuit. Then I met James -- every bit as good as I was. There are lots of "professionals" out there, and then there are people for whom poker is a profession. James takes playing and coaching poker seriously, just as he has since 2008 when we first met.

Math to Live By

By | Mathematics | One Comment

Everybody's been exposed to statistics at some point or another. The study of statistics is two-fold. First, there's applied statistics, which we use in applications like my Variance & Bankroll Calculators, to find out whether what we're experiencing is out of the norm. Second, there's statistical theory, which can help us avoid making general errors in judgment. Three major theoretical concepts stand out: the Law of Large Numbers, the Central Limit Theorem, and Sampling Bias.


Sage busts Lee Jones’ SAGE

By | Heads Up, Mathematics, Theory | No Comments

"Are you Sage?" Lee Jones asks. Yes, I am. And I'm here to show people why his system is a weaker version of the Better Than Nash Equilibriums here on Risk Oriented.

In 2006, Lee Jones created a system which tired MTT players, heads up at the end of a tournament, could memorize and use to play an effective push-bot strategy once the blinds get very big.

Fortunately for me, any time you simplify a complex system like a game theory optimal solution, cracks begin to form.

Leak: The SAGE system "stops working" above 10BB.

Exploit: Simple: if the player has more than 10BB, assume he's using a different strategy, and exploit that strategy instead. For example, a SAGE player might be push/folding when under 10BB but minraising over 10BB; the switch itself is a good indication that your opponent is using SAGE. If you can figure out the percentage of hands he minraises, you can create a counter-strategy that includes calls, 3-bets, and folds. This kind of deep-stack exploitative play is outside the scope of this article.

Leak: The SAGE system requires perfect play.

Exploit: For example, if your opponent is minraising when he gets KK or AA, but playing push/fold any other time, you know that when he pushes, KK+ is not in his range. Therefore, the hands he pushes will, on average, be weaker. If his strategy were a proper equilibrium, this weaker range would dictate a lower "SAGE" number. If he is still shoving with his entire range, we should widen our calling range slightly.

So, in short, don't use SAGE, because there are better alternatives available, for instance, the Better Than Nash Shoving Equilibriums I developed.

Better Than Nash Equilibriums for Poker (Game Theory)

By | Heads Up, Mathematics, Pro, Theory | 8 Comments

What if you were to find out the Nash equilibrium for poker that you've been using all this time was... wrong? Who actually did the math originally? Do you know? I surely don't. I've done the math and found that the Nash equilibriums for poker chart that so many new players use is actually wrong. It's true: the chart touted by thousands of poker players around the globe is bunk. It tells you to push hands you shouldn't. What is a Nash equilibrium? It is a strategy that brings you to 0EV. Playing according to Nash equilibriums for poker guarantees that you won't lose money, but the goal of poker is to win money, not merely avoid losing it.

Clearly, against a player who folds 100% of their hands, even at very large stack sizes we could profit by shoving hands that the Nash equilibrium charts would tell us to fold. This proves that there exists a maximally exploitative all-in range (called a "best response") that is different from the Nash ranges. To solve for a better solution, I considered the three variables that actually matter: your hand range, your opponent's calling range, and your effective stack size.
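I can't reproduce the author's full solution here, but the cEV comparison underlying any shove chart can be sketched. Assuming heads-up play at 0.5/1 blinds, a winner-take-both-stacks confrontation, and ignoring card-removal effects (all simplifications of mine):

```python
def shove_ev_bb(stack_bb, call_pct, equity_when_called):
    """Expected change in the small blind's stack (in big blinds) from
    shoving `stack_bb` BB heads up at 0.5/1 blinds. Simplified model:
    opponent calls call_pct% of the time, winner takes both stacks."""
    c = call_pct / 100.0
    fold_win = 1.0  # opponent folds: we pick up the big blind
    # called: win 2 * stack with probability `equity_when_called`
    call_ev = 2 * equity_when_called * stack_bb - stack_bb
    return (1 - c) * fold_win + c * call_ev

# 5BB effective, opponent calls 30% of the time, 35% equity when called:
# shoving earns +0.25BB, while open-folding the small blind costs 0.5BB.
print(shove_ev_bb(5, 30, 0.35))  # 0.25
```

Even a weak hand shows a profitable shove here precisely because of the fold equity term, which is why a best response against a tight caller shoves far wider than the standard charts.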

Register for free and get access to additional resources on our Better Than Nash solution.

1 - Beating the Nash Equilibriums for Poker

In this first chart, I will posit the assumption that our opponent will call with no more than 30% of their hands. If this is the case, then the following chart illustrates the hands you should push with based on your effective stack size.

A 30% Calling range is very loose and means the opponent knows that you are pushing light and is trying to call light in order to exploit you. Why 30%? A few reasons:

  1. Because you cannot know with any accuracy if your opponent's calling range is 25% or 35%. It seems like a good, middling range.
  2. Because it is the optimal calling range for someone shoving 40-50% of their hands. I believe people intuitively settle on something like 30% for somebody who is opening "very loose".
  3. Because many players have a real issue calling a shove with hands like Q5, even though it may be optimal.

Chart #0: Opponent Calls Your All-in 30% Of The Time

Teal: Push <20BB
Red: Push <15BB
Purple: Push <10BB
Blue: Push <7.5BB

Unless our opponent is willing to call extremely light, we are pushing 100% of our range under 7.5BB.

For the rest of these charts, the following legend applies.

Blue/Red: Always Push These Hands No Matter What (Unexploitable)
White: Push 100% Of Hands If Opponent's Folding Range Includes a SINGLE Hand Colored in Red
Chart #1: 12BB Effective Stacks

11BB chart removed. Log in or register for free to access it.

Chart #3: 10BB Effective Stacks

Chart #4: 9BB Effective Stacks
8BB chart removed. Log in or register for free to access it.

Chart #6: 7BB Effective Stacks

6BB chart removed. Log in or register for free to access it.

To clarify: if we're 7BB deep with our opponent, we shouldn't shove 32o unless our opponent folds a hand like Q8, but if he does, we should be happy to push all in with 32o. If we were to use the Nash equilibrium charts, we would fold 32o at 6BB no matter what our opponent's strategy. In fact, we would even fold hands significantly stronger than 32o, like J4o. This accounts for another 30% of our range that we could be profitably shoving instead of folding.

Against an opponent who will not fold a hand in red, you play hands in white according to the Nash Equilibrium strategy found here.

2 - What is a Nash Equilibrium?

In the 1950s, the brilliant economist John Nash developed a system by which zero-sum games can be solved. He put forward the question, "If everybody is trying to maximize the amount of money they win, what is the strategy that each player should rationally adopt?"

3 - Why do we use the Nash Equilibrium?

A sit and go is very similar to a zero-sum game where each player is trying to rationally win more money. Therefore, some enterprising poker minds (originally the Austrian Helmuth Melcher) have used Nash's equations and assumptions to develop an equilibrium strategy for play. You can find the Nash equilibrium developed by Mr. Melcher here. Unfortunately, an equilibrium strategy results in an expected value of ZERO, which means you actually lose money to the rake by playing this strategy.

4 - Why is the Nash Equilibrium insufficient?

Nash supposes several things that are simply not true about poker. Nash's assumptions:

  1. The players will do their utmost to maximize their expected payoff. -- This should be the case in poker, but emotions, fears, and irrationality still exist and can get in the way. Additionally, when using Nash equilibriums for poker, we have to assume that every game is independent of the others, but this is not the case in poker. An otherwise rational player might be hesitant to take a big risk with a large portion of his bankroll, for instance. We have no way of knowing what other factors are involved in our opponents' decisions.
  2. The players are flawless in execution. -- No human player can be flawless in their execution of any strategy. Even if we assume they were, read on...
  3. The players have sufficient intelligence to deduce the solution. Ultimately, poker is too complex to be solved. We can reach some solutions for specific questions, like, "should we go all-in preflop," but in these cases our opponent has yet to act. Once the opponent acts, the permutations of possibilities are endless. To deduce the solution, we would need to know exactly how he plays every hand.
  4. The players know the planned equilibrium strategy of all the other players. We cannot know this, ever. Best case scenario, we're playing an opponent that we know is using a specific system, exactly. A good example of this is the SAGE system. If we know this, we can define an optimal strategy, but it is not a Nash equilibrium.
  5. The players believe that a deviation in their own strategy will not cause deviations by any other players. If I deviate from my strategy, other players should and will deviate from theirs to exploit me. Why does Nash require this assumption? It's very simple. If I deviate from my strategy, and it causes my opponent to deviate in order to exploit me, he has unbalanced his strategy. If I know that he has unbalanced his strategy, I should find an exploit for his strategy. Then, he should find an exploit for mine. According to Nash, all of these levels must happen between hands, in an instant, essentially bringing us back to a Nash equilibrium. This idea that every player engages in Nth-level metagame on every hand is only possible in theory.
  6. There is common knowledge that all players meet these conditions. So, not only must each player know that the other players meet the conditions, but they must know that they all know that they meet them, and know that they know that they know that they meet them, and so on. This is not the way poker works. Nash is giving us an academic solution to a problem. It simply does not apply as well to poker as many players think it does.