Expected Value (EV)
The foundation for Subjective Expected Utility Theory (SEU), a dominant model of rational choice under risk and uncertainty
Expected value (EV) refers to the long-term average return of a risky investment or gamble. It can be calculated with basic math when given both (1) outcome likelihoods (probabilities or other representations of how likely each outcome is) and (2) outcome values (numerical quantities in some currency, such as dollars, pizzas, casino chips, or just points or units). If those numbers are not known, EV can be approximated from estimated likelihoods and values. EV can also be inferred from experience by calculating the mean return of a repeated risky decision, as long as one has the opportunity to repeat the same decision over and over. See the next essay in this Substack for thoughts on a couple of the many challenges of estimating EV from experience. EV is one of the simplest measures for evaluating whether a decision is good or bad, with the choice that results in the highest EV being the best choice one can make, assuming that maximizing average returns is the goal and that one’s assessments of likelihoods and outcome values are accurate (two assumptions that are often false).
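For readers who like to see the math as code, here is a minimal sketch of that calculation in Python (the gamble itself is made up purely for illustration): EV is simply the sum, over all outcomes, of each outcome’s probability multiplied by its value.

```python
def expected_value(outcomes):
    """Return the EV of a gamble given (probability, value) pairs.

    The probabilities should sum to 1 for the result to be a true
    long-run average return.
    """
    return sum(p * v for p, v in outcomes)

# A hypothetical gamble: a 25% chance to win $10 and a 75% chance to lose $2.
gamble = [(0.25, 10.0), (0.75, -2.0)]
print(expected_value(gamble))  # 1.0 -> an average gain of $1 per play
```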
The dominant classical model of rational choice when making decisions under risk and uncertainty, Subjective Expected Utility (SEU) Theory, is a psychologically nuanced version of EV that takes into account the subjective nature both of probability assessment and of what one values. As such, understanding EV is central to understanding SEU, and to understanding its strengths and weaknesses as a normative and descriptive model of decision making under risk and uncertainty. Even if EV were not foundational to SEU Theory, it is unquestionably foundational to evaluating whether a casino gamble provides a positive or a negative expected return to the gambler. One of the themes of the Casino Cognition Substack is that there is far more to decision making—including gambling decision making—than EV. But EV is arguably the best place to start.
The purpose of this essay is to provide a basic introduction to EV: enough to understand its meaning and relevance both for gambling and for describing how, and how well, people make decisions under risk and uncertainty. EV is a domain-general mathematical model, “experience far” relative to the kind of thinking that generally goes into making decisions, so apologies in advance for that. That said, please bear with me (and feel free to skip the math if it doesn’t interest you): the concept is foundational to evaluating investments, gambles, and risky and uncertain decisions more generally, and so it’s a useful one to understand.
Alternatively, if the math and background below are more than you’re looking for, but you’d like a quick description of EV and a simple calculation example, see the glossary entry from the Casino Cognition website, here.
The 1654 letters between Blaise Pascal and Pierre de Fermat: the birth of expected value and probability theory
Imagine you are playing a game of one-on-one basketball to 21 points, with each of you betting $100 on the outcome. You are leading 18 to 15 when it gets too dark to continue, and so you and your opponent agree to split the pot based on each player’s likelihood of winning. For the sake of argument, imagine that the points already scored should not be taken as an indicator of a difference in skill. In other words, you assume that you each have a 50:50 chance of winning each subsequent point regardless of previous performance. How can you calculate your likelihood of winning so as to fairly divide the winnings?
A similar question was posed in 1654 by the French nobleman and noted gambler, the Chevalier de Méré, to the mathematician Blaise Pascal (of “Pascal’s Wager”), though not about basketball, of course. De Méré had been playing a game of points that had to end early, and wanted to know how to fairly divide the wager based on each player’s likelihood of winning (rather than canceling the bet altogether or giving the entire prize to the player with the early lead). The question led to a famous exchange of letters between Pascal and another noted mathematician, Pierre de Fermat (of, for example, “Fermat’s Last Theorem”). Their solution was to look at all possible ways the game could have finished if played to completion from the point when they had to stop, effectively giving birth to both probability theory, including “Pascal’s Triangle”, and the first mathematical framework for calculating expected value (EV).
EV for Roll Outcomes of a Pair of Dice
In its generalized form, expected value equals the sum, across all possible outcomes, of (a) the likelihood of each outcome multiplied by (b) the value of that outcome. An easy example is the roll of a fair pair of dice, where fair implies that each of the six sides of each die has an equal likelihood of landing face up on each roll.
A roll total of 12, which requires two sixes, will occur, on average over the long run, one in 36 times (1/6 * 1/6). If a roll total of 12 has a value of $1 and all other totals have a value of $0, then a roll of 12 has an expected value of about 2.8 cents (1/36 of a dollar), since that’s the average amount you would win on a single roll of the dice if the gamble could be repeated indefinitely. More generally, the expected value would be about 2.8% of whatever the payout might be for rolling a pair of sixes. Of course, most gambles include the possibility of losing money, and not just the promise of an unlikely win. If you were to win $1 each time you rolled a dice total of 12 and lose $1 each time you rolled any other total, the expected value of that gamble would be -94.4 cents ($1 * 1/36 - $1 * 35/36 = -$34/36, or -$0.944). In other words, you could expect to lose about 94 cents for every dollar gambled in such a game of dice, on average, if you repeated that gamble over and over again, ad infinitum.
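Here is that same calculation as a quick Python sketch, in case you’d like to verify the numbers (the +$1 and -$1 payoffs are simply the ones from the example above):

```python
from fractions import Fraction

# Win $1 on a roll total of 12 (probability 1/36); lose $1 on any other total (35/36).
p_twelve = Fraction(1, 36)
ev = p_twelve * 1 + (1 - p_twelve) * (-1)
print(ev, float(ev))  # -17/18, roughly -0.944 -> an expected loss of about 94.4 cents per roll
```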
What if instead of “all other outcomes,” you were competing against just one other dice total? For example, say you win $1 every time you roll a total of 12 and lose $1 every time you roll a total of 7, and all other outcomes are a push (you don’t win or lose anything). Is the expected value even, in your favor, or against you? While it might not be immediately obvious, 7s are far more common than 12s because there are many more ways to roll a 7 than a 12. Only one of the 36 possible dice permutations can be a 12 (two sixes). A seven can be rolled with a 2 and a 5 or a 5 and a 2 (already twice as likely as a 12, accounting for 2 of the 36 possible outcomes instead of just 1). But it can also be rolled with a 3 and a 4, a 4 and a 3, a 1 and a 6, or a 6 and a 1. Together that is 6 of the 36 possible rolls, six times more likely than getting double sixes. If you accepted a gamble from me where I paid you a dollar each time you rolled a 12 and you paid me a dollar each time you rolled a 7, and neither of us won or lost on other rolls, you would be gambling with an expected loss of 13.9 cents per round ($1 * 1/36 [your chance of winning times the amount you would win] - $1 * 6/36 [my chance of winning times the amount you would lose] + $0 * 29/36 [the chance of all other outcomes times what they would cost you, nothing] = -$5/36, or -$0.139). My expected value, on the other hand, would be the mirror image of that, an average win of 13.9 cents per round. In more general terms, your expected value would be a loss of 13.9% of the amount risked, whereas my expected value would be a gain of 13.9% of the amount risked: a sucker’s bet.
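The 12-versus-7 gamble can be checked the same way by enumerating all 36 equally likely rolls. Here is a short Python sketch of that enumeration (the payoff function simply encodes the +$1, -$1, and push outcomes described above):

```python
from fractions import Fraction
from itertools import product

def payoff(total):
    # +$1 if the two dice total 12, -$1 if they total 7, otherwise a push.
    if total == 12:
        return 1
    if total == 7:
        return -1
    return 0

rolls = product(range(1, 7), repeat=2)  # all 36 equally likely (die 1, die 2) pairs
ev = sum(Fraction(payoff(d1 + d2), 36) for d1, d2 in rolls)
print(ev, float(ev))  # -5/36, roughly -0.139 -> an expected loss of about 13.9 cents per round
```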
Back to the Basketball Example
So what about that initial basketball example? There are a few ways to solve the problem, but the easiest may be to start by imagining how many more points, at most, could be scored before a winner must be determined. In the imagined game, the opponent has 15 points, you have 18 points, and the game ends at 21. The highest possible score is 21 to 20, which could happen either if your opponent got 6 more points and you only 2 (with the opponent winning 21 to your 20), or if you got 3 more points and your opponent only 5 (with you winning). That is 8 more points in total. Since each point is equally likely to go to either player, it is a relatively straightforward (if time-consuming) process to write out all 256 possible permutations of 8 rounds of play and then simply count what percentage of them give you 3 or more points (in which case you would be the winner) and what percentage give your opponent 6 or more points (in which case they would be the winner). Of course, you would often win before 8 rounds are played, and your opponent could win in fewer than 8 rounds too, but imagining that all 8 rounds are played to completion even when the game is decided earlier allows for considering the entire problem space of equally probable outcomes, which makes the EV easier to conceptualize. In practice, it is easier to limit the analysis to the fewer scenarios in which your opponent would win (which requires 6, 7, or 8 wins across the 8 total rounds), since your opponent’s likelihood of getting the 6 points they need before you get the 3 points you need is exactly the complement of your likelihood of getting 3 points first, and counting those scenarios requires fewer steps.
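That brute-force counting is exactly the kind of chore a computer is good at. The short Python sketch below enumerates all 256 equally likely ways the next 8 points could fall and counts how often your opponent gets the 6 or more of them needed to win:

```python
from itertools import product

# Each of the next 8 points goes to You ('Y') or the Opponent ('O'), each with a 50:50 chance.
sequences = list(product("YO", repeat=8))   # 2**8 = 256 equally likely sequences
opponent_wins = sum(seq.count("O") >= 6 for seq in sequences)

print(opponent_wins, len(sequences))        # 37 256
print(opponent_wins / len(sequences))       # 0.14453125 -> your opponent wins about 14.45% of the time
print(1 - opponent_wins / len(sequences))   # 0.85546875 -> you win about 85.55% of the time
```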
Luckily, there are nice mathematical formulas for calculating permutations and corresponding likelihoods thanks to that correspondence between Pascal and Fermat. The most general version for repeated events with exactly two possible outcomes (in this case either a point for you or a point for your opponent) is the binomial probability formula:
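P(k) = [n! / (k! × (n − k)!)] × pᵏ × (1 − p)ⁿ⁻ᵏ

where n is the number of points played (8 in this case), k is the number of those points that you win, p is the probability of your winning any single point (50% here), and the bracketed term is the binomial coefficient: the number of distinct orders in which you could win exactly k of the n points.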
The formula would need to be applied repeatedly to compute the probability for each value of k from 0 to 8 (where k is the number of points you won after 8 points have been scored by either you or your opponent), and then the likelihoods would need to be added up for the values of k where you received 2 or fewer points (and therefore your opponent received 6 or more points and won; that is, the cases where k = 0, 1, or 2). Luckily, with 50:50 likelihoods for each of the two possible outcomes (winning a point and losing a point each have a likelihood of 50% in this scenario), the formula simplifies to the binomial coefficient divided by the total number of possible permutations across the 8 points (2⁸, or 256 permutations).
When k = 0, there is 1 permutation (LLLLLLLL), and the likelihood is therefore 1/256.
When k = 1, there are 8 permutations (LLLLLLLW, LLLLLLWL, etc.), so the likelihood is 8/256.
When k = 2, there are 28 permutations, so the likelihood is 28/256.
The total likelihood of your opponent winning, then, is the sum of those three probabilities (1/256 + 8/256 + 28/256 = 37/256 = 0.1445 = 14.45%). Your opponent could be expected to win 14.45% of the time and you could be expected to win 85.55% (100%-14.45%).
In dollars, a fair division of the pot would be for you to take 85.55% of the $200 pot ($100 from each player), or $171.09, ending with a gain of $71.09 on the hundred dollars you bet. Your opponent, on the other hand, should get back 14.45% of that $200, or just $28.91, a loss of $71.09.
Note that your EV is $71.09 (or 71.09% of the amount wagered), not the 85.55% representing your likelihood of winning. Likelihood of winning times the amount to be won ($100 in this example) is not the same thing as EV, because it does not take into account the amount you can lose. If you have a 50% chance of winning $100 but also a 50% chance of losing that amount, then your EV is not $50 (50% of the stake), it is $0. On average, over the long run, you will end up with the same amount you started with. In this case, with an 85.55% chance of winning $100 and a 14.45% chance of losing $100, the EV calculation is as follows: $100 * 85.55% - $100 * 14.45% = $71.09, or 71.09% of the amount wagered.
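If you would rather not write out permutations or work through the formula by hand, here is a short Python sketch that computes the three binomial terms, the two win probabilities, and the resulting fair split of the $200 pot:

```python
from fractions import Fraction
from math import comb

n = 8                                                            # points in the imagined played-out game
p_opponent = sum(Fraction(comb(n, k), 2**n) for k in range(3))   # k = your points: 0, 1, or 2
p_you = 1 - p_opponent

print(p_opponent, float(p_opponent))   # 37/256, about 14.45%
print(p_you, float(p_you))             # 219/256, about 85.55%

pot = 200                              # $100 from each player
print(float(p_you * pot))              # ~171.09 -> your fair share of the pot
print(float(p_you * pot) - 100)        # ~71.09  -> your expected gain (EV) on your $100 stake
```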
Estimating EV has come to be a central first step in assessing the value of any potential investment or gamble. It answers the question of what percentage of one’s “investment” one can expect to gain or lose, on average. As suggested, it also turns out to serve as the basis for the dominant domain-general mathematical model of rational choice, foundational to game theory, to utility theory, and to much of the field of behavioral decision theory.