Some of the most powerful theories in existence have come from paradoxes and thought experiments. One that continues to fascinate me is the St. Petersburg Paradox, probably because of its simplicity.
At its core, the St. Petersburg Paradox is a deceptively simple coin-flipping game. The rules are: flip a fair coin until it lands heads. The payout is 2ⁿ ducats, where n is the number of flips it took. So if the first flip is heads, you get 2 ducats. If it takes three flips, you get 8 ducats. On paper, the expected value of this game is unbounded — mathematically, you should be willing to pay any amount to play. Sounds like a great deal.
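The divergence is easy to see numerically: each outcome contributes probability 2⁻ⁿ times payout 2ⁿ = 1 ducat to the expectation, so the sum grows without bound. A minimal sketch (my own illustration, not from the original discussion):

```python
def partial_expected_value(max_flips: int) -> float:
    """Expected value of the St. Petersburg game, truncated at max_flips.

    P(game ends on flip n) = 2**-n and the payout is 2**n ducats,
    so every term in the sum contributes exactly 1 ducat.
    """
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

print(partial_expected_value(10))   # 10.0
print(partial_expected_value(100))  # 100.0 — one more ducat per extra term
```

Cap the game at n flips and the fair price is n ducats; remove the cap and the "fair" price is infinite.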
But here’s the twist: when Daniel Bernoulli informally polled people on how much they would actually pay to enter the game, no one was willing to go beyond 20 ducats. That’s the paradox. Despite its theoretical appeal, real people in the real world don’t make decisions the way this math predicts.
At first glance, this seems intuitive: nobody feels infinite excitement over the tiny chance of a massive win. But resolving this paradox led to something deeper — the spark that eventually led to Utility Theory.
Bernoulli proposed that humans don’t make decisions based solely on raw monetary value. Instead, we act based on the utility that money brings us, which diminishes as we get wealthier. In other words, earning your first 10 ducats might be life-changing, but your 10,000th? Not so much.
He captured this using a logarithmic function — utility grows with wealth, but at a decreasing rate. When you run the expected value of the game through this utility function instead of raw payoffs, suddenly the expectation becomes finite and reasonable. The paradox dissolves.
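To make that concrete, here is a minimal sketch of the same calculation under log utility. (A simplification of Bernoulli's original analysis, which applied the logarithm to the player's total wealth; here it is applied directly to the payout for illustration.)

```python
import math

def expected_log_utility(max_flips: int) -> float:
    """Expected utility of the game under u(x) = ln(x), truncated at max_flips.

    Term n: probability 2**-n times utility ln(2**n) = n * ln(2).
    The series sum(n / 2**n) converges to 2, so the expectation -> 2 ln 2.
    """
    return sum((0.5 ** n) * n * math.log(2) for n in range(1, max_flips + 1))

eu = expected_log_utility(200)
print(eu)             # ≈ 1.3863 (= 2 ln 2), a finite expected utility
print(math.exp(eu))   # ≈ 4 — the certainty-equivalent payout in ducats
```

Under this utility function a rational player should value the game at only about 4 ducats, which is far closer to what Bernoulli's respondents actually offered.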
This insight was profound. It formalized the idea that uncertainty and subjective value must be baked into rational decision-making. And it paved the way for later theories like Prospect Theory, which takes an even more nuanced view of human behavior under uncertainty.
The challenge of modeling utility — capturing how humans value outcomes in context — remains one of the most complex and high-leverage problems in decision science. Approaches range from handcrafted KPIs and heuristics to techniques like Inverse Reinforcement Learning.
Across that spectrum, the central question persists: how do we quantify what rational agents really want?
St. Petersburg Paradox: https://lnkd.in/gMNJUydZ
