Stable, but Stuck: When Nash Equilibria Fail Us
I’ve been reading Inadequate Equilibria recently, and it sent me down a familiar but still slightly unsettling line of thought: how often systems end up in states that are stable, self-reinforcing—and clearly not very good.
This is usually where the idea of a Nash equilibrium comes in. In simple terms, it describes a situation where no individual actor can improve their outcome by changing their strategy alone, assuming everyone else keeps doing what they’re doing. The system settles. It stops moving. But that doesn’t mean it’s optimal. It just means it’s hard to escape.
The classic example is the prisoner’s dilemma. Two actors, each making individually rational choices, end up with a worse collective outcome than if they had coordinated. What makes it powerful is not the thought experiment itself, but how disturbingly well it maps onto real-world systems.
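The definition above is mechanical enough to check by brute force. Here is a minimal sketch with made-up payoff numbers (negative utilities standing in for prison years): for each strategy profile, ask whether either player could do better by deviating alone. A profile where neither can is a Nash equilibrium.

```python
from itertools import product

# Hypothetical payoffs for a standard prisoner's dilemma.
# payoffs[(a, b)] = (utility to A, utility to B); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (-1, -1),   # both cooperate: light sentences
    ("C", "D"): (-3,  0),   # A cooperates, B defects: A takes the full hit
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),   # mutual defection: worse than mutual cooperation
}

def is_nash(a, b):
    """True if neither player gains by changing strategy unilaterally."""
    ua, ub = payoffs[(a, b)]
    a_can_improve = any(payoffs[(a2, b)][0] > ua for a2 in "CD")
    b_can_improve = any(payoffs[(a, b2)][1] > ub for b2 in "CD")
    return not (a_can_improve or b_can_improve)

equilibria = [s for s in product("CD", repeat=2) if is_nash(*s)]
print(equilibria)  # only mutual defection survives: [('D', 'D')]
```

Note that the one equilibrium, (D, D), gives each player −2 while (C, C) would give each −1: stable, and strictly worse for everyone.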
Arms Races: Stability Through Mutual Escalation
The Cold War nuclear buildup is perhaps the cleanest large-scale example. If one side arms while the other does not, the imbalance is unacceptable. So both arm. And once both are armed, neither can safely step back alone. The result is a stable equilibrium of mutual escalation—enormously costly, persistently dangerous, yet difficult to exit unilaterally.
From the perspective of each actor, the strategy is rational. From the perspective of the system, it is clearly suboptimal.
Advertising Wars: Spending to Stand Still
Something similar plays out in corporate competition. Companies like Coca-Cola and PepsiCo spend vast sums on advertising. If one side advertises heavily and the other does not, the advantage is obvious. So both advertise.
The equilibrium is stable: neither can afford to stop. But the collective outcome is questionable. Enormous resources are spent largely to maintain relative position rather than create absolute value. Each actor is doing the rational thing. The system, taken as a whole, is arguably not.
Traffic and the Illusion of Choice
Traffic congestion offers a more everyday version of the same dynamic. Each driver chooses the route that looks fastest for them. But when everyone reasons the same way, the result is congestion that leaves everyone worse off.
This connects to Braess’s paradox, where adding a new road can actually make traffic worse by changing individual incentives. Again, no driver is behaving irrationally. The outcome emerges from individually optimal decisions that do not compose well at scale.
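The paradox is easy to verify with the classic textbook instance: 4000 drivers travel from S to E via either A or B. One leg of each route is congestible (its cost grows with traffic), the other is a flat 45 minutes. The numbers below are that standard example, not anything I measured.

```python
# Edge costs in minutes: S->A and B->E take n/100 where n is the number of
# drivers on that edge; A->E and S->B take a flat 45 minutes.
N = 4000

def route_times(n_top):
    """Travel time on each route when n_top drivers take S->A->E."""
    n_bot = N - n_top
    top = n_top / 100 + 45      # S->A (congestible) + A->E (flat)
    bot = 45 + n_bot / 100      # S->B (flat) + B->E (congestible)
    return top, bot

# Equilibrium without the shortcut: an even split, where neither route is faster.
top, bot = route_times(N // 2)
print(top, bot)  # 65.0 minutes each

# Now add a free shortcut A->B. The congestible legs cost at most 40 minutes,
# always beating the flat 45, so every driver switches to S->A->B->E.
with_shortcut = N / 100 + 0 + N / 100
print(with_shortcut)  # 80.0 minutes: the new road made everyone slower
```

Deviating back to an old route costs 85 minutes (40 congested plus 45 flat), so the 80-minute outcome is itself a Nash equilibrium: individually stable, collectively worse than before the road existed.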
The Commons: Rational Depletion
Overfishing is a textbook case of what happens when shared resources meet individual incentives. Each fisher benefits from catching more fish today. If everyone restrains themselves, the system is sustainable. But no single fisher has an incentive to be the one who holds back.
The equilibrium becomes overexploitation. The long-term outcome—depleted stocks—is worse for everyone, but the short-term incentives keep the system locked in place.
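A toy simulation makes the trade-off concrete. The parameters below are purely illustrative (a stock that regrows logistically, ten fishers choosing a harvest rate); the point is only the qualitative shape: heavier harvesting wins each individual season but collapses the stock, so the cumulative catch ends up lower.

```python
def simulate(catch_frac, n_fishers=10, stock=1000.0, growth=0.3, years=30):
    """Total catch and remaining stock when every fisher takes catch_frac
    of the current stock each year. Carrying capacity fixed at 1000."""
    total_catch = 0.0
    for _ in range(years):
        harvest = min(stock, stock * catch_frac * n_fishers)
        total_catch += harvest
        stock -= harvest
        stock += growth * stock * (1 - stock / 1000.0)  # logistic regrowth
    return total_catch, stock

restrained_total, restrained_stock = simulate(catch_frac=0.02)
greedy_total, greedy_stock = simulate(catch_frac=0.06)
# Restraint yields more fish in total AND leaves a living stock;
# the greedy rate out-earns it only in the first few seasons.
```

The individual incentive problem is that a lone defector among restrained fishers still triples their own seasonal catch, which is exactly why the restrained outcome is not an equilibrium.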
Performing Busyness
Some of these equilibria are less visible but no less real. In many workplaces, long hours function as a signal of commitment or productivity. If you work less, you risk being seen as less dedicated. So everyone works more.
The equilibrium is one of collective overwork. It is stable because no individual can safely opt out. Yet it does not necessarily produce better outcomes, either in terms of productivity or well-being.
Credential Inflation
Academic and professional credentialing follows a similar pattern. More degrees, more publications, more measurable output. If you do not participate, you fall behind. So everyone does.
I was reminded of this reading The Case Against Education last year, which I found both provocative and oddly persuasive—even if much of it is framed around the U.S. system. Bryan Caplan argues that much of education functions less as skill-building and more as signaling: proof of traits like diligence, conformity, or baseline intelligence. If that is even partially true, then the escalation of credentials starts to look less like progress and more like an arms race.
The result is a steady increase in requirements with diminishing informational value. A bachelor’s becomes a baseline, then a master’s, then something more. Not necessarily because the work demands it, but because the signal needs to stay ahead of the crowd.
The equilibrium persists not because it is efficient, but because deviating from it is individually costly.
Climate Change: The Global Version
At the largest scale, climate change may be the most consequential example. Each country benefits, in the short term, from cheap and reliable fossil energy. Reducing emissions unilaterally can impose costs without immediately visible benefits.
The equilibrium becomes under-cooperation. Everyone would be better off with coordinated action, but coordination is hard, incentives are misaligned, and no single actor can fix the system alone.
Why These Equilibria Persist
What makes these situations interesting—and frustrating—is that they do not persist because people are irrational. Quite the opposite. They persist because people are responding rationally to the incentives they face.
Three forces tend to show up repeatedly:
- Misaligned incentives: What is good for the individual is not the same as what is good for the group.
- Coordination problems: Even if everyone would prefer a better outcome, getting there requires simultaneous change.
- Unilateral disadvantage: The first actor to deviate often pays a cost, even if the new state would be better if widely adopted.
Once a system settles into such a state, it can remain there for a very long time. Stability, in this sense, is not a sign of quality. It is a sign that no easy path of individual improvement exists.
Seeing the System
One of the more useful shifts in perspective is simply recognizing these patterns when they appear. Many systems that feel inefficient, frustrating, or strangely resistant to improvement are not broken in a random way. They are locked into equilibria.
This also explains why incremental individual effort often fails to fix them. If the problem is structural, then working harder within the same incentive system rarely changes the outcome. The equilibrium absorbs the effort and remains intact.
Beyond Stability
The interesting question, and perhaps the harder one, is how such equilibria can be escaped. The answer is rarely simple. It may involve coordination mechanisms, changes in incentives, regulation, or technological shifts that alter the payoff structure entirely.
But before any of that, there is a more basic step: noticing that stability is not the same as optimality.
A system can be stable, persistent, and even self-reinforcing—and still be a place no one would choose if they could redesign it from scratch.
That, more than anything else, seems to be the underlying theme: we are often not stuck because nothing better exists, but because moving to something better requires more than any one actor can do alone.