Why Some Games Reward Alliances and Others Punish Them
Why game theorists obsess over how people can cooperate
When game theorists model a situation, they’re not just asking, “What do people want?” They’re asking a more annoying question: “What are they actually allowed to do together?”
That’s where the big divide shows up:
- In non‑cooperative games, players act individually. They can talk, threaten, bluff, whatever, but no outside authority enforces their promises, so only the actions each player actually chooses really count.
- In cooperative games, players can form binding agreements. Think contracts, treaties, or any arrangement where walking away has real consequences.
So the real question isn’t, “Do people talk or coordinate?” People talk all the time. The question is: Can they commit in a way that’s enforceable? If they can, you’re drifting toward the cooperative side. If they can’t, you’re in the non‑cooperative world, where promises are cheap and betrayal is just another strategy.
A prison, a bridge, and a boardroom: three familiar situations
Take three classic scenarios you’ve probably seen in some form.
First, the famous Prisoner’s Dilemma. Two suspects are interrogated separately. Each can either stay silent or betray the other. The payoffs are set up so that betraying is individually better no matter what the other does, but if both betray, they both end up worse off than if they had both stayed silent. No binding promises, no enforceable deals. That’s a textbook non‑cooperative game.
Now imagine two cities deciding whether to share the cost of a new bridge. They can sign a legal contract, enforce contributions through courts, and specify how toll revenue will be split for the next 30 years. That setting is naturally modeled as a cooperative game: the key object is the coalition (the cities together) and how they share the gains.
Finally, picture three tech firms around a table, discussing a joint standard. They can form alliances, but antitrust law limits what they can enforce. Some side deals are legal, some are not, and some are just “gentlemen’s agreements” that might evaporate under pressure. That’s actually messy: part cooperative, part non‑cooperative. Game theory forces you to decide which promises are real, and which are just wishful thinking.
Non‑cooperative games: where every move is on your own head
Non‑cooperative game theory is the home of a lot of famous puzzles and strategy problems.
What actually defines a non‑cooperative game?
Under the hood, a non‑cooperative game usually comes with:
- A set of players (two prisoners, three firms, many voters, etc.).
- A strategy set for each player (confess or stay silent, set a high price or low price, choose rock/paper/scissors).
- A payoff for each player, depending on everyone’s strategies.
The key point: no binding agreements are built into the model. Any coordination has to emerge from individual incentives.
The central solution concept here is the Nash equilibrium: a strategy profile where no player can improve their payoff by unilaterally changing their strategy, given what everyone else is doing. It’s not about fairness, and it’s not about what they “should” agree to morally. It’s about what’s stable when everyone is selfish and rational under the rules of the game.
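That "no profitable unilateral deviation" test is easy to automate for small games. Here's a minimal sketch in Python that finds the pure-strategy Nash equilibria of a two-player game; the specific payoff numbers are illustrative Prisoner's Dilemma values (higher is better), not taken from the article:

```python
from itertools import product

# Payoffs are (row player, column player). These particular numbers are an
# assumed, standard Prisoner's Dilemma parameterization: betraying dominates,
# yet mutual silence beats mutual betrayal.
STRATEGIES = ["silent", "betray"]
PAYOFFS = {
    ("silent", "silent"): (3, 3),
    ("silent", "betray"): (0, 5),
    ("betray", "silent"): (5, 0),
    ("betray", "betray"): (1, 1),
}

def is_nash(row, col):
    """True if neither player can gain by unilaterally switching strategies."""
    row_ok = all(PAYOFFS[(r, col)][0] <= PAYOFFS[(row, col)][0] for r in STRATEGIES)
    col_ok = all(PAYOFFS[(row, c)][1] <= PAYOFFS[(row, col)][1] for c in STRATEGIES)
    return row_ok and col_ok

equilibria = [profile for profile in product(STRATEGIES, STRATEGIES) if is_nash(*profile)]
print(equilibria)  # [('betray', 'betray')]
```

Note what the check finds: mutual silence is *not* an equilibrium, because each prisoner would gain by switching to betrayal, so the only stable profile is mutual betrayal.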
Why puzzles like the Prisoner’s Dilemma feel so annoying
In the Prisoner’s Dilemma, “betray” is a dominant strategy for each player. That means: whatever the other does, betraying gives you a better payoff. So the unique Nash equilibrium is that both betray.
But if you step back and look at the outcome, both are worse off than if they had both stayed silent. You get this weird tension:
- Individually rational behavior → both betray.
- Collectively better outcome → both stay silent.
This is exactly the kind of conflict non‑cooperative games are good at exposing. They show you how individual incentives can clash with group welfare, even when everyone is perfectly rational.
A quick detour into real‑world strategy
Think about two competing airlines deciding on ticket prices for a popular route. Each can choose high prices (fat margins) or low prices (steal market share). If both go high, they both earn nicely. If one goes low and the other stays high, the discounter wins big. If both go low, profits collapse.
No enforceable agreement to fix prices is allowed — that’s literally illegal under U.S. antitrust law. So whatever they “hint” to each other at conferences, the underlying model is non‑cooperative. The Nash equilibrium often ends up with both pushing prices down more than they’d like, mirroring the logic of the Prisoner’s Dilemma.
For more formal introductions to this style of modeling, economics departments at places like MIT and Stanford host open courseware that walks through standard examples.
Cooperative games: where coalitions matter more than individuals
Now flip the lens. Instead of asking, “What strategy does each player pick?” cooperative game theory asks, “If any group of players can band together, what can they guarantee themselves?”
The coalition viewpoint
In a cooperative game, you usually start with:
- A set of players.
- A worth function (often called a characteristic function) that assigns a value to every possible coalition: what can that group achieve if they work together and act as a unit?
The game then becomes a question of how to split that value among the members of a coalition so that nobody has a strong incentive to walk away and form a different group.
This is where concepts like the core and the Shapley value live:
- The core is the set of payoff allocations where no coalition can break away and make all of its members strictly better off.
- The Shapley value assigns a payoff to each player based on their average marginal contribution across all possible orders in which the grand coalition could form.
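In symbols, using the standard definitions (with player set N, n players, and worth function v):

```latex
% An allocation x is in the core if it splits v(N) and no coalition S can beat it on its own:
\sum_{i \in N} x_i = v(N),
\qquad
\sum_{i \in S} x_i \ \ge\ v(S) \quad \text{for every coalition } S \subseteq N.

% The Shapley value of player i averages marginal contributions over all coalitions without i:
\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
```

The factorial weight in the Shapley formula is just the fraction of orderings in which exactly the coalition S arrives before player i.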
When three companies try to share a pie
Imagine three companies — call them A, B, and C — that can jointly build a new logistics network. Any subset of them can collaborate, but the more players involved, the more value they can create.
Maybe:
- A and B together can earn $10 million.
- B and C together can earn $12 million.
- A and C together can earn $9 million.
- All three together can earn $20 million.
The cooperative question is not just “Will they all join?” but also: “How do they divide the $20 million so that nobody wants to peel off into a smaller alliance?”
If A and C feel underpaid, they might leave B out and form their own coalition. If B and C can do better on their own, they might do exactly that.
Cooperative game theory gives you tools to:
- Check whether there is any allocation in the core (so no subgroup wants to defect).
- Compute something like the Shapley value to capture each company’s average contribution and use that as a bargaining benchmark.
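For the numbers above, both checks can be done by brute force over the six orderings of A, B, and C. One assumption in the sketch below: each company alone earns nothing, since the article only gives values for pairs and the grand coalition:

```python
from itertools import permutations

# Worth function from the example ($ millions). Singleton values of 0 are an
# assumption; the article does not specify what each company earns alone.
v = {
    frozenset(): 0,
    frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
    frozenset("AB"): 10, frozenset("BC"): 12, frozenset("AC"): 9,
    frozenset("ABC"): 20,
}
players = ["A", "B", "C"]

# Shapley value: average each player's marginal contribution over all orders
# in which the grand coalition could form.
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        shapley[p] += (v[coalition | {p}] - v[coalition]) / len(orders)
        coalition = coalition | {p}

print({p: round(x, 2) for p, x in shapley.items()})
# {'A': 5.83, 'B': 7.33, 'C': 6.83}

# Core check: does any coalition earn more on its own than this allocation pays it?
blocking = [S for S in v if sum(shapley[p] for p in S) < v[S] - 1e-9]
print(blocking)  # [] -> no coalition wants to break away; the allocation is in the core
```

Here the Shapley allocation happens to lie in the core: every pair is paid at least what it could earn alone (for instance, A and C receive about $12.67 million together, comfortably above the $9 million they could get by ditching B).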
A real‑world flavor: political coalitions
Think about a parliamentary system where no single party has a majority. Parties must form coalitions to govern. A small “pivotal” party can suddenly become very powerful if it’s needed to reach a majority, even if its vote share is modest.
Cooperative models capture that leverage by looking at which coalitions can reach a majority and how often each party is “critical” in forming a winning coalition. Power indices like the Banzhaf index or the Shapley–Shubik index are used to quantify that influence.
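To make "critical" concrete, here's a sketch of the normalized Banzhaf index for a hypothetical 100-seat parliament: three parties with 48, 44, and 8 seats and a 51-seat majority quota. The party names and seat counts are invented for illustration:

```python
from itertools import combinations

seats = {"Party X": 48, "Party Y": 44, "Party Z": 8}  # hypothetical parliament
quota = 51  # seats needed for a majority out of 100

# A party is "critical" (a swing) in a winning coalition if removing it
# drops the coalition below the quota.
swings = {p: 0 for p in seats}
parties = list(seats)
for r in range(1, len(parties) + 1):
    for coalition in combinations(parties, r):
        total = sum(seats[p] for p in coalition)
        if total >= quota:
            for p in coalition:
                if total - seats[p] < quota:
                    swings[p] += 1

total_swings = sum(swings.values())
banzhaf = {p: swings[p] / total_swings for p in parties}
print(banzhaf)  # each party gets exactly 1/3 of the swings
```

Despite holding only 8 seats, Party Z is critical in exactly as many winning coalitions as each big party (any two parties clear the quota, and no party is critical in the grand coalition), so all three end up with the same Banzhaf index of 1/3. That is the "pivotal small party" effect in miniature.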
If you’re curious about how these ideas are used in political science and economics, many university lecture notes hosted on .edu domains (for example, Yale’s open courses) walk through coalition formation and power indices in more detail.
The same situation can be modeled both ways — and that matters
Here’s where things get interesting for puzzle and game‑theory fans: the very same real‑world scenario can often be represented as either a cooperative or a non‑cooperative game, depending on what you assume about enforceability.
Take climate agreements. Countries meet, promise emissions cuts, and sign treaties. But enforcement is tricky. Sanctions are political, monitoring is noisy, and backing out is always an option.
- If you treat the treaty as fully enforceable, you lean toward a cooperative model: the “grand coalition” of all countries chooses a joint plan and splits the costs and benefits.
- If you think enforcement is weak, you lean toward non‑cooperative repeated games: each country chooses its emissions path, anticipating how others will react over time.
The same goes for business cartels, labor unions, or even roommates sharing rent. The line between “we can really commit” and “we’re just hoping everyone behaves” is actually pretty thin.
Game theory forces you to make that line explicit.
Why puzzle lovers should care about the cooperative vs non‑cooperative split
If you enjoy logic puzzles or strategic board games, you’re already playing with these ideas, whether you call them that or not.
Think about a negotiation game where players can form temporary alliances. If alliances are just talk — no binding agreements — then any “promise” can be broken the moment it stops being convenient. You’re in non‑cooperative territory, and you should be thinking in terms of sequential moves, credible threats, and Nash equilibria.
But if the game rules say, “Once an alliance is formed, it cannot be broken for three turns, and profits are shared exactly as written,” that’s almost a miniature cooperative game. Coalitions are real objects with enforceable consequences.
In puzzle design, this choice is actually a design lever:
- Want tension and betrayal? Remove enforceable commitments.
- Want long‑term planning and stable alliances? Bake in binding coalition rules.
Once you start seeing this, you notice that many social games, from online multiplayer titles to tabletop strategy, live somewhere on the spectrum between purely non‑cooperative and fully cooperative — and the feel of the game depends heavily on where it sits.
When the two theories talk to each other
There’s a long‑running debate in the field: Is cooperative game theory just a shortcut for analyzing very complex non‑cooperative games?
One influential idea, known as the Nash program, is that you can take a cooperative setting and "unpack" it into a detailed non‑cooperative model that includes bargaining procedures, communication protocols, and enforcement mechanisms. Under certain conditions, the outcomes of that non‑cooperative bargaining game line up with cooperative solutions like the core or the Shapley value.
The punchline: cooperative and non‑cooperative models aren’t enemies. They’re two ways of looking at the same underlying strategic situation, with different levels of detail.
- Use non‑cooperative models when you care about the timing of moves, threats, and specific strategies.
- Use cooperative models when you care about how groups share surplus and which coalitions are stable in the long run.
Frequently asked questions
Are real‑world negotiations more cooperative or non‑cooperative?
Usually both. The legal and institutional environment determines which promises are enforceable. Contracts, regulations, and courts push you toward the cooperative side. Informal norms, reputation, and one‑off deals keep a large non‑cooperative component in play. A good model chooses which parts are realistically enforceable and which are just talk.
Is the Prisoner’s Dilemma always non‑cooperative?
In its classic form, yes. The whole point is that the prisoners cannot make binding agreements. But if you let them sign enforceable contracts before being separated, you’ve changed the game. Suddenly, they can commit to staying silent together, and the cooperative analysis becomes relevant.
Do cooperative games ignore strategy and timing?
They mostly abstract away from the detailed sequence of moves. Cooperative models focus on which coalitions can form and how to divide payoffs in a way that’s stable. If you care about who moves first, who proposes what, and how threats are made, you’re better off with a non‑cooperative bargaining model.
Is one approach “more realistic” than the other?
Not really. Each is realistic for certain questions. If you’re studying bidding in an auction, a non‑cooperative model fits better. If you’re analyzing how three firms might share the profits from a joint venture, cooperative tools are very natural. The realism comes from how well your assumptions match the actual rules and institutions in the situation you’re modeling.
Where can I read more about game theory from reputable sources?
For accessible introductions and course materials, university and public‑education sites are a good starting point. Many economics departments host lecture notes and problem sets on their .edu pages, and organizations like the National Science Foundation often highlight research projects that apply game theory to real‑world problems.
One question to keep asking: what can people really promise?
If you take nothing else away, keep this question in your back pocket whenever you face a strategic situation, from a puzzle to a real negotiation:
“What can the players actually promise — and what happens if they break that promise?”
If the answer is “not much,” you’re in a non‑cooperative world, where individual incentives and equilibrium strategies rule the day. If the answer is “quite a lot, with real penalties for defecting,” then you’re closer to the cooperative side, where coalitions and payoff‑sharing become the main story.
Once you see that divide, a lot of otherwise confusing behavior — in games, markets, and politics — starts to look a lot more organized than it first appears.