Notes - talk by Bo Waggoner, Governance Games at Edge Esmeralda, June 5, 2025.
Abstract. This talk discusses mathematical foundations for how groups make decisions. From my particular perspective, it lays out a framework for thinking about the problem and summarizes what we know and what we don't, according to the microeconomics and algorithmic game theory literature. I'll highlight places where research is needed to bridge the gap to governance needs in practice. I'll discuss my recent paper with Mary Monroe, Public Projects with Preferences and Predictions, as a case study.
This talk is about governance, but governance happens over time in a system. Really, this talk focuses on the point in time when a governance system needs to make a well-defined decision between well-defined alternatives. The governance process for getting to that point may be more important than the process I discuss here, but this is important too.
So I will mostly consider a setting where we have a set of agents (participants, decisionmakers) and a set of alternatives (candidates, options). The participants need to collectively decide on one alternative. As I alluded to, this is a moment that happens as part of a larger ecosystem.
This talk will be a lightning summary of what we know, or don't, mathematically, about this problem. It is highly biased toward the literature and research community I know, algorithmic game theory and microeconomics. It's based on what we can prove theoretically. A theoretical proof doesn't guarantee something will work, but it can be a proof of concept or show why an "elegant idea" is elegant. I think of it as drawing a stick figure of the bridge and checking whether the bridge will fall down. If you go straight into practice, all the messy details can sometimes obscure the fact that the supports you designed can't actually hold the thing up. So let's draw the stick figure and see.
The talk is organized around these four topics:
If a group needs to make a decision, one paradigm is preference aggregation: we need to find a compromise between what different people want. Literature on this is very classical, and also active today.
Most literature treats preferences as innate and unchanging. Roughly, we can put changeable preferences in the category of "information" below.
Two key models for preference aggregation are (1) cardinal transferable utility -- "settings with money" -- and (2) ordinal utility -- "no money allowed".
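To make the two models concrete, here is a minimal sketch (all names and utility numbers are illustrative assumptions, not from the talk): with transferable utility we can sum money-denominated utilities across agents; with only ordinal preferences we must fall back on a voting rule over rankings (Borda count is used here as one standard example).

```python
# Cardinal, transferable utility ("settings with money"): utilities are in
# comparable money units, so we can pick the alternative maximizing the sum.
cardinal = {
    "alice": {"A": 10.0, "B": 2.0},
    "bob":   {"A": 1.0,  "B": 4.0},
    "carol": {"A": 1.0,  "B": 4.0},
}

def utilitarian_winner(utilities):
    """Pick the alternative maximizing total (transferable) utility."""
    alts = next(iter(utilities.values())).keys()
    return max(alts, key=lambda k: sum(u[k] for u in utilities.values()))

# Ordinal utility ("no money allowed"): only rankings are available,
# so we apply a voting rule -- Borda count here.
rankings = {
    "alice": ["A", "B"],
    "bob":   ["B", "A"],
    "carol": ["B", "A"],
}

def borda_winner(rankings):
    """Score each alternative by its positions in voters' rankings."""
    scores = {}
    for ranking in rankings.values():
        n = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (n - 1 - pos)
    return max(scores, key=scores.get)

print(utilitarian_winner(cardinal))  # "A": alice's intensity counts
print(borda_winner(rankings))        # "B": only the majority's order counts
```

Note that the two models disagree on this toy instance: alice's strong cardinal preference for A is invisible once we only see rankings.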
So aggregating preferences is really hard. However, a common contention in this community is that often, the preferences of the community are relatively aligned, and therefore the primary challenge in our decisionmaking contexts is to aggregate information.
So, a framework is to say that preferences should guide high-level vision and agenda-setting, whereas information should guide specific decisions. This is the idea behind futarchy, as a primary example. We can also view that as the idea of representative democracy, where an infrequent vote registers voter preferences, and then our elected "experts" implement the preferences using their wisdom and competence.
A completely different paradigm for decisionmaking is information aggregation: we need to collectively figure out what decision is (objectively) best.
This literature is much less developed. I'll focus on what my field considers the standard model of information, Bayesian models of uncertainty about a future event of interest following the laws of probability. In these models, the future event $Y$ is the thing we all care about. It is impacted by the decision, so e.g. there is a future social welfare $U(k,y)$ that depends on the alternative $k$ and on the event realization $Y=y$. The better we can predict $Y$, the better we can choose $k$.
Each agent has some private information. There is a Bayesian posterior on $Y$ conditioned on an agent's private information, and a more accurate posterior conditioned on multiple pieces of information, etc.
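The decision problem in this model can be sketched in a few lines (the alternatives "build"/"skip" and the welfare numbers are toy assumptions): given a posterior on $Y$, choose the alternative $k$ maximizing expected welfare $E[U(k,Y)]$; better-aggregated information means a sharper posterior and a better choice.

```python
# Y takes values in {0, 1}; U[(k, y)] is social welfare if we choose
# alternative k and the event realizes as Y = y (toy numbers).
U = {
    ("build", 0): -5.0, ("build", 1): 20.0,
    ("skip",  0):  0.0, ("skip",  1):  0.0,
}

def best_alternative(posterior):
    """Choose k maximizing E[U(k, Y)] under a posterior {y: P(Y = y)}."""
    alts = {k for (k, _) in U}
    def expected(k):
        return sum(p * U[(k, y)] for y, p in posterior.items())
    return max(alts, key=expected)

# A sharper posterior (more aggregated information) can flip the decision.
print(best_alternative({0: 0.9, 1: 0.1}))  # pessimistic posterior -> "skip"
print(best_alternative({0: 0.2, 1: 0.8}))  # optimistic posterior -> "build"
```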
I highlight two useful archetypes for how information is distributed.
If you know you are in a wisdom-of-crowds case, there are potentially a lot of information aggregation mechanisms you could try, such as asking everyone for a prediction and averaging them. But if you use those in an expert-knowledge case, they will perform very poorly. If you are in an expert-knowledge case and you don't know who the experts are, this problem is challenging. We think prediction markets generally solve it, but other mechanisms don't.
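A tiny numerical sketch of why averaging works in one archetype and fails in the other (the forecast numbers are assumptions): suppose the true probability of the event is 0.8.

```python
import statistics

# Wisdom-of-crowds: many noisy-but-unbiased forecasts; averaging is accurate.
crowd = [0.7, 0.9, 0.75, 0.85, 0.8, 0.78, 0.82]
print(statistics.mean(crowd))  # close to the truth, 0.8

# Expert-knowledge: one informed forecast among uninformed agents who just
# report their 0.5 prior. Averaging drowns out the expert.
experts = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.8]
print(statistics.mean(experts))  # far from 0.8
```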
I also highlight two useful archetypes for how information is structured.
Substitutes are generally easy and tractable to aggregate. A wisdom-of-crowds scenario could be considered an extreme case of substitutes. Complements can be very hard, and in extreme cases, we don't necessarily have any reliable mechanism at all. However, we often believe that no one person holds a special piece of information that nobody else can access. If there is competition to supply information because multiple people have it, the complements problem is probably less severe. In general, if information can be elicited and aggregated at all, we think a prediction market can do it. Other mechanisms don't tend to be as general-purpose.
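An extreme complements example, as a sketch (the XOR construction is a standard illustration, not from the talk): let $Y = s_1 \oplus s_2$, where $s_1$ and $s_2$ are independent fair coin flips held by two different agents. Each agent's signal is worthless alone, so any mechanism that aggregates individual posteriors learns nothing.

```python
import statistics

def posterior_given_s1(s1):
    """P(Y = 1 | s1) where Y = s1 XOR s2 and s2 is a fair coin flip.
    Averaging over the unknown s2, this is 0.5 regardless of s1."""
    return statistics.mean((s1 ^ s2) for s2 in (0, 1))

print(posterior_given_s1(0), posterior_given_s1(1))  # 0.5 0.5
# Both individual posteriors equal the prior, so averaging reports, or any
# posterior-based aggregation, is uninformative: the signals are pure
# complements and must be combined, not averaged.
```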
As a side note, I mention "jury theorems" and statistical interpretations of voting rules. These results assume a very strong and particular wisdom-of-crowds information structure, and show that voting rules produce accurate outcomes. This work has even been extended to delegated voting and liquid democracy type settings. But to be blunt, I don't think these models are relevant to any interesting problem in the real world. In the real world, I think we tend to be in scenarios where people get their information from largely the same sources, or where quality of information is mostly determined by expertise. These jury theorems may be better at capturing preferences than information.
In terms of theoretical guarantees, the best mechanisms we have for information aggregation are prediction markets. They incentivize agents to accurately aggregate information in a very broad range of information structures, and they don't need to be designed with knowledge about the information structures. If we're not using prediction markets, we have problems:
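For concreteness, here is a minimal sketch of the standard automated prediction market, Hanson's logarithmic market scoring rule (LMSR), for a binary event; the liquidity parameter value and trade sizes are assumptions.

```python
import math

class LMSR:
    """Hanson's logarithmic market scoring rule for a binary event Y.

    The liquidity parameter b controls price sensitivity; the sponsor's
    worst-case loss (the subsidy) is bounded by b * ln(2) for two outcomes.
    """
    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for outcomes Y=0, Y=1

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current market probability of the given outcome."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Return the cost to buy `shares` of `outcome`; update state."""
        new_q = list(self.q)
        new_q[outcome] += shares
        c = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return c

m = LMSR(b=100.0)
m.buy(1, 50.0)               # a trader who believes Y=1 is likely buys in
print(round(m.price(1), 3))  # 0.622: the price has moved above 0.5
```

A trader moves the price toward their belief and profits if they are right once the market settles on the realized $Y$; this is the incentive to aggregate information into the price.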
For example, in a small, motivated group, you might solve these by having a discussion and coming to an informal consensus.
Aggregating information for decisionmaking is even harder, at least when we have incentive concerns. Since our information aggregation mechanisms seem to need to be prediction markets, our decisionmaking mechanisms can be "decision markets": run a prediction market for predicting an objective (e.g. social welfare) conditioned on each alternative we could choose, and choose the alternative that is expected to maximize it. (Observe the outcome and pay out the market; but cancel all the trades in markets where we didn't take the decision and never observe the outcome.)
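The decision-market loop just described can be sketched as follows; the interface (a `market_forecast` function standing in for the closing price of each conditional market) and the alternatives are assumptions for illustration.

```python
def decision_market(alternatives, market_forecast):
    """market_forecast(k) -> forecast of the objective (e.g. social welfare)
    conditioned on choosing alternative k, e.g. a conditional market's
    closing price. Returns the chosen alternative and the markets to cancel."""
    forecasts = {k: market_forecast(k) for k in alternatives}
    chosen = max(forecasts, key=forecasts.get)
    # Markets for k != chosen are cancelled (all trades refunded), since
    # their conditional outcomes are never observed; the chosen market
    # settles later on the realized outcome.
    cancelled = [k for k in alternatives if k != chosen]
    return chosen, cancelled

chosen, cancelled = decision_market(
    ["build", "repair", "skip"],
    {"build": 0.7, "repair": 0.6, "skip": 0.4}.get,  # assumed closing prices
)
print(chosen, cancelled)  # build ['repair', 'skip']
```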
There are a number of incentive challenges here already, e.g. we need to use randomized decision rules to ensure good incentives, which can decrease the quality of our decisions.
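One standard form of such a randomized rule, sketched under assumed parameters: give the forecast-maximizing alternative most of the probability, but give every alternative at least some probability $\epsilon$, so that every conditional market has a chance of settling. This repairs incentives at the cost of sometimes deliberately taking a worse decision.

```python
import random

def randomized_decision_rule(forecasts, epsilon=0.05, rng=random):
    """Full-support decision rule: pick the forecast-maximizing alternative
    with high probability, but give each other alternative probability
    epsilon (an assumed tuning parameter) so its conditional market can
    settle -- this is what restores good incentives, at some welfare cost."""
    alts = list(forecasts)
    best = max(alts, key=forecasts.get)
    weights = [epsilon if k != best else 1 - epsilon * (len(alts) - 1)
               for k in alts]
    return rng.choices(alts, weights=weights, k=1)[0]
```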
However, the main drawbacks here are assumptions that information providers already have information available (no effort needed) and do not care about the decision (no incentive misalignment).
Let's talk about governance. (This section is based on my blog post.) Why not direct democracy? For that matter, why don't corporate shareholders directly vote on the day-to-day operational decisions of the company?
One key reason may be effort: it takes resources, e.g. time, to figure out what decision to make. So, we need to hire people with the right expertise to spend the time to make these decisions well.
We have seen this evolution in DAOs with delegated voting, where we went from a dream of direct democracy, to essentially a class of professionalized delegates who have significant expertise and spend significant effort on governance decisionmaking. This mirrors the development of representative democracy, and mirrors corporate structures where shareholders hire a board of directors to manage the company for them (and there are many layers of hiring and delegation below the board).
In my field's literature, while there's some research on effort in gathering information, there's not a lot, and very little when it comes to decisionmaking (as opposed to forecasting).
Some proposals, like liquid democracy, don't really seem compatible with problems where effort to gather information is a key component. On the other hand, prediction markets are highly compatible with effortful settings, because if the market rewards are subsidized or scaled up enough, there is a high incentive to exert effort, find good information, and trade on it.
But it's important to notice that markets aren't a free lunch. Financial markets tend to aggregate information for free because there are lots of "sheep" who want to trade for reasons unrelated to information, such as investing for retirement or hedging risk, so lots of "wolves" show up to trade against them and aggregate information into prices. But in a prediction market, who is subsidizing the wolves? You probably need to subsidize the market. Which is fine; you're paying for information, and one way to do that would be to hire an expert; this is another way. But it can also require thought to design the market so that money is spent efficiently.
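To put a number on "you probably need to subsidize the market": for the LMSR, the sponsor's worst-case loss is $b \ln K$ for $K$ mutually exclusive outcomes, a standard fact; the liquidity value below is an assumption.

```python
import math

def lmsr_subsidy(b, num_outcomes):
    """Worst-case sponsor loss for an LMSR market: b * ln(K).
    This is the maximum you pay for the information the market elicits."""
    return b * math.log(num_outcomes)

print(lmsr_subsidy(100.0, 2))  # ~69.3 for a binary market with b = 100
```

Higher $b$ means prices are harder to move, so traders can profit more from good information, but the sponsor's bill grows linearly in $b$; choosing $b$ is part of spending the subsidy efficiently.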
We want to make good collective decisions. Because that requires good information, we often need expertise and effort. This naturally leads to systems of representatives/delegates/employees who gather information and use expertise to implement our agendas. But this creates a principal-agent problem, discussed next.
When we move from direct democracy to representative democracy, incentive alignment becomes a big problem -- this is true in political settings, and also corporate settings. The decisionmakers' incentives are not, by default, aligned with their constituents'. The decisionmakers may in particular make decisions that give more money and power to the decisionmakers, and so on: capture. (See the blog post.)
The first time I found blockchain really interesting was when I realized that it was asking the question: how does an organization avoid capture?
I want to mention the purpose of game-theoretic tools, like assuming agents are Bayesian and rational and studying the equilibria. I view incentives as sources of gravity placed in a landscape. We want to design systems where good things happen at the gravity wells. This is not a guarantee that the system will fall into the gravity well or even tend toward it. Maybe people will be systematically non-Bayesian, "irrational", etc. over the long run (though if you assume they definitely will be, we would expect a good justification and prediction of how exactly). But you generally at least want the gravity well location to be compatible with what you're trying to accomplish.
We have settings where the agents have preferences and, potentially, information. But in general, collecting and aggregating information takes time and effort, so we may need professionals to do this on behalf of the agents. But we have incentive alignment issues for the experts.
Here is a mechanism for decisionmaking in a particular setting that exhibits these features, from my joint work Public Projects with Preferences and Predictions.
Under strong, but standard assumptions -- quasilinear utility (meaning we're in a monetary setting), Bayesian rationality, etc. -- we can prove welfare guarantees for this mechanism! With two alternatives, the price of anarchy approaches 1 as the size of the voter group grows relative to any one voter.
Thinking a bit more about how this mechanism navigates the design space, it has these features:
In much of our literature on eliciting information and effort, particularly the "peer prediction" literature, we have found that short-term incentives tied to one particular task are much less effective than the long-term incentive of reputation.
Reputation is highly efficient, because it allows us to save on a lot of effort of vetting and verifying on a task-by-task basis. It is most efficient to hire an expert with a good reputation, have them make decisions for a year, then evaluate their performance and decide whether to fire them (contributing to their reputation). Compare this to running a prediction market for every decision for a year and subsidizing that market enough. However, when we rely on reputation rather than guaranteeing good incentives on each decision separately, principal-agent problems become a big deal.
When designing a decisionmaking mechanism, think about at least these factors:
In all of the cases below, more references exist; only some main/representative ones are given. One tool for exploring further, beyond a paper's related-work section, is to search e.g. Google Scholar for papers that cite a given paper.
Preferences
Information
Effort
Incentives