When a news story breaks and the price on a Kalshi or Polymarket contract snaps to a new level within seconds, something is being described to the public as market-based truth discovery. A crowd of informed participants, so the story goes, is pooling what they know, and the resulting price is a more accurate reflection of reality than any poll, pundit, or expert forecast. That story is the foundation of a growing industry. It is also the foundation of the Commodity Futures Trading Commission’s decision to treat event contracts on these platforms as derivatives rather than as gambling, which in turn strips away most of the consumer protections that state gambling law would otherwise require.
The story is misleading in an important and underappreciated way. The price that snaps to a new level after news breaks is only partly the output of the wisdom of a crowd. Most of the movement reflects a narrower process: a small number of sophisticated participants updating their quotes, or picking off stale orders left behind by retail participants who have not yet reacted. The difference between these two descriptions sounds technical. In practice it is the difference that determines whether the current regulatory framework rests on a sound foundation.
This piece is about that gap, and about why the regulatory structure built on the first description does not survive contact with the second.
How a Prediction Market Price Actually Forms
Start with what happens when news hits. Suppose a contract on Kalshi pays one dollar if the Federal Reserve cuts interest rates at its next meeting, and zero dollars otherwise. The contract is trading at sixty-two cents, meaning the market is pricing the cut at roughly a sixty-two percent probability. A Fed official gives an unscheduled speech that shifts the balance of expectations. Within seconds, the price moves to seventy-one cents.
Who caused that move? Not the crowd. Almost certainly, the move was accomplished by two or three actors working in milliseconds, and to see why, it helps to understand what is actually happening on the exchange at that moment.
A prediction market, like a stock exchange, works through an order book. At any given time, the order book contains standing offers from participants willing to buy at certain prices and others willing to sell at certain prices. The price you see quoted on the app is simply the price of the most recent trade, struck when one buyer's order matched one seller's. Nothing happens until two orders match.
Most of those standing offers are not posted by humans. They are posted by market-making algorithms run by professional trading firms. These algorithms continuously update the prices at which they are willing to buy and sell, based on proprietary models of what the contract should be worth. When the Fed official gives the unscheduled speech, one of these algorithms reads the transcript within milliseconds, recalculates what it thinks the contract is worth, and cancels its old offers to post new ones at the new price.
But not every order on the book is run by an algorithm. Retail participants have also placed standing orders, based on the prices that prevailed before the speech. Those orders are now stale. They reflect a view of the world that is seconds out of date but is still sitting on the book, still available to trade against. A second algorithm at a different firm notices the stale retail orders, executes against them at the old price, and instantly captures the difference between the old price and the new one. That capture is risk-free profit, and it is paid for by the retail participant whose order was picked off before they knew the news had broken.
By the time the retail participant opens the app to react, the price has already moved and the opportunities created by the news have already been harvested. The participant sees only the new price, and assumes the market has efficiently absorbed the information. In a narrow sense it has. But the absorption was accomplished by a small number of sophisticated firms extracting value from slower participants, not by a crowd collectively updating its beliefs.
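The pick-off described above can be sketched in a few lines. This is a toy illustration, not real exchange mechanics: the function name, the book representation, and the dollar figures (a stale retail ask at $0.63, post-news fair value of $0.71) are all assumptions chosen to match the Fed example.

```python
# Toy sketch of the stale-quote pick-off. All names and numbers are
# illustrative assumptions, not actual Kalshi or Polymarket mechanics.

def pick_off_stale_asks(asks, new_fair_value):
    """Buy every resting ask priced below the post-news fair value.

    asks: list of (price, size, owner) tuples, sorted by price ascending.
    Returns the fills and the picker's expected profit if the contract
    is really worth new_fair_value.
    """
    fills, profit = [], 0.0
    for price, size, owner in asks:
        if price >= new_fair_value:
            break  # remaining asks are at or above fair value: no edge
        fills.append((price, size, owner))
        profit += (new_fair_value - price) * size
    return fills, profit

# Before the speech the contract trades around $0.62. A retail trader
# left a resting ask at $0.63. The market maker's own ask is cancelled
# within milliseconds of the news; the retail order is not. Fair value
# jumps to $0.71, and the stale order is still there to trade against.
asks_after_cancel = [(0.63, 100, "retail")]
fills, profit = pick_off_stale_asks(asks_after_cancel, new_fair_value=0.71)
print(fills)             # [(0.63, 100, 'retail')]
print(round(profit, 2))  # 8.0 -> $0.08 per contract across 100 contracts
```

The profit here is the transfer the text describes: it comes entirely out of the pocket of the participant whose order was sitting on the book when the news broke.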
This is not a description of manipulation. It is a description of how modern electronic markets work. On any active prediction market contract, prices are set by four groups, in roughly this order of importance.
Market-making algorithms at prop trading firms like DRW, Susquehanna, Citadel, and Jane Street post two-sided quotes based on proprietary models. On liquid contracts, they set most of the price most of the time. The price you see reflects their models, not a weighted consensus of public opinion.
Informed traders with non-public information about the underlying event trade aggressively when they know something the market does not. A campaign staffer who has seen internal polling, a military source who has seen classified intelligence, a sports team employee who has seen an injury report, or a quant with a proprietary election model all push the price toward what they know. This is adverse selection in its textbook form, and Robin Hanson, the leading academic proponent of prediction markets, has written approvingly about it as a feature rather than a problem.
Retail participants buy and sell based on headlines, partisan conviction, hunches, and whatever narrative is dominant on social media at a given moment. Retail flow tends to move prices away from fundamental value, which is what creates the opportunities that the first two groups harvest. Patrick Boyle made the point crisply in his recent video on these markets: a retail participant who takes a position on a geopolitical event based on intuition is trading against a gamma-neutral algorithm run by a multi-billion-dollar hedge fund. That is not a skill gap that a retail participant can close by doing more homework. It is a structural asymmetry between two fundamentally different kinds of market participant.
It is worth pausing on what “gamma-neutral” means, because the phrase does real work in describing the asymmetry. A gamma-neutral trading strategy is one that has been hedged so that the firm’s profits do not depend on whether the underlying event actually happens. The firm makes money from the spread between buying and selling, from small pricing inefficiencies, and from picking off stale orders, but it has no stake in the outcome itself. The retail participant on the other side of the trade has exactly the opposite posture. That participant has a view, has committed capital based on the view, and will win or lose depending on whether the event occurs. One side cares about the outcome and has conviction; the other side is indifferent to the outcome and has mathematics. Over enough trades, the side with mathematics wins.
Finally, the news-reading bots. News events cause the abrupt price movements that make prediction markets look responsive and informed, and almost all of that movement is accomplished by algorithms parsing news feeds within milliseconds. The retail participant, even a sophisticated one, is almost never the first mover.
What emerges from this structure is the price. There is no weighting function, no aggregation algorithm, no consensus in the ordinary sense of the word. The price is whatever the last buyer and last seller agreed to. Who those two parties were, and why they traded, determines what the price actually represents.
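The asymmetry between the hedged market maker and the directional retail trader can be made concrete with a few lines of arithmetic. The prices, sizes, and function names below are illustrative assumptions: a market maker who completes matched round trips at a two-cent spread, against a retail trader who pays the ask and holds to resolution.

```python
# Illustrative sketch (assumed prices and sizes) of why a fully hedged
# market maker's P&L does not depend on the event outcome, while a
# directional retail position's P&L depends on nothing else.

def market_maker_pnl(pairs, bid, ask, outcome):
    """P&L from `pairs` matched buy-at-bid / sell-at-ask round trips.

    The long and short contracts settle against each other, so the
    outcome term cancels and only the spread remains."""
    settle = 1.0 if outcome else 0.0
    return pairs * (settle - bid) + pairs * (ask - settle)

def retail_pnl(contracts, price_paid, outcome):
    """P&L of an unhedged long position: all outcome, no spread."""
    settle = 1.0 if outcome else 0.0
    return contracts * (settle - price_paid)

for outcome in (True, False):
    mm = market_maker_pnl(pairs=100, bid=0.61, ask=0.63, outcome=outcome)
    rt = retail_pnl(contracts=100, price_paid=0.63, outcome=outcome)
    print(outcome, round(mm, 2), round(rt, 2))
# True 2.0 37.0
# False 2.0 -63.0
```

The market maker earns the same two dollars whether the event happens or not; the retail trader's entire result is the outcome. That is the indifference the phrase "gamma-neutral" is gesturing at, and it is why, over enough trades, the side collecting the spread does not need to be right about anything.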
Why the “Truth Machine” Story Persists
The theory that prediction markets aggregate dispersed information into accurate forecasts descends from Friedrich Hayek and was formalized for binary event contracts by Hanson in the late 1990s. The argument is elegant: participants with diverse information trade against each other, those with better information bet more, prices move toward accuracy, and the market becomes a self-correcting forecasting instrument. The theory has animated two decades of academic research and several government experiments.
It has also animated the CFTC’s willingness to defend event contracts as legitimate price-discovery mechanisms in federal court, most recently in the Kalshi litigation in the Ninth Circuit. The truth-machine story is not doing work everywhere in the regulatory framework, but it is doing work at the precise point where everything else turns: whether these instruments qualify as derivatives subject to federal oversight rather than as gaming products subject to state law.
Academic finance, however, has known since 1980 that the strong version of the story cannot be right.
Sanford Grossman and Joseph Stiglitz proved that prices cannot perfectly reflect all available information, because if they did, no one would bother to gather information, and if no one gathered information, prices could not reflect it. The only sustainable equilibrium is one in which prices are partially inefficient, with the inefficiency large enough to compensate informed traders for the cost of acquiring information. The implication for prediction markets is direct. The fact that Susquehanna is hiring traders specifically to “detect incorrect fair values” is itself proof that prices are not efficiently aggregating information. Those incorrect fair values represent money that is about to flow from whoever posted them to whoever identifies them first.
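The Grossman-Stiglitz logic can be reduced to a one-line equilibrium condition. The linear functional form and the numbers below are assumptions chosen purely for illustration: the gross gain to being informed falls as more traders become informed, and the equilibrium is the point where that gain exactly equals the cost of the information.

```python
# Stylized Grossman-Stiglitz equilibrium. The linear gain function and
# the dollar figures are illustrative assumptions, not the paper's model.

def informed_gain(fraction_informed, max_gain):
    """Assumed: the gross gain to being informed falls linearly as the
    informed share rises, because prices become more efficient."""
    return max_gain * (1.0 - fraction_informed)

def equilibrium_informed_share(max_gain, info_cost):
    """Share at which informed_gain(share) equals info_cost, so the
    marginal trader is indifferent between paying for information or not."""
    return 1.0 - info_cost / max_gain

share = equilibrium_informed_share(max_gain=0.05, info_cost=0.02)
print(round(share, 6))                        # 0.6
print(round(informed_gain(share, 0.05), 6))   # 0.02, exactly the info cost
```

At the equilibrium, the residual mispricing is just large enough to pay for information gathering. Prices are inefficient by necessity, not by accident, which is the point the text draws from Susquehanna's hiring.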
Albert Kyle’s 1985 market microstructure model describes the mechanism more specifically. In Kyle’s framework, markets contain insiders with private information, noise traders whose orders are uncorrelated with fundamentals, and market makers who set prices based on order flow. Insiders earn profits by exploiting private information. Noise traders provide camouflage that lets insider orders be filled without immediate price impact. Market makers earn a spread in exchange for absorbing inventory risk.
This is the structure of a commercial prediction market. The insider is the campaign staffer, the military source, the sports team employee, or the quant with a better model. The noise traders are the retail participants chasing headlines. The market makers are the prop desks at DRW and Susquehanna. Kyle’s framework is not a criticism of such markets. It is simply a description of how they work. The insight that matters here is that the resulting price is not a consensus estimate. It is the byproduct of a process in which information and capital flow from noise traders toward insiders, intermediated by market makers who take a cut.
The Regulatory Mistake
All of which brings us back to the CFTC.
The preemption of state consumer protection law over these markets works in two steps, and it is worth separating them because the truth-machine theory only carries weight at one of them. The first step is statutory. Once an instrument is classified as a Commodity Exchange Act contract traded on a designated contract market, federal law displaces state law as a matter of the statute’s text. Courts have read that preemption broadly. A retail participant on Kalshi cannot invoke state gambling law for the simple reason that, once the federal classification is made, the state law claim is preempted.
The second step, and the one where the truth-machine theory does real work, is the classification itself. The CEA contains a specific provision, Section 5c(c)(5)(C), that prohibits event contracts involving gaming, terrorism, assassination, war, and activity that is unlawful under state or federal law. When Kalshi and Polymarket argue that their contracts fall outside the gaming prohibition, the argument they advance, and that the CFTC has largely accepted, is that these products are not gaming because they produce valuable information about real-world events. Price discovery is what distinguishes an event contract from a sportsbook. Without that distinction, a contract on who will win the 2028 election looks identical in function to a wager placed at a licensed bookmaker.
That is where the theory is load-bearing. If prediction markets are not actually aggregating information in the way advocates claim, but are instead redistributing it from less-informed to more-informed participants, the price signals they produce are not the kind of public good that separates a derivative from a wager. The non-gaming classification becomes much harder to defend. And once the classification becomes harder to defend, the statutory preemption that flows from it becomes harder to defend as well. The preemption is not conjured by the theory, but the classification on which the preemption depends is.
The same pattern appears in the CFTC’s broader public-interest and hedging-utility analyses. The more the Commission describes these products as instruments for price discovery and risk transfer, the more the justification for federal treatment depends on the information-aggregation framing. Strip away the framing, and the forecasting and hedging rationales that make these contracts look like derivatives rather than wagers become thin.
A recent commentary in Science by Nizan Packin and Sharon Rabinovitz makes a parallel point from a public health direction. The authors argue that commercial prediction markets have evolved into gamified platforms engineered for engagement, that their thin liquidity allows even modest trades to move prices substantially, and that the resulting price signals feed back into public belief through media coverage in ways that can manufacture the appearance of consensus. The democratic-integrity concerns in their piece and the market-structure concerns in this one are two sides of the same coin.
What Retail Loses
The consumer protection framework for gambling developed over decades around a simple observation: many participants lose money faster than they intend to, and many continue to participate anyway. State licensing regimes, self-exclusion lists, loss limits, advertising restrictions, and age verification exist because sophisticated counterparties with information or model advantages will otherwise extract value from unsophisticated ones until the unsophisticated ones are depleted.
The federal derivatives framework assumes something fundamentally different. It assumes that counterparties are sophisticated actors engaged in hedging or price discovery, and that the regulator’s role is to ensure market integrity rather than to protect participants from their own decisions. When the CFTC treats event contracts as derivatives, it imports this framework wholesale. A retail participant on Kalshi is treated as a counterparty in a financial contract, not as a consumer of a gambling product. The federal classification displaces most of what state consumer protection law would otherwise require.
If the truth-machine theory were correct, this trade-off might be defensible. A modest reduction in consumer protections in exchange for a socially valuable forecasting instrument could be a reasonable policy choice. If the theory is wrong, the trade-off is not defensible. Retail participants receive less consumer protection than they would at a licensed sportsbook, in exchange for participating in a market whose primary function is to redistribute their capital to professional firms.
The Question
This is not an argument against the existence of prediction markets. It is an argument against the specific intellectual foundation on which the current regulatory treatment rests. If these markets are what their advocates claim, they should be able to withstand a more accurate description of how prices are actually formed. If they cannot, the classification should be revisited.
At a minimum, the burden should shift. The CFTC, having defended the derivatives classification in federal court, should be asked to demonstrate that the price-discovery benefits it cites are real, measurable, and proportionate to the consumer protections being displaced. The market microstructure literature gives it little to work with. The empirical record gives it less.
America’s great truth machine may in fact be reinventing Paddy Power, as FT Alphaville recently put it. It will do so with none of the consumer protections that the United Kingdom built around Paddy Power over decades, and under a legal theory that assumes the existence of an information-aggregation mechanism that the underlying economics do not support.
That is a regulatory bet worth questioning before the retail losses come in.
Sources:
Sanford Grossman and Joseph Stiglitz, On the Impossibility of Informationally Efficient Markets, American Economic Review 70 (1980).
Nizan Geslevich Packin and Sharon Rabinovitz, Prediction Markets as a Public Health Threat, Science (April 2026).
Justin Wolfers and Eric Zitzewitz, Prediction Markets, Journal of Economic Perspectives 18 (2004).
Patrick Boyle, video commentary on prediction markets, https://www.youtube.com/watch?v=e0nsou-1Q2k.
De Silva Law Offices, LLC is a Chicago-based boutique firm specializing in CFTC and NFA regulatory matters, securities law, derivatives, and event contract compliance. The firm advises platform operators, introducing brokers, commodity pool operators, commodity trading advisors, and individual market participants on the regulatory architecture of prediction markets and related derivatives, including designation and registration, contract review, enforcement defense, and regulatory classification analysis.