


“There are three kinds of lies,” Mark Twain famously wrote. “Lies, damned lies, and statistics.” But in the lively and engaging Everything is Predictable, science writer Tom Chivers seeks to lift the sagging reputation of statistics using the story and work of Thomas Bayes, an intriguing and underexplored 18th-century English preacher and mathematician whose ideas have gradually come to suffuse the field. Nowadays, Bayesian ideas inform DNA analysis, criminal justice, public health, insurance underwriting, artificial intelligence, scientific studies, and many other areas.
Bayes’s core contribution, which Chivers skillfully renders into cogent prose designed to educate the lay reader, is the notion that the likelihood of an event taking place in the future depends, in part, on prior events or assumptions. Specifically, the following equation can be described as “the probability of event A, given event B, equals the probability of B given A, times the probability of A on its own, divided by the probability of B on its own”:

P(A | B) = P(B | A) × P(A) / P(B)
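To make the formula concrete, here is a brief illustration in Python; the scenario and numbers (a condition affecting 1% of people, a 99% sensitive test, a 5% false-positive rate) are hypothetical assumptions of mine, not an example drawn from the book.

```python
# Hypothetical illustration of Bayes' theorem (assumed numbers, not from the book).
# A = "person has the condition", B = "person tests positive".
p_a = 0.01               # P(A): prior, 1% of people have the condition
p_b_given_a = 0.99       # P(B|A): test sensitivity
p_b_given_not_a = 0.05   # P(B|not A): false-positive rate

# P(B): overall probability of a positive test, true and false positives combined
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # ~0.167: even after testing positive, the condition remains unlikely
```

Even a highly accurate test, in other words, cannot outrun a low prior.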
But as Chivers points out, many contemporary studies rely on diametrically opposed logic as a means of verification. Instead of asking how likely a hypothesis is to be correct given the result of a study, as Bayes urged, they assess the probability that we would have seen the actual result if the theory were correct. As Chivers explains in depth, this “frequentist” approach, which rejected Bayes’s concepts, has bedeviled the field for centuries.
The book begins with a historical survey, and what elevates it far above a Statistics for Dummies how-to guide is its sensitive treatment of its core subject. Perhaps appropriately, as Chivers notes, “we can only say [Bayes] was probably born in 1701.” Because his family belonged to the disfavored class of Protestant dissenters known as “Nonconformists,” they were forced to conceal key biographical facts and, often, to worship in secret.
As a student at Edinburgh University, Bayes studied the theorems of giants like Pascal, Fermat, and Cardano. (Chivers adroitly walks the reader through complex statistical concepts such as normal distributions and Pascal’s probability triangle, along the way demonstrating their historical origins.) But he grew frustrated with the sampling probability approach — “How likely am I to see this data, given this hypothesis?” — prevalent in the field and pioneered an inferential model: “How likely is the hypothesis to be true given this data?” And to draw such an inference, Bayes concluded that we must take into account how strongly we held the hypothesis in the first place, which Chivers characterizes as our “subjective beliefs.”

In a classic example, imagine that someone rolled a ball across a pool table, drew an imaginary line where it came to rest, and picked up the ball. You then entered the room, rolled five additional balls along the table, and were told that two of the balls fell to the left of the imaginary line and three to the right. Where would you say the imaginary line was drawn? At first glance, you would assume it was most likely two-fifths of the way from the left of the table since two out of the five balls wound up to its left.
Bayes, however, found that the best estimate of the line’s position was actually three-sevenths of the way from the left. But why? Because before even a single additional ball was rolled, your best guess for the line’s position was the midpoint of the table, since the midpoint is the average position where the original ball would have wound up. And to account for this critical prior assumption, Bayes suggested adding a single additional ball on each side of the imaginary line. Thus, after you rolled the five balls, there would effectively be three on the left and four on the right, putting the best estimate of the line three-sevenths of the way from the left. Bayes’s theories were posthumously published in “An Essay towards solving a Problem in the Doctrine of Chances,” including the key construct defined above — “without exaggeration, perhaps the most important single equation in history,” in Chivers’s telling.
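The pool-table result can be checked with a short simulation; this sketch is my own illustration, not code from the book. It treats the line’s unknown position as equally likely to be anywhere on the table, keeps only the simulated runs in which exactly two of five balls land to the line’s left, and averages the line’s position across those runs.

```python
import random

def estimate_line_position(trials=500_000):
    """Simulate Bayes's pool-table thought experiment by rejection sampling."""
    kept = []
    for _ in range(trials):
        line = random.random()  # unknown line position, uniform across the table (the prior)
        left = sum(random.random() < line for _ in range(5))  # roll five more balls
        if left == 2:           # keep only runs matching the observed data: 2 left, 3 right
            kept.append(line)
    return sum(kept) / len(kept)

print(round(estimate_line_position(), 3))  # ~0.429, i.e. three-sevenths, not two-fifths (0.4)
```

The average lands on roughly 0.429, three-sevenths of the way across, just as the add-one-ball-to-each-side shortcut predicts.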
Yet Bayes’s breakthroughs never fully broke through, partly because the “frequentist” school grew ascendant, as the bedrock metric for a study’s validity became how likely the unusual results would seem if the underlying theory were not true — the so-called “null hypothesis.” Chivers posits that the contemporary obsession with the “p-value,” the probability of seeing results at least as extreme as those observed if the null hypothesis were true, has fed the widely lamented “replication crisis” in the social and hard sciences, first identified in the early 2010s. Instead, he argues, “If you want to measure how likely it is that your hypothesis is true, you simply cannot avoid priors. You need Bayes’ theorem.” Indeed, one reason Bayesian logic is more robust than its frequentist alternative is that it incorporates the frequentist question, how likely the data are under a given hypothesis, as just one ingredient in its calculation.
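A back-of-the-envelope calculation shows why priors matter even when a result clears the p < 0.05 bar. The numbers here (1 in 20 tested hypotheses being true, 80% statistical power) are illustrative assumptions of mine, not figures from Chivers.

```python
# Illustrative assumptions, not figures from the book.
prior_true = 0.05   # P(hypothesis true) before the study: 1 in 20
power = 0.80        # P(significant result | hypothesis true)
alpha = 0.05        # P(significant result | hypothesis false), the p < 0.05 threshold

# Bayes' theorem: P(hypothesis true | significant result)
p_significant = power * prior_true + alpha * (1 - prior_true)
posterior = power * prior_true / p_significant
print(round(posterior, 3))  # ~0.457: a "significant" finding is still more likely false than true
```

On those assumptions, more than half of the “significant” findings would be false positives, one way of seeing how a replication crisis can arise.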
But where do we get these priors from? And isn’t science supposed to proceed without preconceived notions? Actually, no, argue today’s Bayesians. In fact, most scientific studies build on their predecessors, rigorously testing and updating theories as new data pour in. Chivers frankly acknowledges that identifying useful priors isn’t a “trivial or obvious task,” and he concedes that some starting points — e.g., the concept of gravity — are virtually unassailable. But he persuasively demonstrates how “it doesn’t take all that much evidence to shift you away from even very, very strong priors.”
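A small calculation, again with numbers of my own choosing rather than the book’s, illustrates that last point: start from a 1-in-1,000 prior against a hypothesis, and a handful of observations that each favor it ten to one is enough to swing the odds.

```python
# Illustrative assumptions, not figures from the book: Bayesian updating in odds form.
odds = 1 / 999           # prior odds: a 1-in-1,000 chance the hypothesis is true
likelihood_ratio = 10    # each observation is 10 times likelier if the hypothesis is true

for _ in range(4):       # four independent observations
    odds *= likelihood_ratio

probability = odds / (1 + odds)
print(round(probability, 3))  # ~0.909: modest evidence has overwhelmed a very strong prior
```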
We use Bayesian logic all the time in real life. As evidence, Chivers cites the “gaze heuristic,” or the way that outfielders track fly balls, World War II fighter pilots monitor bombers, and Sidewinder missiles intercept enemy aircraft by constantly, inductively adjusting their aim as the gap closes. Philip Tetlock, the psychologist who has been tracking “superforecasters” for decades, found that predictors who use base rates and continuously update their forecasts have consistently outperformed non-Bayesians.
The brain itself, Chivers contends, operates in a Bayesian way. “The central thing the brain does,” he asserts, “is build predictions of the world, which it then integrates with information coming in via the senses.” Taste, sight, smell, hearing, touch, balance, and location — all of these provide real-time input into assessing a new situation, a process that begins with our priors. Imagine encountering white asparagus for the first time: Your visual priors suggest this curious vegetable is something new, but your senses of smell and taste overcome those priors and indicate it’s a different type of asparagus. Along the way, the brain’s dopamine system rewards successful perceptions and punishes failed ones.
Once you begin to examine life through a Bayesian lens, you appreciate just how much of our existence utilizes its principles. Everything from email spam filters to natural selection to epidemic monitoring falls under its sway. And even a notion as straightforward as the decreasing mental flexibility that comes with age — the older we get, the stronger our priors grow, and the more powerful the evidence required to dislodge them — can be explained in terms of Bayes.
But perhaps the most important lesson we can draw from Chivers’s splendid book involves epistemic humility: Under Bayesian theory, hypotheses aren’t right or wrong, one or zero, or on or off. Instead, they enjoy confidence levels that fluctuate appropriately as the evidence pours in. Intellectual modesty, perceptive balance, and common sense: What’s not to like about the Rev. Bayes?
Michael M. Rosen is an attorney and writer in Israel and a nonresident senior fellow at the American Enterprise Institute.