Data vs theory: the mathematical battle for the soul of physics

Introduction

These are exciting times for the field of physics. In 2012, researchers announced the discovery of the Higgs boson, a discovery four decades in the making, costing billions of dollars (and euros, pounds, yen and yuan) and involving some of the best minds on the planet. And in December 2015, researchers at the Large Hadron Collider in Europe announced that two separate experiments had recorded possible traces of a new particle, one that might lie outside the Standard Model, although much more data and scrutiny will be required before anything definite can be said.

Yet behind the scenes a far-reaching battle has been brewing. The battle pits leading proponents of string theory and the multiverse, on the one hand, against skeptics who argue that physics is parting ways with principles of empirical testability and falsifiability that have been the hallmarks of scientific research for at least a century.

String theory

String theory, as physicist Brian Greene explains in his book and TV show The Elegant Universe, posits that all matter and fundamental forces can be thought of as vibrating strings and “branes” roughly 10⁻³³ cm in size (some 25 orders of magnitude smaller than an atom). Most formulations of string theory inhabit a space of ten or eleven dimensions; the reason we only see three dimensions of space and one of time is that the other dimensions are curled up to submicroscopic size, much as a garden hose looks one-dimensional from afar, because its circular cross-section is much smaller than its length.

String theory arose in the 1960s and 1970s as a theory of hadrons, and was later extended to incorporate fermions. In 1974 John Schwarz and Joel Scherk concluded that string theory could also be formulated as a theory of gravity, thus achieving the long-sought unification of gravity with the other fundamental forces and particles. Early efforts splintered into different directions, but in 1995 Edward Witten showed that five different formulations of string theory were really just different manifestations of a single, fundamental “M theory.”

The multiverse

One other player on the “pro” side of the debate is the multiverse. The multiverse arose out of efforts to explain some of the cosmic coincidences that suggest a fine-tuned universe. For example, if gravitation had been very slightly stronger in the early universe, the expansion would have stopped and even reversed long ago, ending the universe in a big crunch long before any sentient creatures would have arisen. On the other hand, if gravitation had been very slightly weaker, stars and galaxies might not have formed until matter was too dispersed, leaving the universe a cold and lifeless place.

Arguably the most striking example is the cosmological constant paradox. When one calculates the vacuum energy density of the universe, based on known principles of quantum mechanics, one obtains the incredible result that empty space “weighs” 10⁹³ grams/cc, whereas the actual density is 10⁻²⁸ grams/cc, a discrepancy of roughly 120 orders of magnitude (often called the worst prediction in physics). There is a close connection between the vacuum energy density and the cosmological constant of Einstein’s relativity.
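To spell out that connection (a sketch in one common convention; some authors absorb a factor of c² into the definition of Λ): the cosmological constant is proportional to the vacuum energy density, and the mismatch quoted above is just the ratio of the two densities:

    \[ \Lambda \,=\, \frac{8\pi G}{c^2}\,\rho_{\mathrm{vac}}, \qquad
       \frac{\rho_{\mathrm{theory}}}{\rho_{\mathrm{observed}}}
       \,\approx\, \frac{10^{93}\ \mathrm{g/cm^3}}{10^{-28}\ \mathrm{g/cm^3}}
       \,=\, 10^{121}, \]

or roughly 120 orders of magnitude.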

Physicists, who have fretted over this paradox for decades, have noted that calculations such as the above involve only the electromagnetic force, and so perhaps when the contributions of the other known forces are included (bosons give rise to positive terms, whereas fermions give rise to negative terms), all terms will cancel out to exactly zero. But these hopes were shattered with the 1998 discovery that the expansion of the universe is accelerating, which implies that the cosmological constant (and the vacuum energy density) must be slightly nonzero. This means that physicists are left to explain the startling fact that the positive and negative contributions to the energy density cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit.

Efforts to demonstrate that string theory reduces to just one compelling theory have failed, in part because the underlying Calabi-Yau spaces admit some 10⁵⁰⁰ different possible topologies.

But Susskind and others see lemonade in lemons here, proposing that the many Calabi-Yau spaces of string theory are themselves the explanation of the cosmic coincidences — there really are 10⁵⁰⁰ different universes (a “multiverse”), and the reason the laws of physics appear so finely tuned for intelligent life in our universe is simply that if they weren’t, we wouldn’t be here to talk about it (a line of reasoning often termed the anthropic principle). In other words, with so many universes to choose from, inevitably one of them (ours) beats the 1-in-10¹²⁰ odds and is friendly to intelligent life.
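A quick back-of-the-envelope count shows why the sheer number of vacua does the work in this argument (our illustration; it assumes, contestably, that life-friendliness is no rarer than 1 in 10¹²⁰ among the vacua):

    \[ N_{\text{life-friendly}} \;\approx\; 10^{500} \times 10^{-120} \;=\; 10^{380}, \]

so even at such staggering odds, an enormous number of life-friendly universes would be expected somewhere in the ensemble.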

Detractors

But these eminent theoreticians also have their detractors, who argue that the field of physics can no longer afford to pursue speculative lines of research that, as far as anyone can see at the present time, cannot be empirically tested. String theory has yet to produce any prediction that can be subjected to empirical test (e.g., a prediction of the masses of current or yet-to-be-discovered particles), and the multiverse may be fundamentally beyond the realm of empirical test. Even the inflation theory of the big bang has been criticized of late as no longer scientific, because it is so flexible that it can accommodate any observational result.

With regard to string theory, Peter Woit of Columbia University, author of Not Even Wrong, writes,

The possible existence of, say, 10⁵⁰⁰ consistent different vacuum states for superstring theory probably destroys the hope of using the theory to predict anything. If one picks among this large set just those states whose properties agree with present experimental observations, it is likely there still will be such a large number of these that one can get just about whatever value one wants for the results of any new observation.

On 16 December 2014, George Ellis (co-author with Hawking of The Large-Scale Structure of Space-Time) and Joseph Silk (author of The Infinite Cosmos) jointly wrote a Nature article decrying developments in string theory and the multiverse, warning that recent debates in physics have taken a “worrying turn.”

They note that proponents of string theory and the multiverse, faced with the failure of efforts to apply these theories to the real observed universe, have begun to argue that empirical testing should not be required: if a theory is sufficiently “elegant” and free of internal contradictions, that should be reason enough to pursue it. This amounts to walking away from the requirement, paramount since the writings of Karl Popper in the mid-twentieth century, that a theory must be empirically falsifiable to qualify as scientific.

Ellis and Silk are particularly concerned about those, such as Richard Dawid, who have of late begun to apply Bayesian statistical analysis to purely philosophical propositions: the argument runs that because no one has found a good alternative to, say, string theory, and because theories without alternatives have in the past tended to prove viable, these facts should be taken as “evidence” in support of string theory.
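A toy calculation may make the structure of this reasoning concrete. The sketch below is ours, not Dawid’s (his argument is qualitative), and the prior and likelihood ratios are invented purely for illustration:

    # Toy Bayesian update illustrating the "no alternatives" argument.
    # All numeric inputs are invented for illustration.

    def update(prior: float, likelihood_ratio: float) -> float:
        """Posterior P(theory viable) after evidence with likelihood ratio
        P(evidence | viable) / P(evidence | not viable)."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    p = 0.10                # skeptical prior that string theory is viable
    p = update(p, 3.0)      # evidence 1: no good alternative has been found
    p = update(p, 2.0)      # evidence 2: past theories without alternatives prevailed
    print(f"posterior: {p:.2f}")   # prints 0.40

The arithmetic itself is uncontroversial; what Ellis and Silk dispute is whether likelihood ratios grounded in philosophical judgments, rather than in measurements, should count as scientific evidence at all.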

In the end, Ellis and Silk conclude,

To state that a theory is so good that its existence supplants the need for data and testing in our opinion risks misleading students and the public as to how science should be done and could open the door for pseudoscientists to claim that their ideas meet similar requirements. … The imprimatur of science should be awarded only to a theory that is testable. Only then can we defend science from attack.

This debate came to a head at a recent workshop held in Munich, Germany. Nobel laureate David Gross urged that we engage philosophers in these discussions. Dawid defended his Bayesian approach. And Ellis and Silk emphasized the dangers of the relaxed approach.

Whither science?

The present authors concur with Ellis and Silk that one must hold the line on empirical testing and falsifiability. As fond as we are of mathematics in general and elegant mathematics in particular, such considerations cannot substitute for empirical evidence in judging physical theories. The space of mathematical structures is simply far too rich and vast for one to think that string theory, for instance, is “the only game in town.”

For example, Ptolemy’s system of spheres and epicycles was almost universally regarded as both elegant and self-evident for 1500 years, yet it fell to the modern cosmology of Copernicus, Galileo, Kepler and Newton. Indeed, early predictions of the new theory were less accurate than the highly tuned predictions of the old one. Likewise, Newton’s physics was considered both elegant and self-evident for 300 years before it fell to relativity and quantum mechanics.

In each case the paradigm shift required a mixture of new and compelling theory and, sooner or later, supporting observations. Inarguably, string theory has the first but not the second. How is that different from natural philosophy, i.e., science in its original and now deprecated sense?

It is also ironic that, in an era when mathematics itself is becoming more experimental, with experimentation by computer now considered fully complementary to rigorous proof, theoretical physics is becoming more focused on pure mathematical theory and less on experimental evidence.

The downsides

We should keep firmly in mind that attempts to dilute the strict requirements of experimental science, or to allow “philosophical” or other non-technical grounds to hold sway, have led to disasters such as:

  • 60 years after the advent of radiometric dating, which has incontestably established that the earth is billions of years old and its fossil layers many millions of years old, at least 30 million Americans still believe the earth to be just a few thousand years old.
  • 150 years after Darwin, in an era of full-genome DNA sequencing that has provided overwhelming evidence of common ancestry of related species, at least 75 million Americans question whether species on earth today are the product of evolution.
  • Just weeks after the December 2015 Paris meeting on climate change, and in the wake of at least 15 years of scientific consensus that the earth is warming due at least in part to human activities, seven current candidates for the U.S. presidency either deny climate change altogether, or deny that humans have significantly exacerbated it.
  • Despite endless assurances, for example by the U.S. Centers for Disease Control and Prevention, that vaccinations do not cause autism or other childhood disorders, more than half of the U.S. public either believe that vaccinations cause autism or are unsure whether they do.
  • Friend-to-friend marketing of essential oils is exploding in popularity, both in the U.S. and internationally. Among the utterly unsubstantiated health claims made for these products are that they cure Ebola, bacterial infections, cancer, brain injury, autism, endometriosis, Graves’ disease, Alzheimer’s disease and ADD/ADHD, and that they shrink tumors.

In short, we concur with Ellis and Silk that the only way to keep these and numerous other pseudosciences at bay is to hold fast to the high ground of empirical testing.

Along this line, it is hard to resist the conclusion that the eminence of some of its proponents has given superstring theory a ‘free pass.’

This does not mean that all research in string theory and the multiverse must stop. But the practitioners of these fields should recognize that the chips are down: they cannot exist much longer as science if they cannot at least establish some crisp, testable connections with the real world of scientific data and analysis. They should not be given a free pass for all time.
