An Imperfect Truth

Immutable laws can’t capture the universe’s messy complexity

David Orrell

The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy

Roberto Mangabeira Unger and Lee Smolin

Cambridge University Press

566 pages, hardcover

ISBN: 9781107074064

When the Perimeter Institute—a physics research institute in Waterloo, Ontario—decided several years ago to build an extension, they asked the architects “to provide the optimal environment for the human mind to conceive of the universe.” Clearly the results were effective. One of the founding faculty members, Lee Smolin, and the Harvard philosopher Roberto Mangabeira Unger have conceived of the universe, and have come to a number of conclusions about it, including that a) there is only one of it, and b) time is real, so all things change, mutate and get old.

To non-scientists, these main points, summed up in the book’s title, might seem unremarkable. We only have experience of one universe, so would it not be presumptuous to assert that there are more? Time is clearly real enough (just ask an editor). And Heraclitus pointed out a long time ago that “everything flows.” So how can the authors assert that the admission of mutability “is astonishing in the reach of its implications”?

The reason for this lofty claim is that, when time is taken seriously, we need to relax the commonly held assumption that the universe is governed by strict mathematical rules based on immutable symmetries. This mathematical version of reality—the product of generations of scientists—is only “a proxy for our world, a counterfeit version of it, a simulacrum.”

The book, which weighs in at more than 500 pages, is a kind of one-two punch aimed at scientific orthodoxy. The first part is written by Unger, and lays out the philosophical framework, exploring the nature of time, and showing how causality can exist without the explanatory crutch of mathematical law. The second part by Smolin concentrates on cosmological models and proposes a research agenda. There is also a section at the end where the authors hash out their differences—Unger coming across as slightly more radical in his opinions, and less interested in reaching for tidy solutions such as Smolin’s “cosmological natural selection” (imagine Darwin’s version but with black holes as mothers, and no sex) to explain the evolution of the universe.

Illustration by Drew Shannon

The book is best understood as a reaction to a long-dominant belief in science that the universe can be reduced to numbers and precisely represented—in principle at least—by a fixed set of mathematical equations. The authors identify this approach as the “Newtonian paradigm,” although its roots and inspiration go back to Greek philosophy. (The Pythagorean sect of the 6th century BC went even further than most modern scientists, in that they believed the universe was actually composed of numbers.) The Greeks built sophisticated models of the cosmos, in which the heavenly bodies, encased in crystalline spheres, moved in perfect mathematical orbits around the Earth. This picture went almost unquestioned until the Renaissance, when Copernicus proposed that the Earth might go around the Sun, rather than vice versa. In his 1687 Principia, Isaac Newton combined Kepler’s laws of planetary motion with Galileo’s study of the motion of falling objects to derive his three laws of motion and the law of gravity.

The heavenly bodies, rather than being stuck in fixed orbits, were therefore seen as obeying dynamical laws. However, as the authors point out, the laws themselves were now perceived as fixed and eternal. Instead of crystalline spheres, we had crystalline mathematical equations, which lived like Plato’s forms outside the corporeal world. Time was treated as a kind of inert background, which was assumed to flow in a particular direction, but otherwise had no direct connection with reality. The universe was a machine, whose parts unfolded according to strict rules.

This Newtonian, mechanistic model supplied a program that scientists have followed, with enormous success, until the present day. However, there were a number of problems lurking in the mathematical firmament. One puzzling feature of the Newtonian model—which sees the world “through the lens of timeless laws of nature”—was that none of those laws said that time should flow in a particular direction. Indeed, one of the great appeals of mathematics is exactly that it exists outside of time. Numbers are golden: they never change or decay.

In the early 20th century, challengers to Newtonian theory appeared with the development of quantum physics and Einstein’s theory of relativity. The first implied that nature was inherently uncertain, described only by probabilistic wave functions. This presented a number of difficult philosophical questions, such as what a probabilistic wave function is made of (ether?), but at least it preserved the idea that the universe could be modelled by mathematical equations. It was just that now, the equations described probabilities instead of certainties. Relativity theory, meanwhile, handled time by treating it as something like an extra dimension of space.

Both theories therefore modified the Newtonian, mechanistic world view while leaving intact the core assumption that nature is governed by immutable laws. One problem was that they appeared inconsistent with one another—apply relativity theory at quantum scales, as in a black hole, or for that matter the universe’s kick-off Big Bang event, and the equations blow up. One of the cherished goals of science is a unified theory, and it was therefore assumed that these inconsistencies would eventually be resolved in a grand theory of everything.

After the Second World War, some progress was made toward this goal as physicists managed to extend quantum theory to a near-complete description of all the known particles of nature under a single mathematical framework. As with the Greek model of the cosmos, the so-called standard model was based on the idea of spheres, although here the spheres were in the abstract plane of mathematical group theory. The crowning glory of this model was the discovery at the Large Hadron Collider of the Higgs boson, which was the model’s last missing piece. However, the standard model still had numerous problems. It contained about 30 arbitrary constants, describing things such as the strength of forces, whose values had to be precisely set in order for the theory to work (the so-called fine-tuning problem). It was based on pristine mathematical symmetries, but the actual implementation was baroquely complicated. And it had nothing to say about the force with which we have the most direct experience—gravity. Fortunately, there was another, even more elegant and symmetric theory waiting in the wings that appeared to offer a solution: supersymmetric string theory.

String theory models particles such as electrons not as particles at all, but as very small strings oscillating in a ten-dimensional space. The tuning of the string, i.e., the nature of its oscillations, is what distinguishes particle types. As a kind of unexpected bonus, the theory threw out a prediction that every type of particle should have a supersymmetric partner. It was originally hoped that some of these new entities would be detected at the LHC, but so far no luck.

While string theory appeared to be a candidate for the famous theory of everything, it too had its flaws, and they were big (see Smolin’s The Trouble with Physics: The Rise of String Theory, the Fall of a Science and What Comes Next). One was a complete lack of empirical evidence for higher dimensions, new particles or anything else predicted by the theory. Another was that the number of unknown parameters exploded. It seemed that string theory was less a theory than a collection of possible models. The vast majority of these models would not describe a universe anything like our own. It was the fine-tuning problem with bells on. Instead of admitting defeat, though, theorists decided to take this multiplicity at face value. Maybe ours is just one among a great number of universes—we are talking 10 to the power of 500 or even infinitely many—most of which would never support life, or much else. The flexibility of string theory would be turned from a problem into an asset, and would “convert the explanatory embarrassment into explanatory triumph.”

Enter the multiverse. Instead of one Big Bang, there is actually a continuous sequence of such events, with each universe running a different version of the model. According to perhaps its most extreme version, the Ultimate Ensemble theory of the MIT cosmologist Max Tegmark, each universe is not just characterized, but is actually defined by a unique set of mathematical equations. Conversely, any set of mathematical equations can have a corresponding universe. Number is the universal medium, just as Pythagoras said, and ours is just one of the stories written on it.

Of course, the idea that there are a huge number of universes presents something of a slippery slope. The Oxford philosopher Nick Bostrom pointed out that at least some of those universes would have spawned civilizations that are capable of making a simulated universe, as in the Matrix films; and because simulations are cheap and easy to reproduce, it is likely they would have made multiple copies, as one does. So, logically, there should be many more simulated universes than ordinary ones. There is a “significant probability,” according to Bostrom, that “you exist in a virtual reality simulated in a computer built by some advanced civilisation.”

On the surface, with its embrace of multiplicity—extra universes, extra particles, extra dimensions on the side—and total lack of supporting evidence, multiverse theory appears to be a radical departure from normal mechanistic science. However, in many respects it is better seen as a continuation of that theme. It still views the universe as being described by a single, fixed set of mathematical equations—a unique theory of everything. It is just that now, there are a lot of everythings. It unifies the equations, which in this view are the important thing, at the expense of making multiple copies of reality. As one of its admirers, Richard Dawkins, wrote, “the multiverse may seem extravagant,” but if “each one of those universes is simple in its fundamental laws, we are still not postulating anything highly improbable.”

Multiverse theory is not universally popular (at least in this universe), but even its critics have usually tried to maintain the idea of a unified theory of everything based on mathematics. It is against this foundational belief, which stretches from the ancient Greeks to the LHC, that Unger and Smolin direct their aim. If time is taken seriously, there is no reason to believe that the universe is governed by immutable laws, since any such laws would be part of the evolving universe and so would themselves be in flux. Rather than “seek in mathematics what we have so far failed to find in nature,” a historical approach is required: “the history of the universe witnesses the occasional emergence of new structures, new phenomena, and new forms of change.”

The problem of the fine-tuning of model constants, for example, can be addressed by assuming not that they are set randomly in different universes, with us fortunate enough to live in a “biofriendly” result, but that the constants evolve over time, so they are not really constant. Perhaps gravity is not what it used to be; perhaps it is more complex than we thought.

According to Unger and Smolin, models are best seen as imperfect patches that fit one portion of a complex reality in a “provisional and approximate” fashion, rather than as the basis of reality. Their central thesis therefore addresses the old question of whether mathematical laws are human inventions or exist independently of us, and parallels Unger’s legal scholarship, where he argues that laws are embedded in, and a product of, social life. We are not machines or “gadgets” ruled over by abstract law, and neither is the universe.

Allowing laws to change means that fundamental problems, such as those that surround singularities such as the Big Bang, can be addressed in a new way, and Smolin’s section suggests some ideas along those lines. Of course, critics will argue that, rather like multiverse theory, this just introduces more flexibility, which makes the theory difficult to falsify. The debate is therefore set to run for a long time.

While I found the main thrust of the book both interesting and convincing—I like the idea of a universe which makes up the rules as it goes along—I have a couple of critiques. The authors mention that “this book is not an essay in popular science” and emphasize that they have nowhere “deliberately compromised the formulation of the ideas to make them more accessible.” Which is fair enough, if somewhat in tension with the dust jacket’s claim that the argument is “readily accessible to non-scientists.” However, readability is not necessarily at odds with accuracy, and the text in places (particularly the first part) seems unnecessarily convoluted and repetitive.

Also, while the authors touch on the “aesthetic element” in mathematics toward the end of the book, they downplay its role as a potent influencer of scientific theories. Why do scientists pursue invariant theories, symmetries (or supersymmetries), elegant mathematical equations and unified theories of everything, with a fervour that often overrides the need for experimental support? The powerful attraction such theories hold is not unrelated to the fact that—since the time of the ancient Greeks—they have been considered beautiful. They are the supermodels of science. This ideal of eternal beauty goes back to the Pythagoreans, who went so far as to associate stability and symmetry with good, and mutability and asymmetry with evil. Smolin describes current models as being aesthetically “preposterous” but, while that may be true of the implementation, the underlying ideas in traditional science, with their rotating spheres and musical strings, have always been the epitome of austere, timeless elegance. The impulse to preserve an aesthetically correct model by assuming that prediction errors are due to randomness in the inputs is common not just in multiverse theory, but wherever the mechanistic paradigm is applied to complex systems, from weather forecasting (ensemble predictions) to finance (risk analysis techniques based on equilibrium and efficiency).

Here I should declare an interest, having argued in my book Truth or Beauty: Science and the Quest for Order that modern physics, with its obsessive search for perfect symmetries, immutable laws and unified theories, is—like a modern harmony of the spheres—driven largely by this sense of aesthetics. The quest for beauty particularly affects the theories constructed about things that are difficult or impossible to empirically test, which includes the number of universes (who knows, perhaps our world is an app on some alien’s phone).

The difference between the universe portrayed in this book and something like multiverse theory could itself be phrased in terms of aesthetics, with Unger and Smolin’s version more in tune with the Zen Buddhist aesthetic of wabi-sabi, where beauty is seen as imperfect, impermanent, incomplete and asymmetrical. Like the world it describes, if on a somewhat smaller scale, this book is a singular and impressive effort, and it should age well.

David Orrell is a writer and applied mathematician. His latest book is Truth or Beauty: Science and the Quest for Order (Oxford University Press, 2012).
