Friday, March 31, 2017

Book rant: “Universal” by Brian Cox and Jeff Forshaw

Universal: A Guide to the Cosmos
Brian Cox and Jeff Forshaw
Da Capo Press (March 28, 2017)
(UK Edition, Allen Lane (22 Sept. 2016))

I was meant to love this book.

In “Universal” Cox and Forshaw take on astrophysics and cosmology, but rather than following the well-trodden historical path, they offer do-it-yourself instructions.

The first chapters of the book start with everyday observations and simple calculations, with the help of which the reader can estimate, for example, the radius of the Earth and its mass, or – if you let a backyard telescope with a 300mm lens and an equatorial mount count as everyday items – the distance to other planets in the solar system.
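To give a flavor of the kind of back-of-the-envelope reasoning involved, here is a minimal sketch of the classic shadow-angle estimate of the Earth’s radius, followed by a mass estimate from the surface acceleration. The numbers and the method are generic illustrations and not taken from the book.

import math

# Eratosthenes-style estimate: if the Sun is directly overhead at one place
# while casting a shadow at angle theta at a second place a known distance d
# further north, then theta/360 is the fraction of the full circle that d
# subtends, which gives the circumference directly.
theta_deg = 7.2       # shadow angle in degrees (illustrative value)
d_km = 800.0          # north-south distance between the two sites in km

circumference_km = d_km * 360.0 / theta_deg
radius_km = circumference_km / (2.0 * math.pi)

# With the radius in hand, the mass follows from the measured surface
# acceleration g and Newton's law of gravitation, g = G*M/R^2.
G = 6.674e-11         # gravitational constant in m^3 kg^-1 s^-2
g = 9.81              # surface acceleration in m/s^2
R_m = radius_km * 1000.0
mass_kg = g * R_m**2 / G

print(round(radius_km))      # roughly 6400 km
print(f"{mass_kg:.2e}")      # roughly 6e24 kg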

Then, the authors move on to distances beyond the solar system. With that, self-made observations understandably fade out, but are replaced with publicly available data. Cox and Forshaw continue to explain the “cosmic distance ladder,” variable stars, supernovae, redshift, solar emission spectra, Hubble’s law, and the Hertzsprung-Russell diagram.

Set apart from the main text, the book has “boxes” (actually pages printed white on black) with details of the example calculations and the science behind them. The first half of the book reads quickly and fluidly and reminds me in style of school textbooks: the authors make an effort to illuminate the logic of scientific reasoning, with some historical asides and concrete numbers. Along the way, Cox and Forshaw emphasize that the great power of science lies in the consistency of its explanations, and they highlight the necessity of taking into account uncertainty both in the data and in the theories.

The only thing I found wanting in the first half of the book is that they use the speed of light without explaining why it’s constant or where to get it from, even though that too could have been done with everyday items. But then maybe that’s explained in their first book (which I haven’t read).

For me, the fascinating aspect of astrophysics and cosmology is that it connects the physics of the very small scales with that of the very large scales, and allows us to extrapolate both into the distant past and future of our universe. Even though I’m familiar with the research, it still amazes me just how much information about the universe we have been able to extract from the data in the last two decades.

So, yes, I was meant to love this book. I would have been an easy catch.

Then the book continues to explain the dark matter hypothesis as a settled fact, without so much as mentioning any shortcomings of LambdaCDM, and not a single word on modified gravity. The Bullet Cluster is, once again, used as a shut-up argument – a gross misrepresentation of the actual situation, which I previously complained about here.

Inflation gets the same treatment: It’s presented as if it’s a generally accepted model, with no discussion given to the problem of under-determination, or whether inflation actually solves problems that need a solution (or solves the problems period).

To round things off, the authors close the final chapter with some words on eternal inflation and bubble universes, making a vague reference to string theory (because that’s also got something to do with multiverses you see), and then they suggest this might mean we live in a computer simulation:

“Today, the cosmologists responsible for those simulations are hampered by insufficient computing power, which means that they can only produce a small number of simulations, each with different values for a few key parameters, like the amount of dark matter and the nature of the primordial perturbations delivered at the end of inflation. But imagine that there are super-cosmologists who know the String Theory that describes the inflationary Multiverse. Imagine that they run a simulation in their mighty computers – would the simulated creatures living within one of the simulated bubble universes be able to tell that they were in a simulation of cosmic proportions?”
Wow. After all the talk about how important it is to keep track of uncertainty in scientific reasoning, this idea is thrown at the reader with little more than a sentence which mentions that, btw, “evidence for inflation” is “not yet absolutely compelling” and there is “no firm evidence for the validity of String Theory or the Multiverse.” But, hey, maybe we live in a computer simulation, how cool is that?

Worse than demonstrating slippery logic, their careless portrayal of speculative hypotheses as almost settled is dumb. Most of the readers who buy the book will have heard of modified gravity as dark matter’s competitor, and will know the controversies around inflation, string theory, and the multiverse: It’s been all over the popular science news for several years. That Cox and Forshaw don’t give space to discussing the pros and cons in a manner that at least pretends to be objective will merely convince the scientifically-minded reader that the authors can’t be trusted.

The last time I thought of Brian Cox – before receiving the review copy of this book – it was because a colleague confided to me that his wife thinks Brian is sexy. I managed to maneuver around the obviously implied question, but I’ll answer this one straight: The book is distinctly unsexy. It’s not worthy of a scientist.

I might have been meant to love the book, but I ended up disappointed about what science communication has become.

[Disclaimer: Free review copy.]

Monday, March 27, 2017

Book review: “Anomaly!” by Tommaso Dorigo

Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab
Tommaso Dorigo
World Scientific Publishing Europe Ltd (November 17, 2016)

Tommaso Dorigo is a familiar name in the blogosphere. Over at “A Quantum Diaries Survivor,” he reliably comments on everything going on in particle physics. Located in Venice, Tommaso is a member of the CMS collaboration at CERN and was part of the CDF collaboration at the Tevatron – a US particle collider that ceased operation in 2011.

Anomaly! is Tommaso’s first book, and it chronicles his time in the CDF collaboration from the late 1980s until 2000. This covers the measurement of the mass of the Z-boson, the discovery of the top-quark, and the – eventually unsuccessful – search for supersymmetric particles. In his book, Tommaso weaves together the scientific background about particle physics with brief stories of the people involved and their – often conflict-laden – discussions.

The first chapters of the book contain a brief summary of the standard model and quantum field theory and can be skipped by those familiar with these topics. The book is mostly self-contained in that Tommaso provides all the knowledge necessary to understand what’s going on (with a few omissions that I believe don’t matter much). But the pace is swift. I sincerely doubt a reader without background in particle physics will be able to get through the book without re-reading some passages many times.

It is worth emphasizing that Tommaso is an experimentalist. I think I hadn’t previously realized how much the popular science literature in particle physics has, so far, been dominated by theorists. This makes Anomaly! a unique resource. Here, the reader can learn how particle physics is really done! From the various detectors and their designs, to parton distribution functions, to triggers and Monte Carlo simulations, Tommaso doesn’t shy away from going into all the details. At the same time, his anecdotes showcase how a large collaboration like CDF – with more than 500 members – works.

That having been said, the book is also somewhat odd in that it simply ends without summary, conclusion, or outlook. Given that the events Tommaso writes about date back 30 years, I’d have been interested to hear whether something has changed since. Is the software development now better managed? Is there still so much competition between collaborations? Is the relation to the media still as fraught? I got the impression an editor pulled the manuscript out from under Tommaso’s still-typing fingers because no end was in sight 😉

Besides this, I have little to complain about. Tommaso’s writing style is clear and clean, and also in terms of structure – mostly chronological – nothing seems amiss. My major criticism is that the book doesn’t have any references, meaning the reader is stuck there without any guide for how to proceed in case he or she wants to find out more.

So should you, or should you not, buy the book? If you’re considering becoming a particle physicist, I strongly recommend you read this book to find out if you fit the bill. And if you’re a science writer who regularly reports on particle physics, I also recommend you read this book to get an idea of what’s really going on. All the rest of you I have to warn that while the book is packed with information, it’s for the lovers. It’s about how the author tracked down a factor of 1.25^2 to explain why his data analysis came up with 588 rather than 497 Z \to b\bar b decays. And you’re expected to understand why that’s exciting.

On a personal note, the book brought back a lot of memories. All the talk of Herwig and Pythia, of Bjorken-x, rapidity and pseudorapidity, missing transverse energy, the CTEQ tables, hadronization, lost log-files, missed back-ups, and various fudge-factors reminded me of my PhD thesis – and of all the reasons I decided that particle physics isn’t for me.

[Disclaimer: Free review copy.]

Wednesday, March 22, 2017

Academia is fucked-up. So why isn’t anyone doing something about it?

A week or so ago, a list of perverse incentives in academia made the rounds. It offers examples like “rewarding an increased number of citations,” which – instead of encouraging work of high quality and impact – results in inflated citation lists, an academic tit-for-tat that has become standard practice. Likewise, rewarding a high number of publications doesn’t produce more good science, but merely finer slices of the same science.

Perverse incentives in academia.
Source: Edwards and Roy (2017).

It’s not like perverse incentives in academia are news. I wrote about this problem ten years ago, referring to it as the confusion of primary goals (good science) with secondary criteria (like, for example, the number of publications). I later learned that Steven Pinker made the same distinction for evolutionary goals, referring to it as ‘proximate’ vs ‘ultimate’ causes.

The difference can be illustrated in a simple diagram (see below). A primary goal is a local optimum in some fitness landscape – it’s where you want to go. A secondary criterion is the first approximation for the direction towards the local optimum. But once you’re on the way, higher-order corrections must be taken into account, otherwise the secondary criterion will miss the goal – often badly.


The number of publications, to come back to this example, is a good first-order approximation. Publications demonstrate that a scientist is alive and working, is able to think up and finish research projects, and – provided the papers are published in peer-reviewed journals – that their research meets the quality standard of the field.

To second approximation, however, increasing the number of publications does not necessarily also lead to more good science. Two short papers don’t contain as much research as two long ones. Thus, to second approximation we could take into account the length of papers. Then again, the length of a paper is only meaningful if it’s published in a journal that has a policy of cutting superfluous content. Hence, you have to further refine the measure. And so on.
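To make the relation between primary goal and secondary criterion concrete, here is a toy sketch of my own (it is not the figure above, and the “goodness” function is invented): blindly following the initially estimated direction of improvement eventually walks past the optimum, while refining the direction along the way does not.

import numpy as np

# Toy "goodness of science" landscape with a single optimum at (1, 1). The
# two directions are deliberately not equivalent, so the initial gradient
# does not point straight at the optimum.
def goal(x):
    return -((x[0] - 1.0)**2 + 4.0 * (x[1] - 1.0)**2)

def gradient(x):
    return np.array([-2.0 * (x[0] - 1.0), -8.0 * (x[1] - 1.0)])

start = np.array([-1.0, 0.0])
step = 0.1

# Secondary criterion: the direction of steepest ascent, evaluated once at
# the start and then followed blindly ("more publications is better").
fixed_dir = gradient(start)
fixed_dir = fixed_dir / np.linalg.norm(fixed_dir)

x_fixed = start.copy()
x_refined = start.copy()
for _ in range(30):
    x_fixed = x_fixed + step * fixed_dir                    # never re-evaluated
    g = gradient(x_refined)
    x_refined = x_refined + step * g / np.linalg.norm(g)    # refined each step

print(round(goal(x_fixed), 3))    # misses the goal badly
print(round(goal(x_refined), 3))  # ends up close to the goal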

This type of refinement isn’t specific to science. You can see in many other areas of our lives that, as time passes, the means to reach desired goals must be more carefully defined to make sure they still lead where we want to go.

Take sports as an example. As new technologies arise, the Olympic committee has added many additional criteria on what shoes or clothes athletes are allowed to wear and which drugs make for an unfair advantage, and they’ve had to rethink what distinguishes a man from a woman.

Or tax laws. The Bible left it at “When the crop comes in, give a fifth of it to Pharaoh.” Today we have books full of ifs and thens and whatnots so incomprehensible I suspect it’s no coincidence suicide rates peak during tax season.

It’s debatable of course whether current tax laws indeed serve a desirable goal, but I don’t want to stray into politics. Relevant here is only the trend: Collective human behavior is difficult to organize, and it’s normal that secondary criteria to reach primary goals must be refined as time passes.

The need to quantify academic success is a recent development. It’s a consequence of changes in our societies, of globalization, increased mobility and connectivity, and is driven by the increased total number of people in academic research.

Academia has reached a size where accountability is both important and increasingly difficult. Unless you work in a tiny subfield, you almost certainly don’t know everyone in your community and can’t read every single publication. At the same time, people are more mobile than ever, and applying for positions has never been easier.

This means academics need ways to judge colleagues and their work quickly and accurately. It’s not optional – it’s necessary. Our society changes, and academia has to change with it. It’s either adapt or die.

But what has been academics’ reaction to this challenge?

The most prevalent reaction I witness is nostalgia: The wish to return to the good old times. Back then, you know, when everyone on the committee had the time to actually read all the application documents and was familiar with all the applicants’ work anyway. Back then when nobody asked us to explain the impact of our work and when we didn’t have to come up with 5-year plans. Back then, when they recommended that pregnant women smoke.

Well, there’s no going back in time, and I’m glad the past has passed. I therefore have little patience for such romantic talk: It’s not going to happen, period. Good measures for scientific success are necessary – there’s no way around it.

Another common reaction is the claim that quality isn’t measurable – more romantic nonsense. Everything is measurable, at least in principle. In practice, many things are difficult to measure. That’s exactly why measures have to be improved constantly.

Then, inevitably, someone will bring up Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” But that is clearly wrong. Sorry, Goodhart. If you indeed want to optimize the measure, you get exactly what you asked for. The problem is that often the measure wasn’t what you wanted to begin with.

Using the terminology introduced above, Goodhart’s Law can be reformulated as: “When people optimize a secondary criterion, they will eventually reach a point where further optimization diverts from the main goal.” But our reaction to this should be to improve the measure, not to throw in the towel and complain “It’s not possible.”

This stubborn denial of reality, however, has an unfortunate consequence: Academia has gotten stuck with the simple-but-bad secondary criteria that are currently in use: number of publications, the infamous h-index, the journal impact factor, renowned co-authors, positions held at prestigious places, and so on.

We all know they’re bad measures. But we use them anyway because we simply don’t have anything better. If your director/dean/head/board is asked to demonstrate how great your place is, they’ll fall back on the familiar number of publications, and as a bonus point out who has recently published in Nature. I’ve seen it happen. I just had to fill in a form for the institute’s board in which I was asked for my h-index and my paper count.

Last week, someone asked me if I’d changed my mind in the ten years since I wrote about this problem first. Needless to say, I still think bad measures are bad for science. But I think that I was very, very naïve to believe just drawing attention to the problem would make any difference. Did I really think that scientists would see the risk to their discipline and do something about it? Apparently that’s exactly what I did believe.

Of course nothing like this happened. And it’s not just because I’m a nobody who nobody’s listening to. Concerns similar to mine have been raised with increasing frequency by more widely known people in more popular outlets, like Nature and Wired. But nothing’s changed.

The biggest obstacle to progress is that academics don’t want to admit the problem is of their own making. Instead, they blame others: policy makers, university administrators, funding agencies. But these merely use measures that academics themselves are using.

The result has been lots of talk and little action. But what we really need is a practical solution. And of course I have one on offer: open-source software that allows every researcher to customize their own measure for what they think is “good science,” based on the available data. That would include the number of publications and their citations. But there is much more information in the data which currently isn’t used.

You might want to know whether someone’s research connects areas that are only loosely connected. Or how many single-authored papers they have. You might want to know how well their keyword-cloud overlaps with that of your institute. You might want to develop a measure for how “deep” and “broad” someone’s research is – two terms that are often used in recommendation letters but that are extremely vague.

Such individualized measures wouldn’t only automatically update as people revise criteria, but they would also counteract the streamlining of global research and encourage local variety.
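No such software exists as far as I know, so what follows is nothing more than a hypothetical sketch of how a customizable measure might look – every field name, weight, and number in it is invented for illustration, and a real tool would pull the inputs from publication databases and normalize them properly.

from dataclasses import dataclass, field

@dataclass
class ResearcherRecord:
    # Inputs a real tool would extract from publication databases.
    papers: int
    citations: int
    single_authored: int
    fields_bridged: int        # how many loosely connected areas the work links
    keyword_overlap: float     # overlap with the institute's keyword cloud, 0..1

@dataclass
class CustomMeasure:
    # Each researcher, committee, or institute chooses their own weights.
    weights: dict = field(default_factory=lambda: {
        "papers": 0.05,
        "citations": 0.01,
        "single_authored": 0.5,
        "fields_bridged": 1.0,
        "keyword_overlap": 2.0,
    })

    def score(self, r: ResearcherRecord) -> float:
        w = self.weights
        return (w["papers"] * r.papers
                + w["citations"] * r.citations
                + w["single_authored"] * r.single_authored
                + w["fields_bridged"] * r.fields_bridged
                + w["keyword_overlap"] * r.keyword_overlap)

candidate = ResearcherRecord(papers=40, citations=900, single_authored=6,
                             fields_bridged=3, keyword_overlap=0.7)
print(CustomMeasure().score(candidate))

The point is not any particular set of weights, but that the weights would be explicit, adjustable, and different from place to place – which is exactly what would counteract the streamlining.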

Why isn’t this happening? Well, besides me there’s no one to do it. And I have given up trying to get funding for interdisciplinary research. The inevitable response I get is that I’m not qualified. Of course it’s correct – I’m not qualified to code and design a user-interface. But I’m totally qualified to hire some people and kick their asses. Trust me, I have experience kicking ass. Price tag to save academia: An estimated 2 million Euro for 5 years.

What else has changed in the last ten years? I’ve found out that it’s possible to get paid for writing. My freelance work has been going well. The main obstacle I’ve faced is lack of time, not lack of opportunity. And so, when I look at academia now, I do it with one leg outside. What I see is that academia needs me more than I need academia.

The current incentives are extremely inefficient and waste a lot of money. But nothing is going to change until we admit that solving the problem is our own responsibility.

Maybe, when I write about this again, ten years from now, I’ll not refer to academics as “us” but as “they.”

Wednesday, March 15, 2017

No, we probably don’t live in a computer simulation

According to Nick Bostrom of the Future of Humanity Institute, it is likely that we live in a computer simulation. And one of our biggest existential risks is that the superintelligence running our simulation shuts it down.

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately, it primarily speaks for their lack of knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that, however, pays no attention to what we know about the laws of nature.

First, to get it out of the way, there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation. It’s also a meaningless statement.

A stricter way to speak of the computational universe is to make more precise what is meant by ‘computing.’ You could say, for example, that the universe is made of bits and an algorithm encodes an ordered time-series which is executed on these bits. Good – but already we’re deep in the realm of physics.

If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. This might be somebody’s universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits. [Note added for clarity: You might be able to get quantum mechanics from a classical, nonlocal approach, but nobody knows how to get quantum field theory from that.]

Even from qubits, however, nobody’s been able to recover the presently accepted fundamental theories – general relativity and the standard model of particle physics. The best attempt to date is that by Xiao-Gang Wen and collaborators, but they are still far away from getting back general relativity. It’s not easy.

Indeed, there are good reasons to believe it’s not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren’t necessarily small and have been looked for – and nothing’s been found.

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation.

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford to raise one or two artificial consciousnesses and then consult them on questions of importance.

So, yes, I think artificial consciousness is on the horizon. I also think it’s possible to convince a mind with cognitive abilities comparable to those of humans that its environment is not what it believes it is. Easy enough to put the artificial brain in a metaphoric vat: If you don’t give it any input, it will never be any wiser. But that’s not the environment I experience and, if you read this, it’s not the environment you experience either. We have a lot of observations. And it’s not easy to consistently compute all the data we have.

Besides, if the reason you build artificial intelligences is consultation, making them believe reality is not what it seems is about the last thing you’d want.

Hence, the first major problem with the simulation hypothesis is to consistently create all the data which we observe by any means other than the standard model and general relativity – because these are, for all we know, not compatible with the universe-as-a-computer.

Maybe you want to argue it is only you alone who is being simulated, and I am merely another part of the simulation. I’m quite sympathetic to this reincarnation of solipsism, for sometimes my best attempt at explaining the world is that it’s all an artifact of my subconscious nightmares. But the one-brain-only idea doesn’t work if you want to claim that it is likely we live in a computer simulation.

To claim it is likely we are simulated, the simulated conscious minds must vastly outnumber the non-simulated ones. This means the programmer will have to create a lot of brains. Now, they could separately simulate all these brains and try to fake an environment with other brains for each, but that would be nonsensical. The computationally more efficient way to convince one brain that the other brains are “real” is to combine them in one simulation.

Then, however, you get simulated societies that, like ours, will set out to understand the laws that govern their environment to better use it. They will, in other words, do science. And now the programmer has a problem, because it must keep close track of exactly what all these artificial brains are trying to probe.

The programmer could of course just simulate the whole universe (or multiverse?) but that again doesn’t work for the simulation argument. Problem is, in this case it would have to be possible to encode a whole universe in part of another universe, and parts of the simulation would attempt to run their own simulation, and so on. This has the effect of attempting to reproduce the laws on shorter and shorter distance scales. That, too, isn’t compatible with what we know about the laws of nature. Sorry.

Stephen Wolfram (from Wolfram research) recently told John Horgan that:
    “[Maybe] down at the Planck scale we’d find a whole civilization that’s setting things up so our universe works the way it does.”

I cried a few tears over this.

The idea that the universe is self-similar and repeats on small scales – so that elementary particles are built of universes which again contain atoms and so on – seems to hold a great appeal for many. It’s another one of these nice ideas that work badly. Nobody’s ever been able to write down a consistent theory that achieves this – consistent both internally and with our observations. The best attempt I know of is limit cycles in theory space, but to my knowledge that too doesn’t really work.

Again, however, the details don’t matter all that much – just take my word for it: It’s not easy to find a consistent theory for universes within atoms. What matters is the stunning display of ignorance – not to mention arrogance – demonstrated by the belief that for physics at the Planck scale anything goes. Hey, maybe there are civilizations down there. Let’s make a TED talk about it next. For someone who, like me, actually works on Planck scale physics, this is pretty painful.

To be fair, in the interview, Wolfram also explains that he doesn’t believe in the simulation hypothesis, in the sense that there’s no programmer and no superior intelligence laughing at our attempts to pin down evidence for their existence. I get the impression he just likes the idea that the universe is a computer. (Note added: As a commenter points out, he likes the idea that the universe can be described as a computer.)

In summary, it isn’t easy to develop theories that explain the universe as we see it. Our presently best theories are the standard model and general relativity, and whatever other explanation you have for our observations must first be able to reproduce these theories’ achievements. “The programmer did it” isn’t science. It’s not even pseudoscience. It’s just words.

All this talk about how we might be living in a computer simulation pisses me off not because I’m afraid people will actually believe it. No, I think most people are much smarter than many self-declared intellectuals like to admit. Most readers will instead correctly conclude that today’s intelligentsia is full of shit. And I can’t even blame them for it.

Saturday, March 11, 2017

Is Verlinde’s Emergent Gravity compatible with General Relativity?

Dark matter filaments, Millennium Simulation
Image: Volker Springel
A few months ago, Erik Verlinde published an update of his 2010 idea that gravity might originate in the entropy of so-far undetected microscopic constituents of space-time. Gravity, then, would not be fundamental but emergent.

With the new formalism, he derived an equation for a modified gravitational law that, on galactic scales, results in an effect similar to dark matter.

Verlinde’s emergent gravity builds on the idea that gravity can be reformulated as a thermodynamic theory, that is, as if it were caused by the dynamics of a large number of small entities whose exact identity is unknown and also unnecessary to describe their bulk behavior.

If one wants to get back usual general relativity from the thermodynamic approach, one uses an entropy that scales with the surface area of a volume. Verlinde postulates there is another contribution to the entropy which scales with the volume itself. It’s this additional entropy that causes the deviations from general relativity.
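Schematically – and glossing over Verlinde’s precise coefficients, which I won’t attempt to reproduce here – the total entropy assigned to a region of radius r looks like

\[ S(r) \;\sim\; \frac{k_B c^3}{4 G \hbar}\, A(r) \;+\; \varepsilon(r)\, V(r) \,, \]

where the first term is the familiar Bekenstein-Hawking area entropy and \varepsilon(r) parametrizes the additional volume contribution.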

However, in the vicinity of matter the volume-scaling entropy decreases until it’s entirely gone. Then, one is left with only the area-scaling part and gets normal general relativity. That’s why on scales where the average density is high – high compared to galaxies or galaxy clusters – the equation which Verlinde derives doesn’t apply. This would be the case, for example, near stars.

The idea quickly attracted attention in the astrophysics community, where a number of papers have since appeared which confront said equation with data. Not all of these papers are correct. Two of them seem to have missed entirely that the equation they are using doesn’t apply on solar-system scales. Of the remaining papers, three are fairly neutral in their conclusions, while one – by Lelli et al – is critical. The authors find that Verlinde’s equation – which assumes spherical symmetry – is a worse fit to the data than particle dark matter.

There has not, however, so far been much response from theoretical physicists. I’m not sure why that is. I spoke with science writer Anil Ananthaswamy some weeks ago and he told me he didn’t have an easy time finding a theorist willing to do as much as comment on Verlinde’s paper. In a recent Nautilus article, Anil speculates on why that might be:
“A handful of theorists that I contacted declined to comment, saying they hadn’t read the paper; in physics, this silent treatment can sometimes be a polite way to reject an idea, although, in fairness, Verlinde’s paper is not an easy read even for physicists.”
Verlinde’s paper is indeed not an easy read. I spent some time trying to make sense of it and originally didn’t get very far. The whole framework that he uses – dealing with an elastic medium and a strain-tensor and all that – isn’t only unfamiliar but also doesn’t fit together with general relativity.

The basic tenet of general relativity is coordinate invariance, and it’s absolutely not clear how it’s respected in Verlinde’s framework. So, I tried to see whether there is a way to make Verlinde’s approach generally covariant. The answer is yes, it’s possible. And it actually works better than I expected. I’ve written up my findings in a paper which just appeared on the arxiv:


It took some trial and error, but I finally managed to guess a covariant Lagrangian that reproduces the equations in Verlinde’s paper when one makes the same approximations. Without these approximations, the equations are fully compatible with general relativity. They are, however – as so often in general relativity – hideously difficult to solve.

Making some simplifying assumptions allows one to at least find an approximate solution. It turns out, however, that even if one makes the same approximations as in Verlinde’s paper, the equation one obtains is not exactly the same as his – it has an additional integration constant.

My first impulse was to set that constant to zero, but upon closer inspection that didn’t make sense: The constant has to be determined by a boundary condition that ensures the gravitational field of a galaxy (or galaxy cluster) asymptotes to Friedmann-Robertson-Walker space filled with normal matter and a cosmological constant. Unfortunately, I haven’t been able to find the solution that one should get in the asymptotic limit, hence wasn’t able to fix the integration constant.

This means, importantly, that the data fits which assume the additional constant is zero do not actually constrain Verlinde’s model.

With the Lagrangian approach that I have tried, the interpretation of Verlinde’s model is very different – I dare say far less outlandish. There’s an additional vector field which permeates space-time and which interacts with normal matter. It’s a strange vector field, both because it’s not a gauge boson – unlike the other vector fields we know of – and because it has a different kinetic energy term. In addition, the kinetic term appears in a way one doesn’t commonly encounter in particle physics but rather in condensed matter physics.

Interestingly, if you look at what this field would do if there was no other matter, it would behave exactly like a cosmological constant.

This, however, isn’t to say I’m sold on the idea. What I am missing is, most importantly, some clue that would tell me the additional field actually behaves like matter on cosmological scales, or at least sufficiently similarly to reproduce other observables, like, for example, baryon acoustic oscillations. This should be possible to find out with the equations in my paper – if one manages to actually solve them.

Finding solutions to Einstein’s field equations is a specialized discipline and I’m not familiar with all the relevant techniques. I will admit that my primary method of solving the equations – to the big frustration of my reviewers – is to guess solutions. It works until it doesn’t. In the case of Friedmann-Robertson-Walker with two coupled fluids, one of which is the new vector field, it hasn’t worked. At least not so far. But the equations are in the paper and maybe someone else will be able to find a solution.

In summary, Verlinde’s emergent gravity has withstood the first-line bullshit test. Yes, it’s compatible with general relativity.

Thursday, March 02, 2017

Yes, a violation of energy conservation can explain the cosmological constant

Chad Orzel recently pointed me towards an article in Physics World according to which “Dark energy emerges when energy conservation is violated.” Quoted in the Physics World article are George Ellis, who enthusiastically notes that the idea is “no more fanciful than many other ideas being explored in theoretical physics at present,” and Lee Smolin, according to whom it’s “speculative, but in the best way.” Chad clearly found this somewhat too polite to be convincing and asked me for some open words:



I had seen the headline flashing by earlier but ignored it because – forgive me – it’s obvious energy non-conservation can mimic a cosmological constant.

The reason is that usually, in General Relativity, the expansion of space-time is described by two equations, known as the Friedmann equations. They relate the velocity and acceleration of the universe’s normalized distance measure – called the ‘scale factor’ – to the average energy density and pressure of matter and radiation in the universe. If you put in energy density and pressure, you can calculate how the universe expands. That, basically, is what cosmologists do for a living.

The two Friedmann equations, however, are not independent of each other because General Relativity presumes that the various forms of energy densities are locally conserved. That means if you take only the first Friedmann equation and use energy conservation, you get the second Friedmann equation, which contains the cosmological constant. If you turn this statement around, it means that if you throw out energy conservation, you can produce an accelerated expansion.
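For concreteness, here is the standard bookkeeping in the spatially flat case (the parametrization of the violation below is my own schematic one, not necessarily the one used in the paper discussed further down):

\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho + \frac{\Lambda}{3}\,, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}\,, \qquad \dot\rho + 3\,\frac{\dot a}{a}\,(\rho + p) = 0\,. \]

Differentiating the first equation and using the local conservation law (the third equation) gives the second. If one instead allows a leak, \dot\rho + 3(\dot a/a)(\rho + p) = -\dot\Lambda_{\rm eff}/(8\pi G), the same algebra goes through with \Lambda replaced by a slowly accumulating \Lambda_{\rm eff}(t) – which is the sense in which energy non-conservation can mimic a cosmological constant.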

It’s an idea I’ve toyed with years ago, but it’s not a particularly appealing solution to the cosmological constant problem. The issue is you can’t just selectively throw out some equations from a theory because you don’t like them. You have to make everything work in a mathematically consistent way. In particular, it doesn’t make sense to throw out local energy-conservation if you used this assumption to derive the theory to begin with.

Upon closer inspection, the Physics World piece summarizes the paper:
which got published in PRL a few weeks ago, but has been on the arxiv for almost a year. Indeed, when I looked at it, I recalled I had read the paper and found it very interesting. I didn’t write about it here because the point they make is quite technical. But since Chad asked, here we go.

Modifying General Relativity is chronically hard because the derivation of the theory is so straightforward that much violence is needed to avoid Einstein’s Field Equations. It took Einstein a decade to get the equations right, but if you know your differential geometry it’s a three-liner, really. This isn’t to belittle Einstein’s achievement – the mathematical apparatus wasn’t then fully developed and he was guessing his way around underived theorems – but merely to emphasize that General Relativity is easy to get but hard to amend.

One of the few known ways to consistently amend General Relativity is ‘unimodular gravity,’ which works as follows.

In General Relativity the central dynamical quantity is the metric tensor (or just “metric”) which you need to measure the ratio of distances relative to each other. From the metric tensor and its first and second derivative you can calculate the curvature of space-time.

General Relativity can be derived from an optimization principle by asking: “From all the possible metrics, which is the one that minimizes curvature given certain sources of energy?” This leads you to Einstein’s Field Equations. In unimodular gravity in contrast, you don’t look at all possible metrics but only those with a fixed metric determinant, which means you don’t allow a rescaling of volumes. (A very readable introduction to unimodular gravity by George Ellis can be found here.)

Unimodular gravity does not result in Einstein’s Field Equations, but only in a reduced version thereof because the variation of the metric is limited. The result is that in unimodular gravity, energy is not automatically locally conserved. Because of the limited variation of the metric that is allowed in unimodular gravity, the theory has fewer symmetries. And, as Emmy Noether taught us, symmetries give rise to conservation laws. Therefore, unimodular gravity has fewer conservation laws.

I must emphasize that this is not the ‘usual’ non-conservation of total energy one already has in General Relativity, but a new violation of the conservation of local energy densities that does not occur in General Relativity.

If, however, you then add energy-conservation to unimodular gravity, you get back Einstein’s field equations, though this re-derivation comes with a twist: The cosmological constant now appears as an integration constant. For some people this solves a problem, but personally I don’t see what difference it makes just where the constant comes from – its value is unexplained either way. Therefore, I’ve never found unimodular gravity particularly interesting, thinking, if you get back General Relativity you could as well have used General Relativity to begin with.
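In equations – this part is standard material on unimodular gravity, not specific to the new paper – restricting the variation to metrics with fixed determinant yields only the trace-free part of Einstein’s field equations,

\[ R_{\mu\nu} - \tfrac{1}{4}\,g_{\mu\nu}\,R \;=\; 8\pi G\left(T_{\mu\nu} - \tfrac{1}{4}\,g_{\mu\nu}\,T\right). \]

Taking the divergence and using the contracted Bianchi identity, \nabla^\mu R_{\mu\nu} = \tfrac{1}{2}\nabla_\nu R, gives

\[ \tfrac{1}{4}\,\nabla_\nu\!\left(R + 8\pi G\,T\right) \;=\; 8\pi G\,\nabla^\mu T_{\mu\nu}\,. \]

If one imposes \nabla^\mu T_{\mu\nu} = 0 by hand, the combination R + 8\pi G\,T = 4\Lambda is constant, and substituting it back gives the full Einstein equations with \Lambda appearing as an integration constant.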

But in the new paper the authors correctly point out that you don’t necessarily have to add energy conservation to the equations you get in unimodular gravity. And if you don’t, you don’t get back general relativity, but a modification of general relativity in which energy conservation is violated – in a mathematically consistent way.

Now, the authors don’t look at all allowed violations of energy conservation in their paper, and I think smartly so, because most of them will probably result in a complete mess, by which I mean they will be crudely in conflict with observation. They instead look at a particularly simple type of energy-conservation violation and show that this effectively mimics a cosmological constant.

They then argue that on the average such a type of energy-violation might arise from certain quantum gravitational effects, which is not entirely implausible. If space-time isn’t fundamental, but is an emergent description that arises from an underlying discrete structure, it isn’t a priori obvious what happens to conservation laws.

The framework proposed in the new paper, therefore, could be useful to quantify the observable effects that arise from this. To demonstrate this, the authors look at the examples of 1) diffusion from causal sets and 2) spontaneous collapse models in quantum mechanics. In both cases, they show, one can use the general description derived in the paper to find constraints on the parameters of these models. I find this very useful because it is a simple, new way to test approaches to quantum gravity using cosmological data.

Of course this leaves many open questions. Most importantly, while the authors offer some general arguments for why such violations of energy conservation would be too small to be noticeable in any other way than from the accelerated expansion of the universe, they have no actual proof for this. In addition, they have only looked at this modification from the side of General Relativity, but I would like to also know what happens to Quantum Field Theory when waving good-bye to energy conservation. We want to make sure this doesn’t ruin the standard model’s fit of any high-precision data. Also, their predictions crucially depend on their assumption about when energy violation begins, which strikes me as quite arbitrary and lacking a physical motivation.

In summary, I think it’s a so-far very theoretical but also interesting idea. I don’t even find it all that speculative. It is also clear, however, that it will require much more work to convince anybody this doesn’t lead to conflicts with observation.