Thursday, December 27, 2018

How the LHC may spell the end of particle physics

The Large Hadron Collider (LHC) recently completed its second experimental run. It is now undergoing a scheduled upgrade to somewhat higher energies, at which more data will be collected. Besides the Higgs boson, the LHC has not found any new elementary particle.

It is possible that in the data yet to come some new particle eventually shows up. But particle physicists are nervous. It’s not looking good – besides a few anomalies that are not statistically significant, there is no evidence for anything out of the normal. And if the LHC finds nothing new, there is no reason to think the next larger collider will. In which case, why build one?

That the LHC finds the Higgs and nothing else was dubbed the “nightmare scenario” for a reason. For 30 years, particle physicists have told us that the LHC should find something besides that, something exciting: a particle for dark matter, additional dimensions of space, or maybe a new type of symmetry. Something that would prove that the standard model is not all there is. But this didn’t happen.

All those predictions for new physics were based on arguments from naturalness. I explained in my book that naturalness arguments are not mathematically sound and one shouldn’t have trusted them.

The problem particle physicists now have is that naturalness was the only reason to think that there should be new physics at the LHC. That’s why they are getting nervous. Without naturalness, there is no argument for new physics at energies even higher than that of the LHC. (Not until 15 orders of magnitude higher, which is when the quantum structure of spacetime should become noticeable. But energies so large will remain inaccessible for the foreseeable future.)

How have particle physicists reacted to the situation? Largely by pretending nothing happened.

One half continues to hope that something will show up in the data, eventually. Maybe naturalness is just more complicated than we thought. The other half pre-emptively fabricates arguments for why a next larger collider should see new particles. And a few just haven’t noticed they walked past the edge of the cliff. A recent report about Beyond the Standard Model Physics at the LHC, for example, still reiterates that “naturalness [is] the main motivation to expect new physics.”

Regardless of their coping strategy, a lot of particle physicists probably now wish they had never made those predictions. Therefore I think it’s a great time to look at who said what. References below.

Some lingo ahead: “eV” stands for electron volt and is a measure of energy. Particle colliders are classified by the energy that they can test. Higher energy means that the collisions resolve smaller structures. The LHC will reach up to 14 tera-electron volts (TeV). The “electroweak scale” or “electroweak energy” is typically said to be around the mass of the Z-boson, which is about 100 giga-electron volts (GeV), i.e., roughly a factor of 100 below what the LHC reaches.

Also note that even though the LHC reaches energies up to 14 TeV, it collides protons, and those are not elementary particles but composites of quarks and gluons. The total collision energy is therefore distributed over the constituent particles, meaning that constraints on the masses of new particles are below the collision energy. How good the constraints are depends on the expected number of interactions and the amount of data collected. The current constraints are typically at some TeV and will increase as more data is analyzed.
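To put rough numbers to these scales (simple unit arithmetic only, nothing specific to any particular analysis):

```latex
% electroweak scale versus LHC collision energy
14~\mathrm{TeV} = 1.4\times 10^{4}~\mathrm{GeV} \approx 150\, m_Z ,
\qquad m_Z \approx 91~\mathrm{GeV}.
% illustrative parton-level energy: partons carrying momentum fractions
% x_1, x_2 of the two protons collide at
\sqrt{\hat{s}} = \sqrt{x_1 x_2}\,\sqrt{s}
\approx 0.1 \times 14~\mathrm{TeV} = 1.4~\mathrm{TeV}
\quad \text{for } x_1 = x_2 = 0.1 .
```

This is why typical constraints on new particle masses come out at "some TeV" rather than at the full 14 TeV.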

With that said, let us start in 1987 with Barbieri and Giudice:
“The implementation of this “naturalness” criterion, gives rise to a physical upper bound on superparticle masses in the TeV range.”
In 1994, Anderson and Castano write:
“[In] the most natural scenarios, many sparticles, for example, charginos, squarks, and gluinos, lie within the physics reach of either LEP II or the Tevatron”
and
“supersymmetry cannot provide a complete explanation of weak scale stability, if squarks and gluinos have masses beyond the physics reach of the LHC.”
LEP was the Large Electron Positron collider. LEP1 and LEP2 refer to the two runs of the experiment.

In 1995, Dimopoulos and Giudice tell us similarly:
“[If] minimal low-energy supersymmetry describes the world with no more than 10% fine tuning, then LEP2 has great chances to discover it.”
In 1997, Erich Poppitz writes:
“Within the next 10 years—with the advent of the Large Hadron Collider—we will have the answer to the question: “Is supersymmetry relevant for physics at the electroweak scale?””
On to 1998, when Louis, Brunner, and Huber tell us the same thing:
“These models do provide a solution to the naturalness problem as long as the supersymmetric partners have masses not much bigger than 1 TeV.”
It was supposed to be an easy discovery, as Frank Paige wrote in 1998:
“Discovering gluinos and squarks in the expected mass range [...] seems straightforward, since the rates are large and the signals are easy to separate from Standard Model backgrounds.”
Giudice and Rattazzi in 1998 emphasize that naturalness is why they believe in physics beyond the standard model:
“The naturalness (or hierarchy) problem, is considered to be the most serious theoretical argument against the validity of the Standard Model (SM) of elementary particle interactions beyond the TeV energy scale. In this respect, it can be viewed as the ultimate motivation for pushing the experimental research to higher energies.”
They go on to praise the beauty of supersymmetry: “An elegant solution to the naturalness problem is provided by supersymmetry...”

In 1999, Alessandro Strumia, interestingly enough, concludes that the LEP results are really bad news for supersymmetry:
“the negative results of the recent searches for supersymmetric particles pose a naturalness problem to all ‘conventional’ supersymmetric models.”
In his paper, he stresses repeatedly that his conclusion applies only to certain supersymmetric models. Which is of course correct. The beauty of supersymmetry is that it’s so adaptive it evades all constraints.

Most particle physicists were utterly undeterred by the negative LEP results. They just moved their predictions to the next larger collider, first the Tevatron and then the LHC.

In 2000, Feng, Matchev, and Moroi write:
“This has reinforced a widespread optimism that the next round of collider experiments at the Tevatron, LHC or the NLC are guaranteed to discover all superpartners, if they exists.”
(NLC stands for Next Linear Collider, which was a proposal in the early 2000s that has since been dropped.) They also reiterate that supersymmetry should be easy to find at the LHC:
“In contrast to the sfermions, gauginos and higgsinos cannot be very heavy in this scenario. For example … gauginos will be produced in large numbers at the LHC, and will be discovered in typical scenarios.”
In 2004, Stuart Raby tries to say that naturalness arguments already are in trouble:
“Simple ‘naturalness’ arguments would lead one to believe that SUSY should have been observed already.”
But of course that’s just reason to consider not-so-simple naturalness arguments.

In the same year, Fabiola Gianotti bangs the drum for the LHC (emphasis mine):
“The above [naturalness] arguments open the door to new and more fundamental physics. There are today several candidate scenarios for physics beyond the Standard Model, including Supersymmetry (SUSY), Technicolour and theories with Extra-dimensions. All of them predict new particles in the TeV region, as needed to stabilize the Higgs mass. We note that there is no other scale in particle physics today as compelling as the TeV scale, which strongly motivates a machine like the LHC able to explore directly and in detail this energy range.”
She praises supersymmetry as “very attractive” and also tells us that the discovery should be easy and fast:
“SUSY discovery at the LHC could be relatively easy and fast… Squark and gluino masses of 1 TeV are accessible after only one month of data taking… The ultimate mass reach is up to ∼ 3 TeV for squarks and gluinos. Therefore, if nothing is found at the LHC, TeV-scale Supersymmetry will most likely be ruled out, because of the arguments related to stabilizing the Higgs mass mentioned above.”
In 2005, Arkani-Hamed and Savas Dimopoulos have the same tale to tell:
“[Ever] since the mid 1970’s, there has been a widely held expectation that the SM must be incomplete already at the ∼ TeV scale. The reason is the principle of naturalness… Solving the naturalness problem has provided the biggest impetus to constructing theories of physics beyond the Standard Model...”
Same thing with Feng and Wilczek in 2005:
“The standard model of particle physics is fine-tuned… This blemish has been a prime motivation for proposing supersymmetric extensions to the standard model. In models with low-energy supersymmetry, naturalness can be restored by having superpartners with approximately weak-scale masses.”
Here is John Donoghue in 2007:
“[The] argument against finetuning becomes a powerful motivator for new physics at the scale of 1 TeV. The Large Hadron Collider has been designed to find this new physics.”
Michael Dine, also in 2007, writes:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
And Howard Baer in 2009:
“quadratic divergences associated with the scalar sector require new physics at or around the electroweak scale.”
The same story, that new physics needs to appear at around a TeV, has been repeated in countless talks and seminars. A few examples are slides by Peter Krieger (2008), Michelangelo Mangano, and Joseph Lykken.

I could go on, but I hope this suffices to document that pretty much everyone

    (a) agreed that the LHC should see new physics besides the Higgs, and
    (b) had the same reason for this expectation, namely naturalness.

In summary: Since the naturalness-based predictions did not pan out, we have no reason to think that the remaining LHC run or an even larger particle collider would see any new physics that is not already explained by the standard model of particle physics. A larger collider would be able to measure more precisely the properties of already known particles, but that is arguably not a terribly exciting exercise. It will be a tough sell for a machine that comes at $10 billion and up. Therefore, it may very well be that the LHC will remain the largest particle collider in human history.



Bonus: A reader submits this gem from David Gross and Ed Witten in the Wall Street Journal, anno 1996:
“There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade. The existing accelerators that have a chance of doing so are the proton collider at the Department of Energy’s Fermi Lab in Batavia, Ill., and the electron collider at the European Center for Nuclear Research (CERN) in Geneva. Last year’s final run at Fermi Lab, during which the top quark was discovered, gave tantalizing hints of supersymmetry.”

Wednesday, December 26, 2018

Book review: “On The Future” by Martin Rees

On the Future: Prospects for Humanity
By Martin Rees
Princeton University Press (October 16, 2018)

The future will come, that much is clear. What it will bring, not so much. But speculating about what the future brings is how we make decisions in the present, so it’s a worthwhile exercise. It can be fun, it can be depressing. Rees’ new book “On The Future” is both.

Martin Rees is a cosmologist and astrophysicist. He has also long been involved in public discourse about science, notably the difficulty of integrating scientific evidence into policy making. He is also one of the founding members of the Cambridge Center for Existential Risk and serves on the advisory board of the Future of Life Institute. In brief, Rees thinks ahead, not for 5 years or 10 years, but for 1000 or maybe – gasp – a million years.

In his new book, Rees covers a large number of topics. From the threat of nuclear war, climate change, clean energy, and environmental sustainability to artificial intelligence, bioterrorism, assisted dying, and the search for extraterrestrial life. Rees is clearly a big fan of space exploration and bemoans that today it’s nowhere near as exciting as when he was young.
“I recall a visit to my home town by John Glenn, the first American to go into orbit. He was asked what he was thinking while in the rocket’s nose cone, awaiting launch. He responded, ‘I was thinking that there were twenty thousand parts in this rocket, and each was made by the lowest bidder.’”
I much enjoyed Rees’ book, the biggest virtue of which is brevity. Rees gets straight to the point. He summarizes what we know and don’t know, and what he thinks about where it’ll go, and that’s that. The amount of flowery words in his book is minimal (and those that he uses are mostly borrowed from Carl Sagan).

You don’t have to agree with Rees on his extrapolations into the unknown, but you will end up being well-informed. Rees is also utterly unapologetic about being a scientist to the core. Oftentimes scientists writing about climate change or biotech end up in a forward-defense against denialism, which I find exceedingly tiresome. Rees does nothing of that sort. He sticks with the facts.

It sometimes shows that Rees is a physicist. For example in his going on about exoplanets, reductionism (“The ‘ordering’ of the sciences in this hierarchy is not controversial.”), and his plug for the multiverse about which he writes “[The multiverse] is not metaphysics. It’s highly speculative. But it’s exciting science. And it may be true.”

But Rees does not address the biggest challenge we currently face, that is our inability to make use of the knowledge we already have. He is simply silent on the problems we currently see in science, the lack of progress, and the difficulties we face in our society when trying to aggregate evidence to make informed decisions.

In his chapter about “The Limits and Future of Science,” Rees acknowledges the possibility that “some fundamental truths about nature could be too complex for unaided human brains to fully grasp” but fails to notice that unaided human brains are not even able to fully grasp how being part of a large community influences their interests – and with that the decision of what we choose to spend time and resources on.

By omitting to even mention these problems, Rees tells us something about the future too. We may be so busy painting pictures of our destination that we forget to think of a way to reach it.

[Disclaimer: Free review copy.]

Monday, December 24, 2018

Happy Holidays

I have been getting a novel complaint about my music videos, which is that they are “hard to understand.” In case you share this impression, you may be overthinking this. These aren’t press releases about high-temperature superconductors or neutron star matter; it’s me attempting to sing. But since you ask, this one is about the – often awkward – relation between science and religion.



(Soundcloud version here.)

And since ’tis the season, allow me to mention that you will find a donate button in the top right corner of this website. On this occasion I also wish to express a heart-felt THANK YOU to all of those who sent donations this year. I very much appreciate your support, no matter how small, because it documents that you value my writing.

And here is this year’s family portrait for the Christmas cards.



In Germany, we traditionally celebrate Christmas on the evening of December 24th. So I herewith sign off for the rest of the year. Wish you all Happy Holidays.

Friday, December 21, 2018

Winter Solstice

[Photo: Herrmann Stamm]

The clock says 3:30 am. Is that early or late? Wrapped in a blanket I go into the living room. I open the door and step onto the patio. It’s too warm for December. An almost full moon blurs into the clouds. In the distance, the highway hums.

Somewhere, someone dies.

For everyone who dies, two people are born. 7.5 billion and counting.

We came to dominate planet Earth because, compared to other animals, we learned fast and collaborated well. We used resources efficiently. We developed tools to use more resources, and then employed those tools to use even more resources. But no longer. It’s 2018, and we are failing.

That’s what I think every day when I read the news. We are failing.

Throughout history, humans improved how to exchange and act on information held by only a few. Speech, writing, politics, economics, social and cultural norms, TV, telephones, the internet. These are all methods of communication. It’s what enabled us to collectively learn and make continuous progress. But now that we have networks connecting billions of people, we have reached our limits.

Fake news, Russian trolls, shame storms. Some dude’s dick in the wrong place. That’s what we talk about.

And buried below the viral videos and memes there’s the information that was not where it was supposed to be. Hurricane Katrina? The problem was known. The 2008 financial crisis? The problem was known. That Icelandic volcano whose ashes, in 2010, grounded flight traffic? Utterly unsurprising. Iceland has active volcanoes. Sometimes the wind blows South-East. Btw, it will happen again. And California is due for a tsunami. The problems are known.

But that’s not how it will end.

20 years ago I had a car accident. I was on a busy freeway. It was raining heavily and the driver in front of me suddenly braked. Only later did I learn that someone had cut him off. I hit the brakes. And then I watched a pair of red lights coming closer.

They say time slows if you fear for your life. It does.

I came to a stop one inch before slamming into the other car. I breathed out. Then a heavy BMW slammed into my back.

Human civilization will go like that. If we don’t keep moving, problems now behind us will slam into our back. Climate change, environmental pollution, antibiotic resistance, the persistent risk of nuclear war, just to mention a few – you know the list. We will have to deal with those sooner or later. Not now. Oh, no. Not us, not now, not here. But our children. Or their children. If we stop learning, if we stop improving our technologies, it’ll catch up with them, sooner or later.

Having to deal with long-postponed problems will eat up resources. Those resources, then, will not be available for further technological development, which will create further problems, which will eat up more resources. Modern technologies will become increasingly expensive until most people can no longer afford them. Infrastructures will crumble. Education will decay. It’s a downward spiral. A long, unpreventable, and disease-ridden regress.

Those artificial intelligences you were banking on? Not going to happen. All the money in the world will not lead to scientific breakthroughs if we don’t have sufficiently many people with sufficient education.

Who is to blame? No one, really. We are just too stupid to organize our living together on a global scale. We will not make it to the next level of evolutionary development. We don’t have the mental faculties. We do not comprehend. We do not act because we cannot. We don’t know how. We will fail and, maybe, in a million years or so, another species will try again.

Climate negotiations stalled over the choice of a word. A single word.

The clouds have drifted and the bushes now throw faint shadows in the moonlight. A cat screeches, or maybe it’s two. Something topples over. An engine starts. Then, silence again.

In the silence, I can hear them scream. All the people who don’t get heard, who pray and hope and wait for someone to please do something. But there is no one to listen. Even the scientists, even people in my own community, do not see, do not want to see, are not willing to look at their failure to make informed decisions in large groups. The problems are known.

Back there on that freeway, the BMW totaled my little Ford. I came away with neck and teeth damage, though I wouldn’t realize this until months later. I got out of my car and stood in the rain, thinking I’d be late for class. Again. The passenger’s door of the BMW opened and out came – an umbrella. Then, a tall man in a dark suit. He looked at me and the miserable state of my car and handed me a business card. “Don’t worry,” he said, “My insurance will cover that.” It did.

Of course I’m as stupid as everyone else, screaming screams that no one hears and, despite all odds, still hoping that someone has insurance, that someone knows what to do.

I go back into the house. It’s dark inside. I step onto a LEGO, one of the pink ones. They have fewer sharp edges; maybe, I think, that’s why parents keep buying them.

The kids are sleeping. It will be some hours until the husband announces his impending awakening with a morning fart. By standby lights I navigate to my desk.

We are failing. I am failing. But what else can I do except try.

I open my laptop.

Friday, December 14, 2018

Don’t ask what science can do for you.

Among the more peculiar side-effects of publishing a book are the many people who suddenly recall we once met.

There are weird fellows who write to say they mulled for ten years over a single sentence I once said to them. There are awkward close encounters from conferences I’d rather have forgotten about. There are people who I have either indeed forgotten about or didn’t actually meet. And then there are those who, at some time in my life, handed me a piece of the puzzle I’ve since tried to assemble; people I am sorry I forgot about.

For example my high-school physics teacher, who read about me in a newspaper and then came to a panel discussion I took part in. Or Eric Weinstein, who I met many years ago at Perimeter Institute, and who has since become the unofficial leader of the last American intellectuals. Or Robin Hanson, with whom I had a run-in 10 years ago and later met at SciFoo.

I spoke with Robin the other day.

Robin is an economist at George Mason University in Virginia, USA. I had an argument with him because Robin proposed – all the way back in 1990 – that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia.

I wasn’t fond of Robin’s idea back then. The major reason was that I couldn’t see scientists spend much time on a betting market. Sure, some of them would give it a go, but nowhere near enough for such a market to have much impact.

Economists tend to find it hard to grasp, but most people who stay in academia are not in for the money. This isn’t to say that money is not relevant in academia – it certainly is: Money decides who stays and who goes and what research gets done. But if getting rich is your main goal, you don’t dedicate your life to counting how many strings fit into a proton.

The foundations of physics may be an extreme case, but by my personal assessment most people in this area primarily chase after recognition. They want to be important more than they want to be rich.

And even if my assessment of scientists’ motivations was wrong, such a betting market would have to have a lot of money go around, more money than scientists can make by upping their reputation with putting money behind their own predictions.

In my book, I name a few examples of physicists who bet to express confidence in their own theory, such as Garrett Lisi who bet Frank Wilczek $1000 that supersymmetry would not be found at the LHC by 2016. Lisi won and Wilczek paid his due. But really what Garrett did there was just to publicly promote his own theory, a competitor of supersymmetry.

A betting market with minor payoffs, one has to fear, would likewise simply be used by researchers to bet on themselves, because they have more to gain by securing grants or jobs, which favorable market odds might facilitate.

But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.

“You did,” he pointed out.

He got me there.

My best shot at a permanent position in academia would have been LHC predictions for physics beyond the standard model. This is what I did for my PhD. In 2003, I was all set to continue in this direction. But by 2005, three years before the LHC began operation, I became convinced that those predictions were all nonsense. I stopped working on the topic, and instead began writing about the problems with particle physics. In 2015, my agent sold the proposal for “Lost in Math”.

When I wrote the book proposal, no one knew what the LHC would discover. Had the experiments found any of the predicted particles, I’d have made myself the laughing stock of particle physics.

So, Robin is right. It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I earn now from selling books will not make up for decades of employment I could have gotten playing academia-games by the rules.

In other words, yeah, maybe a betting market would be a good idea. Snort.

My thoughts have moved on since 2007, so have Robin’s. During our conversation, it became clear our views about what’s wrong with academia and what to do about it have converged over the years. To begin with, Robin seems to have recognized that scientists themselves are indeed unlikely candidates to do the betting. Instead, he now envisions that higher education institutions and funding agencies employ dedicated personnel to gather information and place bets. Let me call those “prediction market investors” (PMIs). Think of them like hedge-fund managers on the stock market.

Importantly, those PMIs would not merely collect information from scientists in academia, but also from those who leave. That’s important because information leaves with people. I suspect had you asked those who left particle physics about the LHC predictions, you’d have noticed quickly I was far from the only one who saw a problem. Alas, journalists don’t interview drop-outs. And those who still work in the field have all reason to project excitement and optimism about their research area.

The PMIs would of course not be the only ones making investments. Anyone could do it, if they wanted to. But I am guessing they’d be the biggest players.

This arrangement makes a lot of sense to me.

First and foremost, it’s structurally consistent. The people who evaluate information about the system do not themselves publish research papers. This circumvents the problem that I have long been going on about, that scientists don’t take into account the biases that skew their information-assessment. In Robin’s new setting, it doesn’t really matter if scientists see their mistakes; it only matters that someone sees them.

Second, it makes financial sense. Higher education institutions and funding agencies have reason to pay attention to the prediction market, because it provides new means to bring in money and new information about how to best invest money. In contrast to scientists, they might therefore be willing to engage in it.

Third, it is minimally intrusive yet maximally effective. It keeps the current arrangement of academia intact, but at the same time it has a large potential for impact. Resistance to this idea would likely be small.

So, I quite like Robin’s proposal. Though, I wish to complain, it’s too vague to be practical and needs more work. It’s very, erm, academic.

But in 2007, I had another reason to disagree with Robin, which was that I thought his attempt to “save science” was unnecessary.

This was two years after Ioannidis’ paper “Why most published research findings are false” attracted a lot of attention. It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way.

But years passed and nothing changed. If anything, problems got worse as scientists began to more aggressively market their research and lobby for themselves. The quest for truth, it seems, is now secondary. More important is that you can sell an idea, both to your colleagues and to the public. And if it doesn’t pan out? Deny, deflect, dissociate.

That’s why you constantly see bombastic headlines about breakthrough insights you never hear of again. That’s why, after years of talking about the wonderful things the LHC might see, no one wants to admit something went wrong. And that’s why, if you read the comments on this blog, you will see that they wish I’d keep my mouth shut. Because it’s cozy in their research bubble and they don’t want it to burst.

That’s also why Robin’s proposal looks good to me. It looks better the more I think about it. Three days have passed, and now I think it’s brilliant. Funding agencies would make much better financial investments if they’d draw on information from such a prediction market. Unfortunately, without startup support it’s not going to happen. And who will pay for it?

This brings me back to my book. Seeing the utter lack of self-reflection in my community, I concluded scientists cannot solve the problem themselves. The only way to solve it is massive public pressure. The only way to solve the problem is that you speak up. Say it often and say it loudly, that you’re fed up watching research funds go to waste on citation games. Ask for proposals like Robin’s to be implemented.

Because if we don’t get our act together, ten years from now someone else will write another book. And you will have to listen to the same sorry story all over again.

Thursday, December 13, 2018

New experiment cannot reproduce long-standing dark matter anomaly

Close-up of the COSINE detector  [Credits: COSINE collaboration]
To correctly fit observations, physicists’ best current theory for the universe needs a new type of matter, the so-called “dark matter.” According to this theory, our galaxy – as most other galaxies – is contained in a spherical cloud of this dark stuff. Exactly what dark matter is made of, however, we still don’t know.

The more hopeful physicists believe that dark matter interacts with normal matter, albeit rarely. If they are right, we might get lucky and see one of those interactions by closely watching samples of normal matter for the occasional bump. Dozens of experiments have looked for such interactions with the putative dark matter particles. They found nothing.

The one exception is the DAMA experiment. DAMA is located below the Gran Sasso mountains in Italy, and it has detected something starting in 1995. Unfortunately, it has remained unclear just what that something is.

For many years, the collaboration has reported excess hits in their detector. The signal has meanwhile reached a significance of 8.9σ, well above the 5σ standard for discovery. The number of those still unexplained events varies periodically during the year, which is consistent with the change that physicists expect due to our planet’s motion around the Sun and the Sun’s motion around the galactic center. The DAMA collaboration claims their measurements cannot be explained by interactions with already known particles.
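For readers who want to see what such an annual-modulation analysis looks like in practice, here is a minimal sketch with made-up numbers (not the actual DAMA data or pipeline): the expected signal is a constant residual rate plus a cosine with a one-year period peaking around early June.

```python
# Minimal sketch of an annual-modulation fit; all numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

T = 365.25  # days per year; the modulation period is fixed to one year

def modulation(t, R0, A, t0):
    """Constant residual rate plus an annual cosine modulation."""
    return R0 + A * np.cos(2 * np.pi * (t - t0) / T)

# Toy data: daily residual rates over six years with Gaussian noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 6 * T, 1.0)
truth = modulation(t, R0=0.0, A=0.01, t0=152.0)  # day ~152 is early June
data = truth + rng.normal(0.0, 0.02, t.size)

popt, pcov = curve_fit(modulation, t, data, p0=[0.0, 0.02, 120.0])
print("fitted offset, amplitude, phase (days):", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```

In the real experiments the significance comes from many annual cycles of data and from checking that the fitted phase agrees with the expected one.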

DAMA data with best-fit modulation curve.
Figure 1 from arXiv:1301.6243

The problem with the DAMA experiment, however, is that the results are incompatible with the null-results of other dark matter searches. If what DAMA sees was really dark matter, then other experiments should also have seen it, which is not the case.

Most physicists seem to assume that what DAMA measures is really some normal particle, just that the collaboration does not correctly account for signals that come, e.g., from radioactive decays in the surrounding mountains, cosmic rays, or neutrinos. An annual modulation could come about by other means than our motion through a dark matter halo. Many variables change throughout the year, such as the temperature and our distance to the Sun. And while DAMA claims, of course, that they have taken into account all that, their results have been met with great skepticism.

I will admit I have always been fond of the DAMA anomaly. Not only because of its high significance, but because the peak of the annual modulation fits with the idea of us flying through dark matter. It’s not all that simple to find another signal that looks like that.

So far, there has been a loophole in the argument that the DAMA signal cannot be a dark matter particle. The DAMA detector differs from all other experiments in one important point. DAMA uses thallium-doped sodium iodide crystals, while the conflicting results come from detectors using other targets, such as xenon or germanium. A dark matter particle that preferentially couples to specific types of atoms could trigger the DAMA detector, but not trigger the other detectors. This is not a popular idea, but it would be compatible with observation.

To test whether this is what is going on, another experiment, COSINE, set out to repeat the measurement using the same material as DAMA. COSINE is located in South Korea and began operation in 2016. They just published the results from the first 60 days of their measurements. COSINE did not see excess events.

Figure 2 from Nature 564, 83–86 (2018)
Data is consistent with expected background


60 days of data is not enough to look for an annual modulation, and the search for an annual modulation will greatly improve the statistical significance of the COSINE results. So it’s too early to entirely abandon hope. But that’s certainly a disappointment.

Friday, December 07, 2018

No, negative masses have not revolutionized cosmology

Figure from arXiv:1712.07962
A lot of people have asked me to comment on a recent paper by Jamie Farnes.
Farnes is a postdoctoral fellow at the Oxford e-Research Centre and has previously worked on observational astrophysics. A few days ago, Oxford University published a press release celebrating the publication of Farnes’ paper. This press release was then picked up by phys.org and spread from there to a few other outlets. I have since gotten various inquiries from readers and journalists asking for comments.

In his paper, Farnes has a go at cosmology with negative gravitational masses. He further wants these masses to also have negative inertial masses, so that the equivalence principle is maintained. It’s a nice idea. I, as I am sure many other people in the field, have toyed with it. Problem is, it works really badly.

General Relativity is a wonderful theory. It tells you how masses move under the pull of gravity. You do not get to choose how they move; it follows from Einstein’s equations. These equations tell you that like masses attract and unlike masses repel. We don’t normally talk about this because for all we know there are no negative gravitational masses, but you can see what happens in the Newtonian limit. It’s the same as for the electromagnetic force, just with electric charges exchanged for masses, and – importantly – with a flipped sign.

The deeper reason for this is that the gravitational interaction is exchanged by a spin-2 field, whereas the electromagnetic force is exchanged by a spin-1 field. Note that for this to be the case, you do not need to speak about the messenger particle that is associated with the force if you quantize it (gravitons or photons). It’s simply a statement about the type of interaction, not about the quantization. Again, you don’t get to choose this behavior. Once you work with General Relativity, you are stuck with the spin-2 field and you conclude: like charges attract and unlike charges repel.

Farnes in his paper instead wants negative gravitational masses to mutually repel each other. But general relativity won’t let you do this. He notices this in section 2.3.3, where he comments on the “counterintuitive” finding that the negative masses don’t actually seem to mutually repel.

He doesn’t say in his paper how he did the N-body simulation in which the negative-mass particles mutually repel (you can tell they do just by looking at the images). Some inquiry by email revealed that he does not actually derive the Newtonian limit from the field equations; he just encodes the repulsive interaction the way he thinks it should be.

Farnes also introduces a creation term for the negative masses so he gets something akin to dark energy. A creation term is basically a magic fix by which you can explain everything and anything. Once you have that, you can either go and postulate an equation of motion that is consistent with the constant creation (or whatever else you want), or you don’t, in which case you just violate energy conservation. Either way, it doesn’t explain anything. And if you are okay with introducing fancy fluids with uncommon equations of motion you may as well stick with dark energy and dark matter.

There’s a more general point to be made here. The primary reason that we use dark matter and dark energy to explain cosmological observations is that they are simple. Occam’s razor vetoes any explanation you can come up with that is more complicated than that, and Farnes’ approach certainly is not a simple explanation. Furthermore, while it is okay to introduce negative gravitational masses, it’s highly problematic to introduce negative inertial masses because this means the vacuum becomes unstable. If you do this, you can produce particle pairs from a net energy of zero in infinitely large amounts. This fits badly with our observations.

Now, look. It may be that what I am saying is wrong. Maybe the Newtonian limit is more complicated than it seems. Maybe gravity is not a spin-2 interaction. Maybe you can have mutually repulsive negative masses in general relativity after all. I would totally be in favor of that, as I have written a paper about repulsive gravity myself (it’s cited in Farnes’ paper). I believe that negative gravitational masses are the only known solution to the (real) cosmological constant problem. But any approach that attempts to work with negative masses needs to explain how it overcomes the above-mentioned problems. Farnes’ paper falls short of this.

In summary, the solution proposed by Farnes creates more problems than it solves.

Thursday, December 06, 2018

CERN produces marketing video for new collider and it’s full of lies

The Large Hadron Collider (LHC) just completed its second run. Besides a few anomalies, there’s nothing new in the data. After the discovery of the Higgs boson, there is also no good reason why there should be something else to find, neither at the LHC nor at higher energies, not until some 15 orders of magnitude higher than what we can reach now.

But of course there may be something, whether there’s a good reason or not. You never know before you look. And so, particle physicists are lobbying for the next larger collider.

Illustration of FCC tunnel. Screenshot from this video.

Proposals have been floating around for some while.

The Japanese, for example, like the idea of a linear collider 20-30 miles in length that would collide electrons and positrons, tentatively dubbed the International Linear Collider (ILC). The committee tasked with formulating the proposal seems to expect that the Japanese Ministry of Science and Technology will “take a pessimistic view of the project.”

Some years ago, the Chinese expressed interest in building a circular electron-positron collider (CEPC) of 50 miles circumference. Nima Arkani-Hamed was so supportive of this option that I heard it being nicknamed the Nimatron. The Chinese work in 5-year plans, but CEPC evidently did not make it on the 2016 plan.

CERN meanwhile has its own plan, which is a machine called the Future Circular Collider (FCC). Three different variants are presently under discussion, depending on whether the collisions are between hadrons (FCC-hh), electrons and positrons (FCC-ee), or a mixture of both (FCC-he). The plan for the FCC-hh is now the subject of a study carried out in a €4 million EU project.

This project comes with a promotional video:



The video advertises the FCC as “the world’s biggest scientific instrument” that will address the following questions:

What is 96% of the universe made of?

This presumably refers to the 96% that are dark matter and dark energy combined. While it is conceivable that dark matter is made of heavy particles that the FCC can produce, this is not the case for dark energy. Particle colliders don’t probe dark energy. Dark energy is a low-energy, long-distance phenomenon, the exact opposite of high-energy physics. What the FCC will reliably probe are the other 4%, the same 4% that we have probed for the past 50 years.

What is dark matter?

We have done dozens of experiments that search for dark matter particles, and none has seen anything. It is not impossible that we get lucky and the FCC will produce a particle that fits the bill, but there is no knowing it will be the case.

Why is there no more antimatter?

Because if there was, you wouldn’t be here to ask the question. Presumably this item refers to the baryon asymmetry. This is a fine-tuning problem which simply may not have an answer. And even if it has, the FCC may not answer it.

How did the universe begin?

The FCC would not tell us how the universe began. Collisions of large ions produce little blobs of quark gluon plasma, and this plasma almost certainly was also present in the early universe. But what the FCC can produce has a density some 70 orders of magnitude below the density at the beginning of the universe. And even that blob of plasma finds itself in a very different situation at the FCC than it would encounter in the early universe, because in a collider it expands into empty space, whereas in the early universe the plasma filled the whole universe while space expanded.

On the accompanying website, I further learned that the FCC “is a bold leap into completely uncharted territory that would probe… the puzzling masses of neutrinos.”

The neutrino masses are a problem in the Standard Model because either you need right-handed neutrinos, which have never been seen, or the neutrinos are different from the other fermions by being “Majorana particles” (I explained this here).

In the latter case, you’re not going to find out with a particle collider; there are other experiments for that (quick summary here). In the former case, the simplest model has the masses of the right-handed neutrinos at the Planck scale, so the FCC would never see them. You can of course formulate models in which the masses are at lower energies and happen to fall into the FCC range. I am sure you can. That particle physicists can fumble together models that predict anything and everything is why I no longer trust their predictions. Again, it’s not impossible the FCC would find something, but there is no good reason why that should happen.

I am not opposed to building a larger collider. Particle colliders that reach higher energies than we probed before are the cleanest and most reliable way to search for new physics. But I am strongly opposed to misleading the public about the prospects of such costly experiments. We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.

Correction: The video in question was produced by the FCC study group at CERN and is hosted on the CERN website, but was not produced by CERN.

Friday, November 30, 2018

Do women in physics get fewer citations than men?

Yesterday, I gave a seminar about the results of a little side-project that I did with two collaborators, Tobias Mistele and Tom Price. We analyzed publication data in some sub-disciplines of physics and looked for differences in citations to papers with male and female authors. This was to follow up on the previously noted discrepancy between the arXiv data and the Inspire data that we found when checking on the claim by Alessandro Strumia and his collaborator, Ricardo Torre.

You find our results on the slides below or you can look at the pdf here. Please be warned that the figures are not publication quality. As you will see, the labels are sometimes in awkward places or weirdly formatted. However, I think the results are fairly robust and at this point they are unlikely to change much.



The brief summary is that, after some back and forth, we managed to identify the origin of the difference between the two data sets. In the end we get a gender-difference that is not as large as Strumia et al found it to be in the Inspire data and not as small as we originally found it in the arXiv data. The male/female ratio for the citations normalized to authors is about 1.5 for both the arXiv and the Inspire data.

We then tried to find out where the difference comes from. This is not all that obvious because the particular measure that Strumia used combines various kinds of data. For example, it depends on how frequently authors collaborate, how many papers they publish, and how much those papers are cited.

We know that the total number of citations is comparable for men and women. It turns out that part of the reason why women have a lower score when one counts the total citations divided by the number of authors is that women write (relatively) fewer single-authored papers than men.

This, however, does not explain the entire difference, because if you look at the citations per single-authored paper (i.e., without summing over all papers), then women also get fewer citations.

We then looked at where those citations are (or are not) coming from, and found that both men and women cite single-authored papers with female authors at a lower frequency than you would expect from the share among the citeable papers. It turns out that in the past 20 years the trend in women-to-women citations (single-authored papers only) has gone up, while for men-to-women citations it has remained low.

It is not a huge difference, but since there are so many more men than women in those fields, the lack of citations from male authors to female authors has a big impact on the overall number of citations that women receive.

In all those analyses, we have removed authors who have not published a paper in the past 3 years or who have fewer than 5 papers in total. This is to prevent the higher percentage of dropouts among women from pulling down the female average.
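For concreteness, here is a minimal sketch of how such an author-normalized citation score can be computed. The data structures and names are made up for illustration; this is not our actual analysis code.

```python
# Sketch only: each paper contributes its citations divided by its number of
# authors ("fractional" citations); an author's score is the sum over their
# papers. Authors with fewer than 5 papers or no paper in the last 3 years
# are removed, as described above.
from collections import defaultdict

papers = [
    # (author names, number of citations, publication year) -- toy data
    (["A. Author", "B. Coauthor"], 40, 2017),
    (["A. Author"], 10, 2018),
    (["B. Coauthor"], 2, 2012),
]

score = defaultdict(float)
n_papers = defaultdict(int)
last_year = defaultdict(int)

for authors, citations, year in papers:
    for name in authors:
        score[name] += citations / len(authors)  # author-normalized citations
        n_papers[name] += 1
        last_year[name] = max(last_year[name], year)

CURRENT_YEAR = 2018
kept = {name: s for name, s in score.items()
        if n_papers[name] >= 5 and last_year[name] >= CURRENT_YEAR - 3}

print(dict(score))  # raw scores for all authors
print(kept)         # scores after the activity cuts (empty for this toy data)
```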

One of the most-frequent questions I get when I speak about our bibliometric stuff (not only this, but also our earlier works) is what are my own scores on the various indices. I usually answer this question with “I don’t know.” We don’t dig around in the data and look for familiar names. Once we have identified all of an author’s papers, we treat authors as numbers, and besides this, you don’t normally browse data tables with millions of entries.

Having said this, I have come to understand that people ask this question to figure out what my stakes are, and if I do not respond, they think I have something to hide. Let me therefore just show you what my curve looks like if you look at the index that Strumia has considered (i.e., the number of citations divided by the number of authors, summed up over time) because I think there is something to learn from this.



(This is the figure from the Inspire-data.)

Besides hoping to erase the impression that I have a hidden agenda, the reason I am showing you this is to illustrate that you have to be careful when interpreting bibliometric measures. Just because someone scores well on a particular index doesn’t mean they are hugely successful. I am certainly not. I am 42 years old and have a temporary position on a contract that will run out next year. I may be many things, but successful I am not.

The reason I do well on this particular index is simply that I am an anti-social introvert who doesn’t like to work with other people. And, evidently, I am too old to be apologetic about this. Since most of my papers are single-authored, I get to collect my citations pretty much undiluted, in contrast to people who prefer to work in groups.

I have every reason to think that the measure Strumia proposes is a great measure and everyone should use it because maybe I’d finally get tenured. But if this measure became widely used, it would strongly discourage researchers from collaborating, and I do not think that would be good for science.

The take-away message is that bibliometric analysis delivers facts but the interpretation of those facts can be difficult.


This research was supported by the Foundational Questions Institute.

Monday, November 26, 2018

Away Note

I am traveling the rest of the week and off the grid for extended amounts of time. So please be advised that comments may be stuck in the queue for longer than normal.

Saturday, November 24, 2018

Book review: “The End of Science” by John Horgan

The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age
John Horgan
Basic Books; New edition (April 14, 2015)
Addison-Wesley; 1st edition (May 12, 1996)

John Horgan blogs for Scientific American and not everything he writes is terrible. At least that’s what I think. But on Twitter or Facebook, merely mentioning his name attracts hostility. By writing his 1996 book “The End of Science,” it seems, Horgan committed an unforgivable sin. It made him The Enemy of Science. Particle physicists dislike him even more than they dislike me, which is somewhat of an achievement.

I didn’t read “The End of Science” when it appeared. But I met John last year and, contrary to expectations, he turned out to be a nice guy with a big smile. And so, with a 22-year delay, I decided to figure out what’s so offensive about his book.

In “The End of Science,” Horgan takes on the question of whether scientific exploration has reached an insurmountable barrier, either because all knowledge that can be discovered has been discovered, or because our cognitive abilities are insufficient to make further progress. There is still much to learn from and to do with the knowledge we already have, and we will continue to use science to improve our lives, but the phase of new discoveries was temporary, or so Horgan claims, and it is coming to an end now.

After an introductory chapter, Horgan goes through various disciplines: Philosophy, Physics, Cosmology, Evolutionary Biology, Social Science, Neuroscience, Chaoplexity (by which he refers to studies on both chaotic and complex systems), Limitology (his name for studies about the limits of science), and Machine Science. In each case, he draws the same conclusion: Scientists have not made progress for decades, but continue to invent new theories even though those do not offer new explanations. Horgan refers to it as “ironic science”:
“Ironic science, by raising unanswerable questions, reminds us that all our knowledge is half-knowledge; it reminds us of how little we know. But ironic science does not make any significant contributions to knowledge itself.”
The book rests on interviews with key figures in the respective fields. I have found those interviews to be equal parts informative and bizarre. John’s conversation partners usually end up admitting that the questions they try to answer may not be answerable. Most of the researchers he speaks with don’t want to consider the possibility that science may be nearing its end.

John does, in all fairness, come across as somewhat of an ass. Here we have a man who has no higher education in any of the disciplines he writes about, but who believes he has insights that scientists themselves fail to see. It does not make him any more likable that the descriptions of his conversation partners are in some instances less than flattering. I feel lucky it’s hard to tell the state of my teeth or fingernails by email.

He does an excellent job, however, at getting across the absurdity of what can pass as research these days. I particularly enjoyed his description of workshops that amount to little more than pseudo-intellectual opinion-exchanges which frequently end in declarations of personal beliefs:
“Rössler unburdened himself of a long, tangled soliloquy whose message seemed to be that our brains represent only one solution to the multiple problems posed by the world. Evolution could have created other brains representing other solutions.

Landauer, who was strangely protective of Rössler, gently asked him whether he thought we might be able to alter our brains in order to gain more knowledge. “There is one way,” Rössler replied, staring at an invisible object on the table in front of him. “To become insane.””

The book touches on many topics that I care a lot about and I have found some of Horgan’s criticism hard to read, not because the book is badly written, but because – I guess – it’s not what I like to hear. I may have written a book about the problems in my own field, but the very reason I find the situation so frustrating is that I believe there is more to discover.

The 2015 edition of “The End of Science” has a preface in which Horgan recaps the past 20 years and emphasizes that, true to his predictions, fundamental science hasn’t moved. I came away with the impression that the reason he encounters so much hostility is that no one likes people who make unwelcome predictions and end up being right.


[You won’t be surprised to hear that I have several points of disagreement with Horgan’s argument, but want to discuss those in a separate post.]

Thursday, November 22, 2018

Guest Post: Garrett Lisi on Geometric Naturalness

Thanks to Sabine for inviting me to do a guest post. In her book, “Lost in Math,” I mentioned the criterion of “geometric naturalness” for judging theories of fundamental physics. Here I would like to give a personal definition of this and expand on it a bit.

Figure: Lowest million roots of E8+++ projected from eleven dimensions.
As physicists we unabashedly steal tools from mathematicians. And we don’t usually care where these tools come from or how they fit into the rest of mathematics, as long as they’re useful. But as we learn more mathematics, aesthetics from math influences our minds. The first place I strongly encountered this was in General Relativity. The tools of metric, covariant derivative, connection, and curvature of a manifold were irresistibly attractive structures that lured my impressionable soul into the mathematical world of differential geometry.

Studying differential geometry, I learned that the tensors I had been manipulating for physics calculations represent coordinate-invariant objects describing local maps between vector fields. That multi-dimensional integrals are really integrals over differential forms, with changing order of integration corresponding to 1-forms anti-commuting. That a spacetime metric is a local map from two vector fields to a scalar field, which can also be described using a frame – a vector-valued 1-form field mapping spacetime vectors into another vector space. And that the connection and curvature fields related to this frame completely describe the local geometry of a manifold.
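In components, the relation between a frame and a metric that this paragraph describes in words is the standard textbook one (nothing specific to this post):

```latex
g(u,v) = \eta\big(e(u),\, e(v)\big),
\qquad
g_{\mu\nu} = e_{\mu}{}^{a}\, e_{\nu}{}^{b}\, \eta_{ab},
```

where $e$ is the frame (vierbein) and $\eta$ the flat metric on the internal vector space.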

With exposure to differential geometry and GR, and impressed by GR’s amazing success, I came to believe that our universe is fundamentally geometric. That ALL structures worth considering in fundamental theoretical physics are geometrically natural, involving maps between vector fields on a manifold, and coordinate-invariant operations on those maps. This belief was further strengthened by understanding Lie groups and gauge theory from a geometrically natural point of view.

Students usually first encounter Lie groups in an algebraic context, with a commutator bracket of two physical rotation generators giving a third. Then we learn that these algebraic generators correspond to tangent vector fields on a 3D Lie group manifold called SO(3). And the commutator bracket of two generators corresponds to the Lie derivative of one generator vector field with respect to the other – a natural geometric operation.
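As a standard illustration of the last sentence (textbook relations, not particular to this post): in a basis of generators the bracket is encoded by structure constants, for rotations these are just the epsilon symbol, and geometrically the bracket is the Lie derivative of one generator vector field along another,

```latex
[T_a, T_b] = f_{ab}{}^{c}\, T_c,
\qquad
[L_a, L_b] = \epsilon_{abc}\, L_c \quad \text{for } \mathfrak{so}(3),
\qquad
[X, Y] = \mathcal{L}_X Y .
```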

The principle of geometric naturalness in fundamental physics has its greatest success in gauge theory. A gauge theory with N-dimensional gauge Lie group, G, can be understood as an (N+4)-dimensional total space manifold, E, over 4-dimensional spacetime, M. Spacetime here is not independent, but incorporated in the total space. A gauge potential field in spacetime corresponds to a 1-form Ehresmann connection field over the total space, mapping tangent vectors into the Vertical tangent vector space of Lie group fibers, or equivalently into the vector space of the Lie algebra. The curvature of this connection corresponds to the geometry of the total space, and is physically the gauge field strength in spacetime. For physics, the action is the total space integral of the curvature and its Hodge dual.
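Written out in the usual notation, the curvature of a Lie-algebra-valued connection $A$ and the action built from it and its Hodge dual take the standard Yang-Mills form (these are the generic expressions, not anything specific to the framework discussed here):

```latex
F = \mathrm{d}A + A \wedge A
  = \mathrm{d}A + \tfrac{1}{2}[A, A],
\qquad
S = -\frac{1}{2g^{2}} \int \mathrm{tr}\,(F \wedge \star F).
```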

The strong presence of geometric naturalness in fundamental physics is incontrovertible. But is it ALL geometrically natural? Things get more complicated with fermions. We can enlarge the total space of a gauge theory to include an associated bundle with fibers transforming appropriately as a fermion representation space under the gauge group. This appearance of a representation space is ad hoc, although with the Peter-Weyl theorem identifying representations with the excitation spectrum of the Lie group manifold, it is arguably geometric and not just algebraic. (The rich mathematics of representation theory has been greatly under-appreciated by physicists.) But the physically essential requirement that fermion fields be anti-commuting Grassmann numbers is not geometrically natural at all. Here the religion of geometric naturalness appears to have failed... So physicists strayed.
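For readers who have not met them: Grassmann numbers \theta_i are defined by the anticommutation rule

    \theta_i \theta_j = - \theta_j \theta_i ,   so in particular   \theta_i{}^2 = 0

which is what the quantum description of fermion fields requires, and which has no counterpart among the ordinary commuting coordinate functions of a manifold – hence the tension with geometric naturalness described here.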

Proponents of supersymmetry wholeheartedly embraced anti-commuting numbers, working on theories in which every field, and thus every physical elementary particle, has a Grassmann-conjugate partner –  with physical particles having superpartners. But despite eager anticipation of their arrival, these superpartners are nowhere to be seen. The SUSY program has failed. Having embraced supersymmetry, string theorists are also in a bad place. And even without SUSY, string theory is geometric – involving embedding manifolds in other manifolds – but is not geometrically natural as defined above. And the string theory program has spectacularly failed to deliver on its promises.

What about the rebels? The Connes-Lott-Chamseddine program of non-commutative geometry takes the spectral representation space of fermions as fundamental, and so abandons geometric naturalness. Eric Weinstein's Geometric Unity program promotes the metric to an Ehresmann connection on a 14-dimensional total space, fully consistent with geometric naturalness, but again there are ad-hoc representation spaces and non-geometric Grassmann fields. The Loop Quantum Gravity program, ostensibly the direct descendant of GR, proceeds with the development of a fundamentally quantum description that is discrete – and thus not geometrically natural. Other rebels, including Klee Irwin's group, Wolfram, Wen, and many others, adopt fundamentally discrete structures from the start and ignore geometric naturalness entirely. So I guess that leaves me.

Enchanted with geometric naturalness, I spent a very long time trying to figure out a more natural description of fermions. In 2007 I was surprised and delighted to find that the representation space of one generation of Standard Model fermions, acted on by gravity and gauge fields, exists as part of the largest simple exceptional Lie group, E8. My critics were delighted that E8 also necessarily contains a generation of mirror fermions, which, like superparticles, are not observed.

My attempts to address this issue were unsatisfactory, but I made other progress. In 2015 I developed a model, Lie Group Cosmology, that showed how spacetime could emerge within a Lie group, with physical fermions appearing as geometrically natural, anti-commuting orthogonal 1-forms, equivalent to Grassmann numbers in calculations. For the first time, there was a complete and geometrically natural description of fermions. But there was still the issue of mirror fermions, which Distler and Garibaldi had used in 2009 to successfully kill the theory, claiming that there was “no known mechanism by which it [any non-chiral theory] could reduce to a chiral theory.” But they were wrong.

Unbeknownst to me, Wilczek and Zee had solved this problem in 1979. (Oddly, first published in a conference proceedings edited by my graduate advisor.) I wish they'd told me! Anyway, mirror fermions can be confined (similarly to su(3) color confinement) by a so(5) inside a so(8) which, by triality, leaves exactly three generations of chiral fermions unconfined. When E8 is extended to infinite-dimensional Lie groups, such as the very-extended E8+++ (see Figure), this construction produces three generations and no mirrors. And, as I wrote in 2007, one needs to consider infinite-dimensional Lie groups anyway for a quantum description... something almost nobody talks about.

No current unified theory includes quantum mechanics fundamentally as part of its structure. But a truly unified theory must. And I believe the ultimate theory will be geometrically natural. Canonical quantum commutation relations are a Lie bracket, which can be part of a Lie group in a geometrically natural description. I fully expect this will lead to a beautiful quantum-unified theory – what I am currently working on.

I never expected to find beauty in theoretical physics. I stumbled into it, and into E8 in particular, when looking for a naturally geometric description of fermions. But beauty is inarguably there, and I do think it is a good guide for theory building. I also think it is good for researchers to have a variety of aesthetic tastes for what guides and motivates them. The high energy physics community has spent far too much time following the bandwagon of superstring theory, long after the music has stopped playing. It’s time for theorists to spread out into the vast realm of theoretical possibilities and explore different ideas.

Personally, I think the “naturalness” aesthetic of fundamental constants being near 1 is a red herring – the universe doesn’t seem to care about that. For my own guiding aesthetic of beauty, I have adopted geometric naturalness and a balance of complexity and simplicity, which I believe has served me well. If one is going to be lost, mathematics is a wonderful place to wander around. But not all those who wander are lost.

Monday, November 19, 2018

The present phase of stagnation in the foundations of physics is not normal

Nothing is moving in the foundations of physics. One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics might be found. And their colleagues in theory development are of no help.


Some have called it a crisis. But I don’t think “crisis” describes the current situation well: Crisis is so optimistic. It gives the impression that theorists realized the error of their ways, that change is on the way, that they are waking up now and will abandon their flawed methodology. But I see no awakening. The self-reflection in the community is zero, zilch, nada, nichts, null. They just keep doing what they’ve been doing for 40 years, blathering about naturalness and multiverses and shifting their “predictions,” once again, to the next larger particle collider.

I think stagnation describes it better. And let me be clear that the problem with this stagnation is not with the experiments. The problem is loads of wrong predictions from theoretical physicists.

The problem is also not that we lack data. We have data in abundance. But all the data are well explained by the existing theories – the standard model of particle physics and the cosmological concordance model. Still, we know that’s not it. The current theories are incomplete.

We know this both because dark matter is merely a placeholder for something we don’t understand, and because the mathematical formulation of particle physics is incompatible with the math we use for gravity. Physicists knew about these two problems already in the 1930s. And until the 1970s, they made great progress. But since then, theory development in the foundations of physics has stalled. If experiments find anything new now, that will be despite, not because of, some tens of thousands of wrong predictions.

Tens of thousands of wrong predictions sounds dramatic, but it’s actually an underestimate. I am merely summing up predictions that have been made for physics beyond the standard model which the Large Hadron Collider (LHC) was supposed to find: All the extra dimensions in their multiple shapes and configurations, all the pretty symmetry groups, all the new particles with the fancy names. You can estimate the total number of such predictions by counting the papers, or, alternatively, the people working in the fields and their average productivity.

They were all wrong. Even if the LHC finds something new in the data that is yet to come, we already know that the theorists’ guesses did not work out. Not. A. Single. One. How much more evidence do they need that their methods are not working?

This long phase without progress is unprecedented. Yes, it has taken something like two thousand years from the first conjecture of atoms by Democritus to their actual detection. But that’s because for most of these two thousand years people had other things to do than contemplating the structure of elementary matter. Like, for example, how to build houses that don’t collapse on you. For this reason, quoting chronological time is meaningless. It is better to look at the actual working time of physicists.

I have some numbers for you on that too. Oh, yes, I love numbers. They’re so factual.

According to membership data from the American Physical Society and the German Physical Society, the total number of physicists has increased by a factor of roughly 100 between the years 1900 and 2000.* Most of these physicists do not work in the foundations of physics. But as far as publication activity is concerned, the various subfields of physics grow at roughly comparable rates. And (leaving aside some bumps and dents around the Second World War) the increase in the number of publications as well as in the number of authors is roughly exponential.

Now let us assume for the sake of simplicity that physicists today work as many hours per week as they did 100 years ago – the details don’t matter all that much given that the growth is exponential. Then we can ask: How much working time starting today corresponds to, say, 40 years of working time starting 100 years ago? Have a guess!

Answer: About 14 months. Going by working hours only, physicists today should be able to do in 14 months what a century earlier took 40 years.
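For those who want to check the arithmetic, here is a minimal back-of-the-envelope sketch (my own, for illustration only), assuming purely exponential growth by a factor of 100 per century and taking “today” to be 2018; the function name and details are mine, not anything from the literature.

    import math

    # Assumption: the number of physicists grows exponentially,
    # by a factor of 100 over the century from 1900 to 2000.
    rate = math.log(100.0) / 100.0  # growth rate per year

    def person_years(start, duration, ref=1900):
        """Accumulated working time (arbitrary units) between `start`
        and `start + duration`, integrating the exponential growth curve."""
        return (math.exp(rate * (start + duration - ref))
                - math.exp(rate * (start - ref))) / rate

    # 40 years of work starting 100 years before 2018:
    target = person_years(1918, 40)

    # How long, starting in 2018, until the same working time is accumulated?
    years = math.log(1 + target * rate * math.exp(-rate * (2018 - 1900))) / rate
    print(f"equivalent duration: {12 * years:.1f} months")  # about 13.5, i.e. roughly 14 months

Shifting the assumed start year changes the result by a factor of order one, but the qualitative point – that decades of working time then correspond to about a year now – stands.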

Of course you can object that progress doesn’t scale that easily, for despite all the talk about collective intelligence, research is still done by individuals. This means processing time can’t be decreased arbitrarily by simply hiring more people. Individuals still need time to exchange and comprehend each other’s insights. On the other hand, we have also greatly increased the speed and ease of information transfer, and we now use computers to aid human thought. In any case, if you want to argue that hiring more people will not aid progress, then why hire them?

So, no, I am not serious about this estimate, but it explains why the argument that the current stagnation is not unprecedented is ill-informed. We are today making more investments into the foundations of physics than ever before. And yet nothing is coming out of it. That’s a problem and it’s a problem we should talk about.

I’ve recently been told that the use of machine learning to analyze LHC data signals a rethinking in the community. But that isn’t so. To begin with, particle physicists have used machine learning tools to analyze data for at least three decades. They use it more now because it’s become easier, and because everyone does it, and because Nature News writes about it. And they would have done it either way, even if the LHC had found new particles. So, no, machine learning in particle physics is not a sign of rethinking.

Another comment-not-a-question I constantly have to endure is that I supposedly only complain but don’t have any better advice for what physicists should do.

First, it’s a stupid criticism that tells you more about the person criticizing than the person being criticized. Suppose I was criticizing not a group of physicists, but a group of architects. If I inform the public that those architects spent 40 years building houses that all fell to pieces, why is it my task to come up with a better way to build houses?

Second, it’s not true. I have spelled out many times very clearly what theoretical physicists should do differently. It’s just that they don’t like my answer. They should stop trying to solve problems that don’t exist. That a theory isn’t pretty is not a problem. Focus on mathematically well-defined problems, that’s what I am saying. And, for heaven’s sake, stop rewarding scientists for working on what is popular with their colleagues.

I don’t take this advice out of nowhere. If you look at the history of physics, it was working on the hard mathematical problems that led to breakthroughs. If you look at the sociology of science, bad incentives create substantial inefficiencies. If you look at the psychology of science, no one likes change.

Developing new methodologies is harder than inventing new particles in the dozens, which is why they don’t like to hear my conclusions. Any change will reduce the paper output, and they don’t want this. It’s not institutional pressure that creates this resistance, it’s that scientists themselves don’t want to move their butts.

How long can they go on with this, you ask? How long can they keep on spinning theory-tales?

I am afraid there is nothing that can stop them. They review each other’s papers. They review each other’s grant proposals. And they constantly tell each other that what they are doing is good science. Why should they stop? For them, all is going well. They hold conferences, they publish papers, they discuss their great new ideas. From the inside, it looks like business as usual, just that nothing comes out of it.

This is not a problem that will go away by itself.


If you want to know more about what is going wrong with the foundations of physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”


* That’s faster than the overall population growth, meaning the fraction of physicists, indeed of scientists in general, has increased.

Friday, November 16, 2018

New paper claims that LIGO’s gravitational wave detection from a neutron star merger can’t be right


Two weeks ago, New Scientist warmed up the story about a Danish group’s claim that the LIGO collaboration’s signal identification is flawed. This story goes back to a paper published in the summer of 2017.

After the publication of this paper, however, the VIRGO gravitational wave interferometer came online, and in August 2017 both collaborations jointly detected another event. Not only was this event seen by the two LIGO detectors and the VIRGO detector, but several telescopes also measured optical signals that arrived almost simultaneously and fit with the hypothesis of the event being a neutron-star merger. For most physicists, including me, this detection removed any remaining doubts about LIGO’s event-detection.

Now a few people have pointed out to me that the Journal of Cosmology and Astroparticle Physics (JCAP) recently published a paper by an Italian group which claims that the gravitational wave signal of the neutron-star merger event must be fishy:

    GRB 170817A-GW170817-AT 2017gfo and the observations of NS-NS, NS-WD and WD-WD mergers
    J.A. Rueda et al
    JCAP 1810, 10 (2018), arXiv:1802.10027 [astro-ph.HE]

The executive summary of the paper is this. They claim that the electromagnetic signal does not fit with the hypothesis that the event is a neutron-star merger. Instead, they argue, it looks like a specific type of white-dwarf merger. A white-dwarf merger, however, would not result in a gravitational wave signal that is measurable by LIGO. So, they conclude, there must be something wrong with the LIGO event. (The VIRGO measurement of that event has a signal-to-noise ratio of merely two, so it doesn’t increase the significance all that much.)

I am not much of an astrophysicist, but I know a few things about neutron stars, most notably that it’s more difficult to theoretically model them than you may think. Neutron stars are not just massive balls that sit in space. They are rotating hot balls of plasma with pressure gradients that induce various phases of matter. And the equation of state of nuclear matter in the relevant ranges is not well-understood. There’s tons of complex and even chaotic dynamics going on. In short, it’s a mess.

In contrast to this, the production of gravitational waves is a fairly well-understood process that does not depend much on exactly what the matter does. Therefore, the conclusion that I would draw from the Italian paper is that we are misunderstanding something about neutron stars. (Or at least they are.)

But, well, as I said, it’s not my research area. JCAP is a serious journal, and the people who wrote the paper are respected astrophysicists. It’s not folks you can easily dismiss. So I decided to look into this a bit.

First, I contacted the spokesperson of the LIGO collaboration, David Shoemaker. This is still the same person who, last year, answered my question about the collaboration’s response to the Danish criticism by merely stating that he has full confidence in LIGO’s results. Since the Danish group raised the concern that the collaboration suffers from confirmation bias, this did little to ease my worries.

This time I asked Shoemaker for a comment on the Italian group’s new claim that the LIGO measurement conflicts with the optical measurements. Turns out that his replies landed in my junk folder until I publicly complained about the lack of response, which prompted him to try a different email account. Please see the update below.

Second, I noticed that the first version of the Italian group’s paper that is available on the arXiv heavily referenced the Danish group.


Curiously enough, these references seem to have entirely disappeared from the published version. I therefore contacted Andrew Jackson from the Danish group to hear whether he had anything to say about the Italian group’s claims and whether he had heard of them. He didn’t respond.

Third, I contacted the corresponding author of the Italian paper, Jorge Rueda, but he did not correspond with me. I then moved on to the paper’s second author, Remo Ruffini, which was more fruitful. According to Wikipedia, Ruffini is director of the International Centre for Relativistic Astrophysics Network and co-author of 21 textbooks about astrophysics and gravity.

I asked Ruffini whether he had been in contact with the LIGO collaboration about their findings on the neutron star merger. Ruffini did not respond to this question, though I asked repeatedly. When I asked whether they have any reason to doubt the LIGO detection, Ruffini referred me to (you’ll love this) the New Scientist article.

I subsequently got Ruffini’s permission to quote his emails, so let me just tell you what he wrote in his own words:

“Dear Sabine not only us but many people are questioning the Ligo People as you see in this link: the drama is of public domain. Remo Ruffini”

Michael Brooks, btw, who wrote the New Scientist article, knew about the story because I had written about it earlier, so it has now come full circle. After I informed Ruffini that I write a blog, he told me that:

“we are facing the greatest dramatic disaster in all scientific world since Galileo. Do propagate this dramatic message to as many people as possible.”

Yo.

Update: Here is the response from Shoemaker that Google pushed into the junk folder (not sure why). I am sorry I complained about the lack of response without checking the junk folder - my bad.

He points out that there is a consensus in the community that the gravitational wave event in question can be explained as a neutron-star merger. (Well, I guess it’s a consensus if you disregard the people who do not consent.) He also asks me to mention (as I did earlier) that the data of the whole first observing run is available online. Alas, this data does not include the 2017 event that is under discussion here. For this event only a time-window is available. But for all I can tell, the Italians did not even look at that data.

Basically, I feel reassured in my conclusion that you can safely ignore the Italian paper.

2nd Update: The non-corresponding corresponding author of the Italian paper has now popped up after being alerted about this blogpost. He refuses to comment on his co-author’s claims that LIGO is wrong and the world needs to be told. Having said this, I wish all these people would sort out their issues without me.