
## Saturday, November 13, 2021

### Why can elementary particles decay?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Physicists have so far discovered twenty-five elementary particles that, for all we currently know, aren’t made up of anything else. Most of those particles are unstable, and they’ll decay to lighter particles within fractions of a second. But how can it possibly be that a particle which decays is elementary? If it decays, doesn’t this mean it was made up of something else? And why do particles decay in the first place? At the end of this video, you’ll know the answers.

The standard model of particle physics contains 25 particles. But the matter around us is almost entirely made up of only half of them. First, there’s the electron. Then there are the constituents of atomic nuclei, the neutrons and protons, which are made of different combinations of up and down quarks. That’s three. And those particles are held together by photons and the eight gluons of the strong nuclear force. So that’s twelve.

What’s with the other particles? Let’s take for example the tau. The tau is very similar to the electron, except it’s heavier by about a factor 4000. It’s unstable and has a lifetime of only three times ten to the minus thirteen seconds. It then decays, for example into an electron, a tau-neutrino and an electron anti-neutrino. So is the tau maybe just made up of those three particles, and when it decays they just fly apart?

But no, the tau isn’t made up of anything, at least not according to all the observations that we currently have. There are several reasons physicists know this.

First, if the tau was made up of those other particles, you’d have to find a way to hold them together. This would require a new force. But we have no evidence for such a force. For more about this, check out my video about fifth forces.

Second, even if you’d come up with a new force, that wouldn’t help you because the tau can decay in many different ways. Instead of decaying into an electron, a tau-neutrino and an electron anti-neutrino, it could for example decay into a muon, a tau-neutrino and a muon anti-neutrino. Or it could decay into a tau-neutrino and a pion. The pion is made up of two quarks. Or it could decay into a tau-neutrino and a rho. The rho is also made up of two quarks, but different ones than the pion. And there are many other possible decay channels for the tau.

So if you’d want the tau to be made up of the particles it decays into, at the very least there’d have to be different tau particles, depending on what they’re made up of. But we know that this can’t be. The taus are exactly identical. We know this because if they weren’t, they’d themselves be produced in larger numbers in particle collisions than we observe. The idea that there are different versions of taus is therefore just incompatible with observation.

This, by the way, is also why elementary particles can’t be conscious. It’s because we know they do not have internal states. Elementary particles are called elementary because they are simple. The only way you can assign any additional property to them, call that property “consciousness” or whatever you like, is to make that property entirely featureless and unobservable. This is why panpsychism which assigns consciousness to everything, including elementary particles, is either bluntly wrong – that’s if the consciousness of elementary particles is actually observable, because, well, we don’t observe it – or entirely useless – because if that thing you call consciousness isn’t observable it doesn’t explain anything.

But back to the question why elementary particles can decay. A decay is really just a type of interaction. This also means that all these decays in principle can happen in different orders. Let’s stick with the tau because you’ve already made friends with it. That the tau can decay into the two neutrinos and an electron just means that those four particles interact. They actually interact through another particle, which is one of the vector bosons of the weak interaction. But this isn’t so important. What’s important is that this interaction could happen in other orders. If an electron with high enough energy runs into a tau neutrino, that could for example produce a tau and an electron neutrino. In that case, what would you think any of those particles are “made of”? This idea just doesn’t make any sense if you look at all the processes that we know of that taus are involved in.

Everything that I just told you about the tau works similarly for all of the other unstable particles in the standard model. So the brief answer to the question why elementary particles can decay is that decay doesn’t mean the decay products must’ve been in the original particle. A decay’s just a particular type of interaction. And we’ve no observations that’d indicate elementary particles are made up of something else; they have no substructure. That’s why we call them elementary.

But this brings up another question: why do those particles decay to begin with? I often come across the explanation that they do this to reach the state of lowest energy, because the decay products are lighter than the original. But that doesn’t make any sense, because energy is conserved in the decay. Indeed, the reason those particles decay has nothing to do with energy; it has everything to do with entropy.

Heavy particles decay simply because they can and because that’s likely to happen. As Einstein told us, mass is a type of energy. Yes, that guy again. So a heavy particle can decay into several lighter particles because it has enough energy. And the rest of the energy that doesn’t go into the masses of the new particles goes into the kinetic energy of the new particles. But for the opposite process to happen, those light particles would have to meet in the right spot with a sufficiently high energy. This is possible, but it’s very unlikely to happen coincidentally. It would be a spontaneous increase of order, so it would be an entropy decrease. That’s why we don’t normally see it happening, just like we don’t normally see eggs unbreak. To sum it up: Decay is likely. Undecay unlikely.
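The energy bookkeeping in such a decay is easy to make explicit. Here is a minimal sketch for the tau decaying into an electron and two neutrinos, using standard published masses; neutrino masses are so small they’re treated as zero:

```python
# Energy budget of the decay tau -> electron + tau-neutrino + electron
# anti-neutrino, in the rest frame of the tau. Masses in MeV (standard
# published values); neutrino masses are negligible and set to zero.
m_tau = 1776.86
m_electron = 0.511
m_neutrinos = 0.0  # tau-neutrino + electron anti-neutrino, approximated

# Energy is conserved: whatever mass-energy doesn't go into the rest
# masses of the decay products goes into their kinetic energy.
kinetic_energy = m_tau - (m_electron + m_neutrinos)
print(round(kinetic_energy, 2))  # 1776.35
```

The reverse process would need that same energy concentrated in one spot at one time, which is why "undecay" is entropically suppressed rather than forbidden.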

It is worth emphasizing though that the reverse of all those particle-decay processes indeed exists and it can happen in principle. Mathematically, you can reverse all those processes, which means the laws of nature are time-reversible. Like a movie, you can run them forwards and backwards. It’s just that some of those processes are very unlikely to occur in the world we actually inhabit, which is why we experience our life with a clear forward direction of time that points towards more wrinkles.

## Friday, November 12, 2021

### New book now available for pre-order

In the past years I have worked on a new book, which is now available for pre-order here (paid link). My editors decided on the title "Existential Physics: A Scientist's Guide to Life's Biggest Questions" which, I agree, is more descriptive than my original title "More Than This". My title was trying to express that physics is about more than just balls rolling down inclined planes and particles bumping into each other. It's a way to make sense of life.

In "Existential Physics" each chapter is the answer to a question. I have also integrated interviews with Tim Palmer, David Deutsch, Roger Penrose, and Zeeya Merali, so you don't only get to hear my opinion. I'll show you a table of contents when the page proofs are in. I want to remind you that comments have moved over to my Patreon page.

## Saturday, November 06, 2021

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Plastic is everywhere, and we have all heard it’s bad for the environment because it takes a long time to biodegrade. But is this actually true? If I look at our outside furniture, that seems to biodegrade beautifully. How much should we really worry about all that plastic? Did you know that most bioplastics aren’t biodegradable? And will we end up driving cars made of soybeans? That’s what we will talk about today.

Pens, bags, cups, trays, toys, shoe soles and wrappers for everything – it’s all plastic. Those contact lenses that I’m wearing? Yeah, that’s plastic too.

The first plastic was invented in nineteen-oh-seven by the chemist Leo Baekeland. Today we use dozens of different plastics. They’re synthetic materials, molecules that just didn’t exist before humans began producing them. Plastics usually have names starting with “poly” like polyethylene, polypropylene, or polyvinyl chloride. The poly is sometimes hidden in abbreviations like PVC or PET.

You probably know the prefix “poly” from “polymer”. It means “many” and tells you that the molecules in plastic are long, repetitive chains. These long chains are the reason why plastics can be easily molded. And because plastics can be quickly mass-produced in custom shapes, they’ve become hugely popular. Today, more than twenty thousand plastic bottles are produced – each second. That’s almost two billion a day! Chewing gum by the way also contains plastic.
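The bottle numbers above are easy to check; the twenty-thousand-per-second figure is the one quoted in the video:

```python
# Rough sanity check of the "almost two billion a day" claim.
bottles_per_second = 20_000
seconds_per_day = 60 * 60 * 24  # 86,400
bottles_per_day = bottles_per_second * seconds_per_day
print(bottles_per_day)  # 1728000000 -- indeed almost two billion
```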

Those long molecular chains are also the reason why plastic is so durable, because bacteria that evolved to break down organic materials can’t digest plastic. So how long does plastic last? Well, we can do our own research, so let’s ask Google. Actually we don’t even have to do that ourselves, because just a year ago, a group of American scientists searched for public information on plastic lifetime and wrote a report for the NAS about it.

For some cases, like Styrofoam, they found lifetimes varying from one year to one thousand years to forever. For fishing lines, all thirty-seven websites they found said it lasts six-hundred years, probably because they all copied from each other. If those websites list a source at all, it’s usually a website of some governmental or educational institution. The most often named one is NOAA, the National Oceanic and Atmospheric Administration in the United States. When the researchers contacted NOAA they learned that the numbers on their website are estimates and not based on peer-reviewed science.

Fact is, no one has any good idea how long plastics last in the environment. The studies which have been done, often don’t list crucial information such as exposure to sunlight, temperature, or size and shape of the sample, so it’s unclear what those numbers mean in real life. Scientists don’t even have an agreed-upon standard for what “degradation of plastic” is.

If anything, then recent peer-reviewed literature suggests that plastic in the environment may degrade faster than previously recognized, not because of microbes but because of sunlight. For example, a paper published by a group from Massachusetts found that polystyrene, one of the world’s most ubiquitous plastics, may degrade in a couple of centuries when exposed to sunlight, rather than thousands of years as previously thought. That plastic isn’t as durable as once believed is also rapidly becoming a problem for museums who see artworks of the last century crumbling away.

But why do we worry about the longevity of plastic to begin with? Plastics are made from natural gas or oil, which is why burning them is an excellent source of energy, but has the same problem as burning oil and gas – it releases carbon dioxide which has recently become somewhat unpopular. Plastic can in principle be recycled by shredding and re-molding it, but if you mix different types of plastics the quality degrades rapidly, and in practice the different types are hard to separate.

And so, a lot of plastic trash ends up in landfills or in places where it doesn’t belong, makes its way into rivers and, eventually, into the sea. According to a study by the Ellen Macarthur Foundation, there are more than one-hundred fifty million tons of plastic trash in the oceans already, and we add about 10 million tons each year. Most of that plastic sinks to the sea floor, but more than 250,000 tons keep floating on the surface.

The result is that a lot of wildlife, birds and fish in particular, gets trapped in plastic trash or swallows it. According to a 2015 estimate from researchers in Australia and the UK, over ninety percent of seabirds now have plastic in their guts. That’s bad. Swallowing plastic can not only physically block parts of the digestive system; a lot of plastics also contain chemicals to keep them soft and stable. Many of those chemicals are toxic, and they’ll leak into the animals.

Okay, you may say, who cares about seabirds and fish. But the thing is, once you have a substance in the food chain, it’ll spread through the entire ecosystem. As it spreads, the plastic gets broken down into smaller and smaller pieces, eventually down to below micrometer size. Those are the so-called microplastics. From animals, they make their way into supermarkets, and from there back into the sewage system and on into other parts of the environment, from where they return to us, and so on. Several independent studies have shown that most of us now quite literally shit plastic.

What are the consequences? No one really knows.

We do know that microplastics are fertile ground for pathogenic bacteria, which isn’t exactly what you want to eat. But of course other microparticles, for example those stemming from leaves or rocks, have that problem too, and we probably eat some of those as well. Indeed, in 2019 a group of Chinese researchers studied bacteria on different microparticles, and they found that the amount of bacteria on microplastics was less than that on microparticles from leaves. That’s because leaves are organic and deteriorate faster, which provides more nutrients for the bacteria. It’s presently unclear whether eating microplastics is a health hazard.

But some of those microplastics are so small they circulate in the air together with other dust and we regularly breathe them in. Studies have found that at least in cell-cultures, those particles are small enough to make it into the lymphatic and circulatory system. But how much this happens in real life and to what extent this may lead to health problems hasn’t been sorted out. Though we know from several occupational studies that workers processing plastic fibers, who probably breathe in microplastics quite regularly, are more likely to have respiratory problems than the general population. The problems include a reduced lung capacity and coughing. The data for lung cancer induced by breathing microplastics is inconclusive.

Basically we’ve introduced an entirely new substance into the environment and are now finding out what consequences this has.

That problem isn’t new. As Jordi Busque has pointed out, planet Earth had this problem before, namely, when all that coal formed which we’re now digging back up. This happened during a period called the carboniferous, which lasted from roughly three-hundred sixty to three-hundred million years ago. It began when natural selection “invented” for the first time wood trunks with bark, which requires a molecule called lignin. But no bug, bacterium, or fungus around at that time knew how to digest lignin. So, when trees died, their trunks just piled up in the forests and, over millions of years, they were covered by sediments and turned into coal. The carboniferous ended when evolution created fungi that were able to eat and biodegrade lignin.

Now, the carboniferous lasted some sixty million years, but maybe we can speed up evolution a bit by growing bacteria that can digest plastics. Why not? There’s nothing particularly special about plastics that would make this impossible.

Indeed, there are already bacteria which have learned to digest plastic. In twenty-sixteen a group of Japanese scientists published a paper in Science magazine, in which they reported the discovery of a bacterium that degrades PET, which is the material most plastic bottles are made of. They found it while analyzing sediment samples from near a plastic recycling facility. They also identified the enzyme that enables the bacteria to digest plastic and called it PETase.

The researchers found that thanks to PETase, the bacterium converts PET into two environmentally benign components. Moreover 75 percent of the resulting products are further transformed into organic matter by other microorganisms. That, plus carbon-dioxide. As I said in my earlier video about carbon capture, plastics are basically carbon storage, so maybe we should actually be glad that they don’t biodegrade?

But in 2018, a British team accidentally modified PETase, making it twenty percent faster at degrading PET, and by 2020 scientists from the University of Portsmouth had found a way to speed up the PET digestion by a factor of six. Just this year, researchers from Germany, France and Ireland used another enzyme, found in a compost pile, to degrade PET.

And the French startup Carbios has developed another bacterium that can almost completely digest old plastic bottles in just a few hours. They are building a demonstration factory that will use the enzymes to take plastic polymers apart into monomers, which can then be polymerized again to make new bottles. The company says it will open a full-scale factory in twenty-twenty-four with a goal of producing the ingredients for forty thousand tons of recycled plastic each year.

The problem with this idea is that the PET used in bottles is highly crystalline and very resistant to enzyme degradation. So if you want the enzymes to do their work, you first have to melt the plastic and extrude it. That requires a lot of energy. For this reason, bacterial PET digestion doesn’t currently make a lot of sense, either economically or ecologically. But it demonstrates that it’s a real possibility that plastics will just become biodegradable because bacteria evolve to degrade them, naturally or by design.

What’s with bioplastics? Unfortunately, bioplastics look mostly like hype to me.

Bioplastics are plastics produced from biomass. This isn’t a new idea. For example, celluloid, the material of old films, was made from cellulose, an organic material. And in nineteen forty-one Ford built a plastic car made from soybeans. Yes, soybeans. Today we have bags made from potatoes or corn. That certainly sounds very bio, but unfortunately, according to a review by scientists from Georgia Southern University that came out just in April, about half of bioplastics are not biodegradable.

How can it possibly be that potato and corn aren’t biodegradable? Well, the potato or corn is biodegradable. But to make the bioplastics, one uses the potatoes or the corn to produce bioethanol, and from the bioethanol you produce plastic in pretty much the same way you always do. The result is that the so-called bioplastics are chemically pretty much the same as normal plastics.

So about half of bioplastics aren’t biodegradable. And most of the ones that are, biodegrade only in certain conditions. This means they have to be sent to industrial compost facilities that have the right conditions of temperature and pressure. If you just trash them they will end up in landfill or migrate into the sea like any other plastic. A paper by researchers from Michigan State University found no difference in degradation when they compared normal plastics with these supposedly biodegradable ones.

So the word “bioplastic” is very misleading. But there are some biodegradable bioplastics. For example Mexican scientists have produced a plastic out of certain types of cacti. It naturally degrades in a matter of months. Unfortunately, there just aren’t enough of those cacti to replace plastic that way.

More promising are PHAs, polyhydroxyalkanoates, a family of molecules that evolved for certain biological functions and that can be used to produce plastics that actually do biodegrade. Several companies are working on this, for example Anoxkaldnes, Micromidas, and Mango Materials. Mango Materials. Seriously?

Researchers from the University of Queensland in Australia have estimated that a bottle of PHA in the ocean would degrade in one and a half to three and a half years, and a thin film would need 1 to 2 months. Sounds good! But at present PHA is difficult to produce and therefore 2 to 4 times more expensive than normal plastic. And let’s not forget that the faster a material biodegrades the faster it returns its carbon dioxide into the atmosphere. So what you think is “green” might not be what I think is “green”.

Isn’t there something else we can do with all that plastic trash? Yes, for example make steel. If you remember, steel is made from iron and carbon. The carbon usually comes from coal. But you can instead use old plastic, remember the stuff’s made of oil. In a paper that appeared in Nature Catalysis last year, a group of researchers from the UK explained how that could work. Use microwaves to convert the plastic into hydrogen and carbon. Use the hydrogen to convert iron oxides into iron, and then combine it with the carbon to get steel.

Personally I’d prefer steel from plastic over cars of non-biodegradable so-called bioplastics, but maybe that’s just me. Let me know in the comments what you think, I’m curious. Don’t forget to like this video and subscribe if you haven’t already, that’s the easiest way to support us. See you next week.

## Saturday, October 30, 2021

### The delayed choice quantum eraser, debunked

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

A lot of you have asked me to do a video about the delayed choice quantum eraser, an experiment that supposedly rewrites the past. I haven’t done that simply because there are already lots of videos about it, for example Matt from PBS Space-time, the always amazing Joe Scott, and recently also Don Lincoln from Fermilab. And how many videos do you really need about the same thing, if that thing isn’t a kitten in a box? However, having watched all those gentlemen’s videos about quantum erasing, I think they’re all wrong. The quantum eraser isn’t remotely as weird as you think, doesn’t actually erase anything, and certainly doesn’t rewrite the past. And that’s what we’ll talk about today.

Let’s start with a puzzle that has nothing to do with quantum mechanics. Peter is forty-six years old and he’s captain of a container ship. He ships goods between two places that are 100 kilometers apart, let’s call them A and B. He starts his round trip at A with the ship only half full. Three-quarters of the way to B he adds more containers to fill the ship, which slows him down by a factor of two. On the return trip, his ship is empty. How old is the captain?

If you don’t know the answer, let’s rewind this question to the beginning.

Peter is forty-six years old. The answer’s right there. Everything I told you after that was completely unnecessary and just there to confuse you. The quantum eraser is a puzzle just like this.

The quantum eraser is an experiment that combines two quantum effects, interference and entanglement. Interference of quantum particles can itself be tested by the double slit experiment. For the double slit experiment you shoot a coherent beam of particles at a plate with two thin openings, that’s the double slit. On the screen behind it, you then observe several lines, usually five or seven, but not two. This is an interference pattern created by overlapping waves. When a crest meets a trough, the waves cancel and that makes a dark spot on the screen. When crest meets crest they add up and that makes a bright spot.

The amazing thing about the double slit is that you get this pattern even if you let only one particle at a time pass through the slits. This means that even single particles act like waves. We therefore describe quantum particles with a wave-function, usually denoted psi. The interesting thing about the double-slit experiment is that if you measure which slit the particles go through, the interference pattern disappears. Instead the particles behave like particles again and you get two blobs, one from each of the slits.

Well, actually you don’t. Though you’ve almost certainly seen that elsewhere. Just because you know which slit the wave-function goes through doesn’t mean it stops being a wave-function. It’s just no longer a wave-function going through two slits. It’s now a wave-function going through only one slit, so you get a one-slit diffraction pattern. What’s that? That’s also an interference pattern, but a fuzzier one that indeed looks mostly like a blob. But a very blurry blob. And if you add the blobs from the two individual slits, they’ll overlap and still pretty much look like one blob. Not, as you see in many videos, two cleanly separated ones.

You may think this is nitpicking, but it’ll be relevant to understanding the quantum eraser, so keep this in mind. It’s not so relevant for the double slit experiment, because regardless of whether you think it’s one blob or two, the sum of the images from both separate slits is not the image you get from both slits together. The double slit experiment therefore shows that in quantum mechanics, the result of a measurement depends on what you measure. Yes, that’s weird.
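The point about the blobs can be made quantitative with the standard Fraunhofer diffraction formulas. The slit width and separation below are made-up illustrative values, not numbers from any particular experiment:

```python
import numpy as np

# u = sin(theta)/wavelength on the screen; slit width a and separation d
# are assumed, illustrative values, chosen so the slits sit close together.
a, d = 2e-6, 6e-6                      # 2 um slits, 6 um apart
u = np.linspace(-1.5e6, 1.5e6, 6001)

envelope = np.sinc(a * u) ** 2         # one-slit pattern: a blurry blob
fringes = np.cos(np.pi * d * u) ** 2   # interference of the two paths
double_slit = envelope * fringes       # both slits open, coherent

# Adding the images from the two slits measured separately just doubles
# the envelope: one broad blob, not two cleanly separated ones, and not
# the fringed pattern you get with both slits open at once.
sum_of_singles = 2 * envelope
```

The coherent pattern has dark fringes where the incoherent sum has none, which is exactly the sense in which the measurement result depends on what you measure.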

The other ingredient that you need for the quantum eraser is entanglement. I have talked about entanglement several times previously, so let me just briefly remind you: entangled particles share some information, but you don’t know which particle has which share until you measure it. It could be for example that you know the particles have a total spin of zero, but you don’t know the spin of each individual particle. Entangled particles are handy because they allow you to measure quantum effects over large distances which makes them super extra weird.

Okay, now to the quantum eraser. You take your beam of particles, usually photons, and direct it at the double slit. After the double slit you place a crystal that converts each single photon into a pair of entangled photons. From each pair you take one and direct it onto a screen. There you measure whether they interfere. I have drawn the photons which come from the two different places in the crystal with two different colors. But this is just so it’s easier to see what’s going on; these photons actually have the same color.

If you create these entangled pairs after the double slit, then the wave-function of the photon depends on which slit the photons went through. This information comes from the location where the pairs were created and is usually called the “which way information”. Because of this which-way information, the photons on the screen can’t create an interference pattern.

What’s with the other side of the entangled particles? That’s where things get tricky. On the other side, you measure the particles in two different ways. In the first case, you measure the which-way information directly, so you have two detectors, let’s call them D1 and D2. The first detector is on the path of the photons from the left slit, the second detector on the path of the photons from the right slit. If you measure the photons with detectors D1 and D2, you see no interference pattern.

But alternatively you can turn off the first two detectors, and instead combine the two beams in two different ways. These two white bars are mirrors and just redirect the beam. The semi-transparent one is a beam splitter. This means half of the photons go through, and the other half is reflected. This looks a little confusing but the point is just that you combine the two beams so that you no longer know which way the photon came. This is the “erasure” of the “which way information”. And then you measure those combined beams in detectors D3 and D4. A measurement on one of those two detectors does not tell you which slit the photon went through.

Finally, you measure the distribution of photons on the screen that are entangled partners of those photons that went to D3. These photons create an interference pattern. You can alternatively measure the distribution of photons on the screen that are partner particles of those photons that went to D4. Those will also create an interference pattern.

This is the “quantum erasure”. It seems you’ve managed to get rid of the which way information by combining those paths, and that restores the interference pattern. In the delayed choice quantum eraser experiment, the erasure happens well after the entangled partner particle hit the screen. This is fairly easy to do just by making the paths of those photons long enough.

If you watch the other videos about this experiment on YouTube, they’ll now go on to explain that this seems to imply that the choice of what you measure on the one side of the experiment decides what happened on the other side before you even made that choice. Because the photons must have known whether to interfere or not before you decided whether to erase the which-way information. But this is clearly nonsense. Because, let’s rewind this explanation to the beginning.

The photons on the screen can’t create an interference pattern. Everything I told you after this is completely irrelevant. It doesn’t matter at all what you do on the other side of the experiment. The photons on the screen will always create the same pattern. And it’ll never be an interference pattern.

Wait. Didn’t I just tell you that you do get an interference pattern if you use detectors D3 and D4? Indeed. But I’ve omitted a crucial part of the information which is missing in those other YouTube videos. It’s that those interference patterns are not the same. And if you add them, you get exactly the same as you get from detectors 1 and 2. Namely these two overlapping blurry blobs. This is why it matters that you know the combined pattern of two single slits doesn’t give you two separate blobs, as they normally show you.

What you actually do in the eraser experiment is sample the photon pairs in two groups, and you can do that in two different ways. If you use detectors 1 and 2, you sample them so that the entangled partners on the screen do not create an interference pattern for each detector separately. If you use detectors 3 and 4, they each separately create an interference pattern, but together they don’t.
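That sorting can be sketched with the same textbook diffraction formulas; everything here is illustrative, with assumed slit geometry, not data from the actual experiment:

```python
import numpy as np

# u = sin(theta)/wavelength on the screen; slit geometry is assumed.
u = np.linspace(-1.5e6, 1.5e6, 6001)
envelope = np.sinc(2e-6 * u) ** 2  # the one-slit "blurry blob"

# Sorting the screen photons by where their entangled partner ended up
# splits them into two complementary fringe patterns:
d3_partners = 0.5 * envelope * np.cos(np.pi * 6e-6 * u) ** 2
d4_partners = 0.5 * envelope * np.sin(np.pi * 6e-6 * u) ** 2

# Each subset shows fringes, but the two patterns are shifted against
# each other, and together they give back the fringe-free blob -- the
# same total you get with detectors D1 and D2. Nothing on the screen
# ever changes; only the bookkeeping does.
total = d3_partners + d4_partners
assert np.allclose(total, 0.5 * envelope)
```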

This means that the interference pattern really comes from selectively disregarding some of the particles. That this is possible has nothing to do with quantum mechanics. I could throw coins on the floor and then later decide to disregard some of those and create any kind of pattern. Clearly this doesn’t rewrite the past.

This, by the way, has nothing to do with the particular realization of the quantum eraser experiment that I've discussed. This experiment has been done in a number of different ways, but what I just told you is generally true: the interference patterns will always combine to give the original non-interference pattern.

This is not to say that there is nothing weird going on in this experiment. But what's weird about it is the same thing that's already weird about the normal double slit experiment. Namely, the wave-function of a single particle spreads out in space, yet when you measure it, the particle is suddenly in one particular place, and the outcome must be correlated throughout space and fit the measurement setting. I actually think the bomb experiment is far weirder than the quantum eraser. Check out my earlier video for more on that.

When I was working on this video I thought certainly someone must have explained this before. But the only person I could find who’d done that is… Sean Carroll in a blogpost two years ago. Yes, you can trust Sean with the quantum stuff. I’ll leave you a link to Sean’s piece in the info.

## Wednesday, October 27, 2021

Many of you have sent me notes asking what happened to the comments. Comments are permanently off on this blog. I just don't have the time to deal with it. In all honesty, since I have turned them off my daily routine has considerably improved, so they'll remain off. If you've witnessed the misery in my comment sections, you probably saw this coming.

This problem has been caused not so much by commenters themselves as by Google's miserable commenting platform, which doesn't allow blocking or managing problematic people in any way. It adds to this that the threaded comments are terrible to read, and that you have to know to click on "LOAD MORE" after 200 comments just to see all replies is a remarkably shitty piece of coding.

I am genuinely sorry about this development because over the years I have come to value the feedback from many of you and I feel like I've lost some friends now. At some point I want to move this blog to a different platform and also write some other stuff again, rather than just posting transcripts. But at the moment I don't have the time.

Having said that, I will from now on cross-post transcripts of my videos on Patreon, where you can interact with me and other Patreons for as little as 2 Euro a month. Hope to see you there.

## Saturday, October 23, 2021

### Does Captain Kirk die when he goes through the transporter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Does Captain Kirk die when he goes through the transporter? This question has kept me up at night for decades. I’m not kidding. And I still don’t have an answer. So this video isn’t going to answer the question, but I will explain why it’s more difficult than you may think. If you haven’t thought about this before, maybe pause the video for a moment and try to make up your mind. Do you think Kirk dies when he goes through the transporter? Let me know if at the end of this video you’ve changed your mind.

So how does the transporter work? The idea is that the person who enters a transporter is converted into an energy pattern that contains all the information. That energy can be sent or "beamed" at the speed of light. And once it's arrived at its final destination, it can be converted back into the person.

Now of course energy isn't something in and of itself. Energy, like momentum or velocity, is a property of something. This means the beam has to be made of something. But that doesn't really matter for the purpose of transportation; it only matters that the beam can contain all the information about the person and can be sent much faster and much more easily than you could send the person in their material form.

Current technology is far, far away from being able to read out all the information that's necessary to build up a human being from elementary particles. And even if we could do that, it'd take ridiculously long to send that information anywhere. According to a glorious paper by a group of students from the University of Leicester, assuming a bandwidth of about 30 gigahertz, just sending the information of a single cell would take more than 10^15 years, and that's not counting travel time. Just for comparison, the age of the universe is about 10^10 years. So, even if you increased the bandwidth by a factor of a quadrillion, it'd still take at least a year just to move a cell one meter to the left.
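
You can check the scaling of this estimate yourself. The numbers below are illustrative: `bits_per_cell` is an assumed value chosen to match the quoted order of magnitude, not the figure from the Leicester paper, and the channel is assumed to carry one bit per cycle.

```python
# Rough order-of-magnitude sketch of the transfer-time estimate.
# bits_per_cell is an assumption chosen to illustrate the scaling;
# the Leicester paper's actual numbers differ in detail.
bits_per_cell = 1e33          # assumed information content of one cell (bits)
bandwidth_bps = 30e9          # ~30 GHz channel, one bit per cycle (assumption)
seconds_per_year = 3.15e7

transfer_years = bits_per_cell / bandwidth_bps / seconds_per_year
print(f"{transfer_years:.1e} years")  # on the order of 10^15 years
```

Even a quadrillion-fold speedup only divides that number by 10^15, which is why the conclusion survives any realistic improvement in bandwidth.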

Clearly we're not going to build a transporter any time soon, but from the perspective of physics there's no reason why it should not be possible. I mean, what makes you you is not a particular collection of elementary particles. Elementary particles are identical to each other. What makes you you is the particular arrangement of those particles. So why not just send that information instead of all the particles? That should be possible.

And according to the best theories that we currently have, that information is entirely contained in the configuration of the particles at any one moment in time. That’s just how the laws of nature seem to work. Once we know the exact state of a system at one moment, say the position and velocity of an apple, then we can calculate what happens at any later time, say, where the apple will fall. I talked about this in more detail in my video about differential equations, so check this out for more.

For the purposes of this video you just need to know that the idea that all the information about a person is contained in the exact configuration at one moment in time is correct. This is also true in quantum mechanics, though quantum mechanics brings in a subtlety that I will get to in a moment.

So, what happens in the transporter is “just” that you get converted into a different medium, all cell and brain processes are put on pause, and then you’re reassembled back and all those processes continue exactly as before. For you, no time has passed, you just find yourself elsewhere. At first sight it seems, Kirk doesn’t die when he goes through the transporter, it’s just a conversion.

But. There’s no reason why you have to convert the person into something else when you read out the information. You can well imagine that you just read out the information, send it elsewhere, and then build a person out of that information. And then, after you’ve done that, you blast the original person into pieces. The result is exactly the same. It’s just that now there’s a time delay between reading out the information and converting the person into something else. Suddenly it looks like Kirk dies and the person on the other end is a copy. Let’s call this the “Copy Argument”.

It might be that this isn't possible though. For one, the exact state of a system at any one moment in time doesn't only tell you what the system will do in the future, it also tells you what it's done in the past. This means, strictly speaking, that copying a system elsewhere would require you to also reproduce its entire past, which isn't possible.

However, you could say that the details of the past don’t matter. Think of a pool table. Balls are rolling around and bouncing off each other. Now imagine that at one particular moment, you record the exact positions and velocities of those balls. Then you can place other balls on another pool table at the right places and give them the correct kick. This should produce the same motion as on the original table, in principle exactly. And that’s even though the past of the copied table isn’t the same because the velocities of the balls came about differently. It’s just that this difference doesn’t matter for the motion of the balls.
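
The pool-table argument can be sketched in a few lines. This is a toy model under stated simplifications: frictionless balls bouncing off the rails, with ball-ball collisions omitted for brevity.

```python
# Toy version of the pool-table argument: copying the exact state
# (positions and velocities) at one instant reproduces all future motion,
# regardless of how that state came about.
def step(balls, dt=0.01):
    out = []
    for x, y, vx, vy in balls:
        x, y = x + vx * dt, y + vy * dt
        if not 0.0 <= x <= 2.0: vx = -vx  # bounce off the short rails
        if not 0.0 <= y <= 1.0: vy = -vy  # bounce off the long rails
        out.append((x, y, vx, vy))
    return out

table = [(0.5, 0.5, 1.0, 0.3), (1.5, 0.2, -0.7, 0.9)]
copied_table = list(table)     # "read out" the full state at one moment

for _ in range(1000):          # evolve both tables independently
    table = step(table)
    copied_table = step(copied_table)

print(table == copied_table)   # identical states give identical futures
```

The copied table never shared the original's past, yet its future is indistinguishable, which is exactly the point of the argument.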

Can one do the same for elementary particles? I don’t think so. But maybe you can do it for atoms, or at least for molecules, and that might be enough.

But there’s another reason you might not be able to read out the information of a person without annihilating them in that process, namely that quantum mechanics says that this isn’t possible. You just can’t copy an arbitrary quantum state exactly. However, it’s somewhat questionable whether this matters for people because quantum effects don’t seem to be hugely relevant in the human body. But if you think that those quantum effects are relevant, then you simply cannot copy the information of a person without destroying the original. So in that case the Copy Argument doesn’t work and we’re back to Kirk lives. Let’s call this the No-Copy Argument.

However… there’s another problem. The receiving side of the transporter is basically a machine that builds humans out of information. Now, if you don’t have the information that makes up a particular person, it’s incredibly unlikely you will correctly assemble them. But it’s not impossible. Indeed, if such machines are possible at all and the universe is infinitely large, or if there are other universes, then somewhere there will be a machine that will coincidentally assemble you. Even though the information was never beamed there in the first place. Indeed, this would happen infinitely often.

So you can ask what happens with Kirk in this case. He goes into the transporter and disappears. But copies of him appear elsewhere, coincidentally, even though the information of the original was never read out. You can conclude from this that it doesn't really matter whether you actually read out the information in the first place. The No-Copy Argument fails, and it looks again like the Kirk we care about dies.

There are various ways people have tried to make sense of this conundrum. The most common one is abandoning our intuitive idea of what it means to be yourself. We have this idea that our experience is continuous and if you go into the transporter there has to be an answer to what you experience next. Do you find yourself elsewhere? Or is that the end of your story and someone else finds themselves elsewhere? It seems that there has to be a difference between these two cases. But if there is no observable difference, then this just means we’re wrong in thinking that being yourself is continuous to begin with.

The other way to deal with the problem is to take our experience seriously and conclude that there is something wrong with physics. That the information about yourself is not contained in any one particular moment. Instead, what makes you you is the entire story of all moments, or at least some stretch of time. In that case, it would be clear that if you convert a person into some other physical medium and then reassemble it, that person’s experience remains intact. Whereas if you break that person’s story in space-time apart, by blasting them away at one place and assembling a copy elsewhere, that would not result in a continuous experience.

At least for me, this seems to make intuitively more sense. But this conflicts with the laws of nature that we currently have. And human intuition is not a good guide to understanding the fundamental laws of nature, quantum mechanics is exhibit A. Philosophers by the way are evenly divided between the possible answers to the question. In a survey, about a third voted for “death” another third for “survival” and yet another third for “other”. What do you think? And did this video change your mind? Let me know in the comments.

## Saturday, October 16, 2021

### Terraforming Mars in 3 Simple Steps

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

We have made great progress screwing up the climate on this planet, so the time is right to look for a new home on which to continue the successes of the human race. What better place could there be than our neighbor planet Mars. It’s a little cold and a little dusty and it takes seven months to get there, but otherwise it’s a lovely place, and with only 3 simple steps, it can be turned into an Earthlike place or be “terraformed” as they say. Just like magic. And that’s what we’ll talk about today.

First things first, Mars is about one hundred million kilometers farther away from the Sun than Earth. Its average temperature is minus 60 degrees Celsius or minus 80 Fahrenheit. Its atmosphere is very thin and doesn’t contain oxygen. That doesn’t sound very hospitable to life as we know it, but scientists have come up with a solution for our imminent move to Mars.

We'll start with the atmosphere, which actually poses two issues: the atmosphere of Mars is very thin, and it contains basically no oxygen. Instead, it's mostly carbon dioxide and nitrogen.

One reason the atmosphere is so thin is that Mars is smaller than Earth and its mass is only a tenth that of Earth. That’d make for interesting Olympic games, but it also makes it easier for gas to escape. This by the way is why I strongly recommend you don’t play with your anti-gravity device. You don’t want the atmosphere of Earth to escape, do you?

But that Mars is lighter than Earth is a minor problem. The bigger problem with the atmosphere of Mars is that Mars doesn’t have a magnetic field, or at least it doesn’t have one any more. The magnetic field of a planet, like the one we have here on Earth, is important because it redirects the charged particles which the sun constantly emits, the so-called solar wind. Without that protection, the solar wind can rip off the atmosphere. That’s not good. Check out my earlier video about solar storms for more about how dangerous they can be.

That the solar wind rips off the atmosphere once the protection from the magnetic field fades away is what happened to Mars. Indeed, it's still happening. In 2015, NASA's MAVEN spacecraft measured the slow loss of atmosphere from Mars; they estimate it to be 100 grams per second. This constant loss is balanced by the evaporation of gas from the crust of Mars, so the pressure has stabilized at a few millibar. For comparison, the atmospheric pressure on the surface of Earth is approximately one bar.
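
As a rough plausibility check, you can estimate the total mass of the present Martian atmosphere from its surface pressure and ask how long it would survive at the quoted loss rate if nothing replenished it. All inputs below are rounded textbook values, not figures from the MAVEN team.

```python
# Order-of-magnitude sketch: mass of Mars's atmosphere from the
# column-weight balance (pressure = mass * g / area), and its lifetime
# at the measured loss rate, assuming no replenishment from the crust.
pressure = 600.0        # average surface pressure, Pa (~6 millibar, assumed)
g_mars = 3.71           # surface gravity, m/s^2
area = 1.44e14          # surface area of Mars, m^2
loss_rate = 0.1         # atmospheric loss, kg/s (100 g/s, MAVEN estimate)

atmosphere_mass = pressure * area / g_mars
lifetime_years = atmosphere_mass / loss_rate / 3.15e7
print(f"{atmosphere_mass:.1e} kg, ~{lifetime_years:.0e} years")
```

That the lifetime comes out at billions of years shows why a thin, stabilized atmosphere can persist even while the solar wind keeps chipping away at it.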

Therefore, before we try to create an atmosphere on Mars, we first have to create a magnetic field, because otherwise the atmosphere would just be wiped away again. How do you create a magnetic field for a planet? Well, physicists figured out magnetic fields two centuries ago, and it's really straightforward.

In a paper that was just published in April in the International Journal of Astrobiology, two physicists explain that all you have to do is put a superconducting wire around Mars. Simple enough, isn't it? The circle would have to have a radius of about 3400 kilometers, but the diameter of the collected wires only needs to be about five centimeters. Well, okay, you need insulation and a refrigeration system to keep it superconducting. And you need a power station to generate a current. But other than that, no fancy technology required.

That superconducting wire would have a weight of about one million tons, which is only about 100 times the total weight of the Eiffel Tower. The researchers propose to make it of bismuth strontium calcium copper oxide (BSCCO). Where do you get so much bismuth from? Asteroid mining. Piece of cake.

Meanwhile on Earth. Will Cutbill from the UK earned an entry into the Guinness Book of World Records by stacking five M&Ms on top of each other.

Back to Mars. With the magnetic field in place, we can move to step 2 of terraforming Mars, creating an atmosphere. This can be done by releasing the remaining carbon dioxide that’s stored in frozen caps on the poles and in the rocks. In 2018, a group of American researchers published a paper in Nature in which they estimate that using the most wildly optimistic assumptions this would get us to about twenty percent of the atmospheric pressure on earth.

Leaving aside that no one knows how to release the gas, doing so would lead to a moderate greenhouse effect. It would increase the average temperature on Mars by about 10 Kelvin, to a balmy minus 50 Celsius. That still seems a little chilly, but I hear that fusion power is almost there, so I guess we can heat with that.

Meanwhile on Earth. Visitors of London can now enjoy a new tourist attraction: a man-made hill, 30 meters high, from which you have a great view of… construction sites.

Back to Mars. Okay, so we have a magnetic field and have created some kind of atmosphere by releasing carbon dioxide, with the added benefit of increasing the average temperature by a few degrees. The remaining problem is that we can't breathe carbon dioxide. I mean, we can, but not for very long. So step 3 of terraforming Mars is converting carbon dioxide into oxygen. The only thing we need to do for this is grow a sufficient amount of plants.

There's the issue that plants tend not to flourish at minus fifty degrees, but that's easy to fix with a little genetic engineering. Plants as we know them also need a range of nutrients they normally get from soil, most importantly nitrogen, phosphorus, and potassium. Luckily, those are present on Mars. The bigger problem may be that the soil on Mars is too thin and too hard, which makes it difficult for plants to grow roots. It also retains water very poorly, so you'd have to water the plants very often. How do you water plants at minus 50 degrees? Good question!

Meanwhile on Earth you can buy fake Mars soil and try your luck growing plants on it yourself!

Ok, so I admit that the last bit with the plants was a tiny bit sketchy. But there might be a better way to do it. In July 2019 researchers from JPL, Harvard and Edinburgh University published a paper in Nature in which they proposed to cover patches of Mars with a thin layer of aerogel.

An aerogel is a synthetic material which contains a lot of gas. It is super light and has an extremely low thermal conductivity, which means it could keep the surface of Mars warm. The gel would be transparent to visible light but somewhat opaque in the infrared, so it could create an enhanced greenhouse effect directly on the surface. That would heat up the surface, which would release more carbon dioxide. The carbon dioxide would accumulate under the gel, and then plants should be able to grow in that space. So, we're not talking about oaks, but more like algae or something that covers the ground.

In their paper, the researchers estimate that a layer of about 3 centimeters aerogel could raise the surface temperature of Mars by about 45 Kelvin. With that the average temperature on Mars would still be below the freezing point of water, but in some places it might rise above it. Sounds great! Except that the atmospheric pressure is so low that the liquid water would start boiling as soon as it melts.
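
The claim that melted water would immediately boil follows from how the boiling point drops with pressure. A minimal sketch, using the Clausius-Clapeyron relation with the latent heat treated as constant (a rough approximation):

```python
import math

# Boiling point of water at low pressure from Clausius-Clapeyron:
#   1/T = 1/T0 - (R / (L*M)) * ln(P / P0)
# Latent heat is assumed constant, which is only approximately true.
L = 2.45e6        # latent heat of vaporization of water, J/kg (assumed)
M = 0.018         # molar mass of water, kg/mol
R = 8.314         # gas constant, J/(mol K)
P0, T0 = 101325.0, 373.15   # boiling at 1 bar: 100 C

def boiling_point(P):
    return 1.0 / (1.0 / T0 - R / (L * M) * math.log(P / P0))

T_mars = boiling_point(600.0)   # ~6 millibar, generous Mars surface pressure
print(f"{T_mars - 273.15:.0f} C")
```

The boiling point lands within a degree or so of the melting point, which is why liquid water on the Martian surface boils essentially as soon as the ice melts.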

So as you see, our move to Mars is well under way. Better pack your bags, see you there!

## Saturday, October 09, 2021

### How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience” but in this episode I will take it to mean theories that are in conflict with evidence, but that promoters believe in, either by denying the evidence, or denying the scientific method, or maybe just because they have no idea what either the evidence or the scientific method is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren't just pseudoscience, they were often fatal. The literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. However, physicians also prescribed tapeworms for weight loss. Though in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on earth today broke apart from one large tectonic plate. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling on to falsified theories longer than reasonable. And some crazy ideas in the end turn out to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer's claims. For this, they blindfolded a group of patients. They told some of them they'd get a treatment but then did nothing, and gave others a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In eighteen thirty-five, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100 would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed and then conducted the following experiment.

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
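
In modern terms, von Hoven's result can be checked with a one-line significance calculation. If the dilution did nothing, each of the 8 reports is equally likely to come from either group, so the count follows a binomial distribution; this is a simplification of the exact hypergeometric treatment, but close enough for a 50/50 assignment.

```python
from math import comb

# Probability that 5 or more of 8 reports come from the homeopathy
# group by pure chance (binomial, p = 1/2 per report).
n, k = 8, 5
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(5 or more of 8 by chance) = {p_value:.3f}")  # about 0.363
```

A p-value around 0.36 is far above any conventional significance threshold, which is exactly what "the effect wasn't statistically significant" means.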

Von Hoven wasn't alone with his debunking passion. He was a member of the "society of truth-loving men". That was one of the skeptical societies that had popped up to counter the spread of quackery and fraud in the 19th century. The society of truth-loving men no longer exists. But the oldest such society that still exists today was founded as far back as 1881 in the Netherlands. It's called the Vereniging tegen de Kwakzalverij, literally the "Society Against Quackery". This society gave out an annual prize called the Master Charlatan Prize to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress, for example the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we today calculate statistical significance.
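
Richet's protocol is easy to simulate. In the sketch below, both the drawn card and the guess are independent random picks from a 52-card deck, i.e. the "no telepathy" case, and the observed hit rate is compared with the chance rate of 1/52.

```python
import random

# Simulated card-guessing trial with no telepathy: the hit rate should
# hover near pure chance (1/52). Seeded for reproducibility.
random.seed(0)
trials = 100_000
hits = sum(random.randrange(52) == random.randrange(52) for _ in range(trials))

chance = 1 / 52
print(f"observed {hits / trials:.4f} vs chance {chance:.4f}")
```

A real telepathy claim would have to show a hit rate that deviates from 1/52 by far more than the statistical fluctuation of the sample, which is the comparison Richet's setup makes possible.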

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud's psychoanalysis weren't proper science. Now, of course we know today that falsification is not the best way to go about it, but Popper's work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

## Saturday, October 02, 2021

### How close is nuclear fusion power?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Today I want to talk about nuclear fusion. I've been struggling with this video for a while. This is because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that's what we'll talk about today.

There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect, how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, that’s the ratio of what comes out over what goes in and is usually denoted Q. If the energy gain is larger than 1 you create net energy. The point where Q reaches 1 is called “Break Even”.

The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record was 0.67. It was set in nineteen ninety-seven by the Joint European Torus, JET for short.

The most prominent fusion experiment that’s currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a Gain of 10. Here is an example from a 2019 article in the Guardian by Phillip Ball who writes
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”

Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”

So this looks like we're close to actually creating energy from fusion, right? No, wrong.

Remember that nuclear fusion is the process by which the sun creates power. The sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on earth so we have to find some other way. The currently most widely used technology for nuclear fusion is heating the fuel in strong magnetic fields until it becomes a plasma. The temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are some other methods but they haven’t gotten very far in research and development.

The confusion, which you find in pretty much all popular science writing about nuclear fusion, is that the energy gain they quote refers only to the energy that goes into the plasma and comes out of the plasma.

In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma.

Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you confine the plasma with a magnetic field in a vacuum, you need to power giant magnets, cool them, and maintain them. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.

The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q total is much lower than the Q plasma.
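To keep the two quantities apart, it helps to write them down. This is a sketch, with η standing for the heat-to-electricity conversion efficiency (the symbols here are mine, not notation from the text):

```latex
Q_{\text{plasma}} = \frac{P_{\text{fusion}}}{P_{\text{heating of plasma}}},
\qquad
Q_{\text{total}} = \frac{\eta\, P_{\text{fusion}}}{P_{\text{total into reactor}}}
```

Since the total power going into the reactor is much larger than the heating power that actually reaches the plasma, and η is optimistically about 0.5, the Q-total comes out much smaller than the Q-plasma.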

If you didn’t know this, you’re not alone. I didn’t know this until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science. The total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions in research funding.

It’s not like we are the first to point out that this is a problem. I want to read you some words from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.

In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”
They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”
So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity Q-total. How big is the difference? Let us look at ITER as an example.

You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 megawatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 megawatts while it produces fusion power. That gives us an estimate for the total energy that goes in.

Though that is misleading already, because 120 of those 440 megawatts are consumed whether or not there’s any plasma in the reactor, so using this number assumes the reactor would be running permanently. But okay, let’s leave this aside.

The plan is that ITER will generate 500 megawatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 megawatts of electric power.

That gives us a Q-total of about 0.57. That’s less than a tenth of the normally stated Q-plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What’s with the earlier claim of a Q of 0.67 for the JET experiment? Same thing.

If you look at the total energy, JET consumed more than 700 megawatts of electricity to get its 16 megawatts of fusion power, and that’s heat, not electricity. So if you again assume 50 percent efficiency in the heat-to-electricity conversion, you get a Q-total of about 0.01, and not the claimed 0.67.
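The arithmetic from the last few paragraphs can be collected in a small sketch. The 50% heat-to-electricity efficiency is the optimistic assumption from the text, and the function name is mine:

```python
def q_total(fusion_heat_mw, total_input_mw, heat_to_electric=0.5):
    """Total energy gain: electric power out over total electric power in."""
    return fusion_heat_mw * heat_to_electric / total_input_mw

# ITER: 500 MW of fusion heat, ~440 MW total consumption
print(round(q_total(500, 440), 2))  # about 0.57

# JET in 1997: 16 MW of fusion heat, more than 700 MW of electricity consumed
print(round(q_total(16, 700), 2))   # about 0.01
```

The same function, fed the plasma heating power instead of the total input power, reproduces the much larger Q-plasma numbers from the headlines, which is the whole confusion in one line of code.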

And those recent headlines about the NIF success? Same thing again. It’s the Q-plasma that is 0.7. That’s calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much. If they’d upgrade their lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1.

Of course, the people who work on this know the distinction perfectly well. But I can’t shake the impression that they quite like the confusion between the two Qs. Here is, for example, a quote from Holtkamp, who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”
Here is Nick Walkden from JET in a TED talk, referring to ITER: “ITER will produce ten times the power out from fusion energy than we put into the machine.” And: “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1, not 10, but still getting close.”

But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:

[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]: Okay, so ITER will have delivered in that full demonstration that we could have okay 500 megawatt coming out of the 50 megawatt we will put in.
What are we to make of all this?

Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest in, because we have limited resources. For this, it is super important that we focus on the relevant question: Will it deliver net energy to the grid?

There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders”, which does the same thing again (review here). It only briefly mentions the total energy gain, and never gives you a number. This misinformation has to stop.

If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it. Thanks for watching, see you next week.

## Wednesday, September 29, 2021

### [Guest Post] Brian Keating: How to Think Like a Nobel Prize Winner

[The following is an excerpt from Think Like a Nobel Prize Winner, Brian Keating’s newest book based on his interviews with 9 Nobel Prize winning physicists. The book isn’t a physics text, nor even a memoir like Keating’s first book Losing the Nobel Prize. Instead, it’s a self-help guide for technically minded individuals seeking to ‘level-up’ their lives and careers.]

When 2017 Nobel Prize winner Barry Barish told me he had suffered from the imposter syndrome, the hair stood up on the back of my neck. I couldn’t believe that one of the most influential figures in my life and career—as a scientist, as a father, and as a human—is mortal. He sometimes feels insecure, just like I do. Every time I’m teaching, in the back of my head, I am thinking, who am I to do this? I always struggled with math, and physics never came naturally to me. I got where I am because of my passion and curiosity, not my SAT scores. Society venerates the genius. Maybe that’s you, but it’s certainly not me.

I’ve always suffered from the imposter syndrome. Discovering that Barish did too, even after winning a Nobel Prize—the highest regard in our field and in society itself—immensely comforted me. If he was insecure about how he compared to Einstein, I wanted to comfort him: Einstein was in awe of Isaac Newton, saying Newton “... determined the course of Western thought, research, and practice like no one else before or since.” And compared to whom did Newton feel inadequate? Jesus Christ almighty!

The truth is, the imposter syndrome is just a normal, even healthy, dose of inadequacy. As such, we can never overcome or defeat it, nor should we try to. But we can manage it through understanding and acceptance. Hearing about Barry’s experience allowed me to do exactly that, and I hoped sharing that message would also help others manage better. This was the moment I decided to create this book.

This isn’t a physics book. These pages are not for aspiring Nobel Prize winners, mathematicians, or any of my fellow geeks, dweebs, or nerds. In fact, I wrote it specifically for nonscientists—for those who, because of the quotidian demands of everyday life, sometimes lose sight of the biggest-picture topics humans are capable of learning about and contributing to. Most of all, I hope by humanizing science, by showing the craft of science as performed by its master practitioners, you, my reader, will see common themes emerge that will boost your creativity, stoke your imagination, and most of all, help overcome barriers like the imposter syndrome, thereby unlocking your full potential for out-of-this-universe success.

Though I didn’t write it for physicists, it’s appropriate to consider why the subjects of this book—who are all physicists—are good role models. Physicists are mental Swiss Army knives, or a cerebral SEAL Team Six. We dwell in uncertainty. We exist to solve problems.

We are not the best mathematicians (just ask a real mathematician). We’re not the best engineers. We also aren’t the best writers, speakers, or communicators—but no single group can simultaneously do all of these disparate tasks so well as the physicists I’ve compiled here. That’s what makes them worth listening to and learning from. I sure have.

The individuals in this book have balanced collaboration with competition. All scientists stand on the proverbial shoulders of giants of the past and present. Yet some of the most profound moments of inspiration do breathe magic into the equation of a single individual one unique time. There is a skill to know when to listen and when to talk, for you can’t do both at the same time. These scientists have navigated the challenging waters between focus and diversity, balancing intellectual breadth with depth, which are challenges we all face. Whether you’re a scientist or a salesman, you must “niche down” to solve problems. (Imagine trying to sell every car model made!)

I wrote this book for everyone who struggles to balance the mundane with the sublime—who is attending to the day-to-day hard work and labor of whatever craft they are in while also trying to achieve something greater in their profession or in life. I wanted to deconstruct the mental habits and tactics of some of society’s best and brightest minds in order to share their wisdom with readers—and also to show readers that they’re just like us. They struggle with compromise. They wrestle with perfection. And they aspire always to do something great. We can too.

By studying the habits and tactics of the world’s brightest, you can recognize common themes that apply to your life— even if the subject matter itself is as far removed from your daily life as a black hole is from a quark. Honestly, even though I am a physicist, the work done by most of the subjects in this book is no more similar to my daily work than it is to yours, and yet I learned much from them about issues common between us. These pages include enduring life lessons applicable to anyone eager to acquire new the true keys to success!

HOW IT ALL BEGAN

A theme pops up throughout these interviews regarding the connection between teaching and learning. In the Russian language, the word for “scientist” translates into “one who was taught.” That is an awesome responsibility with many implications. If we were taught, we have an obligation to teach. But the paradox is this: To be a good teacher, you must also be a good student. You must study how people learn in order to teach effectively. And to learn, you must not only study but also teach. In that way, I also have a selfish motivation behind this book: I wanted to share everything I learned from these laureates in order to learn it even more durably. Mostly, however, I see this book as an extension of my duty as an educator. That’s also how the podcast Into the Impossible began.

I’ve always had an insatiable curiosity about learning and education, combined with the recognition that life is short and I want to extract as much wisdom as I can while I can.

As a college professor, I think of teachers as shortcuts in this endeavor. Teachers act as a sort of hack to reduce the amount of time otherwise required to learn something on one’s own, compressing and making the learning process as efficient as possible—but no more so. In other words, there is a value in wrestling with material that cannot be hacked away.

As part of my duty as an educator, I wanted to cultivate a collection of dream faculty comprised of minds I wish I had encountered in my life. The next best thing to having them as my actual teachers is to learn from their interviews in a way that distills their knowledge, philosophy, struggles, tactics, and habits.

I started doing just that at UC San Diego in 2018 and realized I was extremely privileged to have access to some of the greatest minds in human history, ranging from Pulitzer Prize winners and authors to CEOs, artists, and astronauts. As the codirector of the Arthur C. Clarke Center for Human Imagination, I had access to a wide variety of writers, thinkers, and inventors from all walks of life, courtesy of our guest-speaker series. The list of invited speakers is not at all limited to the sciences. The common denominator is conversations about human curiosity, imagination, and communication from a variety of vantage points.

I realized it would be a missed opportunity if only those people who attended our live events benefited from these world-class intellects. So we supplemented their visiting lectures with podcast interviews, during which we explored topics in more detail. I started referring to the podcast as the “university I wish I’d attended where you can wear your pajamas and don’t incur student-loan debt.”

The goal of the podcast is to interview the greatest minds for the greatest number of people. My very first guest was the esteemed physicist Freeman Dyson. I next interviewed science-fiction authors, such as Andy Weir and Kim Stanley Robinson; poets and artists, including Herbert Sigüenza and Rae Armantrout; astronauts, such as Jessica Meir and Nicole Stott; and many others. Along the way, I also started to collect a curated subset of interviews with Nobel Prize–winning physicists.

Then in February 2020, my friend Freeman Dyson died. Dyson was the prototype of a truly overlooked Nobel laureate. His contributions to our understanding of the fundamentals of matter and energy cannot be overstated, yet he was bypassed for the Nobel Prize he surely deserved. I was honored to host him for his winter visits to enjoy La Jolla’s sublime weather.

Freeman’s passing lent an incredible sense of urgency to my pursuits, forcing me to acknowledge that most prize-winning physicists are getting on in years. I don’t know how to say this any other way, but I started to feel sick to my stomach, thinking that I might miss an opportunity to talk to some of the most brilliant minds in history who, because of winning the Nobel Prize, have had an outsized influence on society and culture. So in 2020, I started reaching out to them. Most said yes, although sadly, both of the living female Nobel laureate physicists declined to be interviewed. I’m incredibly disappointed not to have female voices in this book, but it’s due to the reality of the situation and not for lack of trying.

A year later, I had this incredible collection of legacy interviews with some of the most celebrated minds on the planet. T.S. Eliot once said, “The Nobel is a ticket to one’s own funeral. No one has ever done anything after he got it.” No one proves that idea more wrong than the physicists in this book. It’s a rarefied group of individuals to learn from—especially when the focus is on life lessons instead of their research. It would be a dereliction of my intellectual duty not to preserve and share them.

HOW TO APPROACH THIS BOOK

These chapters are not transcripts. From the lengthy interviews I conducted with each laureate, I pulled all of the bits exemplifying traits worthy of emulation. Then, after each exchange, I added context or shared how I have been affected by that quote or idea. I have also edited for clarity, since spoken communication doesn’t always translate directly to the page.

All in all, I have done my best to maintain the authenticity of my exchanges with my guests. For example, you’ll notice that my questions don’t always relate to the take-away. Conversations often go in unexpected directions. I could’ve rephrased the questions for this book so they more accurately represented the laureates’ responses, but I didn’t want to misrepresent context. Still, any mistakes accidentally introduced are definitely mine, not theirs.

Each chapter contains a small box briefly explaining the laureate’s Prize-winning work—not because there will be a test at the end, but because it’s interesting context, and further, I know a lot of my readers will want to learn a bit of the fascinating science in these pages, considering the folks from whom you’ll be learning. Perhaps their work will ignite further curiosity in you. If that’s not you, feel free to skip these boxes. If you’re looking for more, I refer you to the laureates’ Nobel lectures at nobelprize.org. There, you will find their knowledge. But here, you will find examples of their wisdom—distilled and compressed into concentrated, actionable form.

Each interview ends with a handful of lightning-round questions designed to investigate more deeply, to provide you with insight into what these laureates are like as human beings. Often, these questions recur.

Further, you’ll find several recurrent themes from interview to interview, including the power of curiosity, the importance of listening to your critics, and why it’s paramount to pursue goals that are “useless.” I truly hope you’ll enjoy going out of this Universe, and the benefits it will bring to your life and career!

## Saturday, September 25, 2021

### Where did the Big Bang happen?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

The universe started with a Big Bang and it’s expanded ever since. You probably know this. You probably also know that the universe doesn’t have a center. But where did the big bang happen, if not in the center of the universe? And if the universe expands, doesn’t that mean that matter on the average doesn’t move, contrary to what Einstein said, that absolute rest doesn’t exist? I get these questions a lot. And at the end of this video, you’ll know the answers.

First of all, what’s the Big Bang? The Big Bang is what you get if you take Einstein’s equations and extrapolate the present state of the universe back in time. The universe presently expands, so if you go back in time it contracts, and the matter in it becomes more and more compressed. The equations say that when you’ve gone back about thirteen point seven billion years, you run into a singularity at which the density of matter must have been infinitely large. This moment is what we call the “Big Bang”.

There are two warnings I have to add when it comes to the “Big Bang”. First, I don’t know anybody who actually believes that this singularity is physically real. It probably just means that Einstein’s equations break down and must be replaced by something else. For this reason, physicists use the term “Big Bang” to refer to whatever it is that replaces the singularity to within a Planck time or so. A Planck time is about ten to the minus forty-four seconds.

Second, we don’t actually know that this extrapolation all the way back to the Big Bang is correct because we have no observations dating back to before roughly the creation of atomic nuclei. It could be that Einstein’s equations actually aren’t the right ones for the very early universe. So instead of a Big Bang it could also be that an earlier universe collapsed and then expanded again which is called a Big Bounce. Or there could have been an infinitely long time in which not much happened after which expansion suddenly began. That would also look much like a big bang. We just don’t know which one’s right. The “Big Bang” is just the simplest scenario you get when you naively extrapolate the equations back in time.

But if the Big Bang did happen, where did it happen? It seems that if the universe expands, it must have come out of some place, right? Well, no. Like so many popular science confusions, this one is created by the attempt to visualize what can’t be visualized.

To begin with, as I explained in an earlier video, the universe doesn’t expand into anything. So the image of an inflating balloon is very misleading. When we say that the universe expands, we’re talking about what happens inside the universe.

Therefore, that the universe expands is not a statement about the size of the universe as a whole. That wouldn’t make sense because in Einstein’s theory, the universe is infinitely large. It is infinitely large now and has always been infinitely large. That the universe expands means that the distances between locations in the universe increase. And that can happen even though the size is infinite.

Suppose you have an elastic strap with buttons on it, and imagine the strap is space and the buttons are galaxy clusters. If you stretch the strap, the distances between the buttons increase. That’s what it means for the universe to expand. It’s the intergalactic space that expands. Now just imagine the strap is 3-dimensional and infinitely large.

Okay, easier said than done, I know, but this is how the mathematics works. If you go back in time to the Big Bang, all distances, areas, and volumes go to zero. But this happens at every point in space. And the size of the universe is still infinite. How can the size of the universe possibly be infinite if all distances go to zero? Well, have a look at this line. That’s a stretch of the real numbers from zero to 1. That’s a set of infinitely many points, each of which has size zero. And yet the line doesn’t have length zero. Infinity is weird. If you add up infinitely many zeros you can get anything, including infinity. I talked more about infinity in an earlier video.
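For those who want the math behind the elastic-strap picture, here is a sketch using the spatially flat cosmological (FLRW) line element. The notation is standard textbook cosmology, not from the text above:

```latex
% Spatially flat FLRW metric: physical distances scale with a(t)
ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right)
% Physical distance between comoving coordinates x_1 and x_2:
d(t) = a(t)\,|x_2 - x_1| \;\to\; 0 \quad \text{as} \quad a(t) \to 0
```

The coordinate range itself stays infinite while a(t) shrinks, which is exactly the statement that all distances go to zero yet the size of the universe remains infinite.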

But in all honesty, I also find it somewhat hard to interpret the Big Bang in terms of distances. That’s why I prefer to think of it as the moment when the density of matter in the universe goes to infinity – everywhere.

But wait, didn’t you hear someone say that the universe was the size of a grapefruit at the Big Bang? They were referring only to the part of the universe that we can see today. The part that we can see has a finite size because light had only those 13.7 billion years to travel, so anything farther away from us than that, we can’t see it. We are in the middle of the part we can see just because light travels the same in all directions. The mass in the visible part of the universe is finite. And, yes, if there really was a Big Bang then all that mass was once compressed into a volume similar to that of a grapefruit or really whatever fruit you want. But the Big Bang still happened everywhere in that grapefruit.

Okay, but that brings up another problem. If the universe expands the same everywhere, then doesn’t this define a frame of absolute rest? Think back to that elastic band again. If you sit on one of the buttons, then you move “with the expansion of the universe” in some sense. It seems fair to say that this would correspond to zero velocity. But didn’t Einstein say that velocities are relative, and that you’re not supposed to talk about absolute velocities? I mean, that’s why it’s called “relativity”, right? Well, yes and no.

If you remember, Einstein really had two theories, first special relativity and then general relativity. Special relativity is the theory in which there is no such thing as absolute rest and you can only talk about relative velocities. But this theory does not contain gravity, which Einstein described as the curvature of space and time. If you want to describe gravity and the expansion of the universe, then you need to use general relativity.

In general relativity, matter, or all kinds of energy really, affects the geometry of space and time. And so, in the presence of matter the universe indeed gets a preferred direction of expansion. And you can be at rest with the universe. This state of rest is usually called the “co-moving frame”; that’s the reference frame that moves with the universe. This doesn’t disagree with Einstein at all.

What is the co-moving frame of the universe? It’s normally assumed to be the same as the rest frame of the cosmic microwave background, or at least very similar to it. So what you can do is measure the radiation of the cosmic microwave background that is coming at us from all directions. If we were at rest with the cosmic microwave background, the energy in that radiation should be the same in all directions. This isn’t the case, though; instead, we see that the radiation has somewhat more energy in one particular direction and less energy in the exact opposite direction. This can be attributed to our motion through the rest frame of the universe.

How fast do we move? Well, we move in many ways, because the earth is spinning and orbiting around the sun, which is orbiting around the center of the Milky Way. So really our direction constantly changes. But the Milky Way itself moves at about 630 kilometers per second relative to the cosmic microwave background. That’s about a million miles per hour. Where are we going? We’re moving towards something called “the great attractor”, and no one has any idea what that is or why we’re going there.

## Saturday, September 18, 2021

### The physics anomaly no one talks about: What’s up with those neutrinos?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

In the past months we’ve talked a lot about topics that receive more attention than they deserve. Today I want to talk about a topic that doesn’t receive the attention it deserves. That’s a 20-year-old anomaly in neutrino physics which has been above the discovery threshold since 2018, but chances are you’ve never even heard of it. So what are neutrinos, what’s going on with them, and what does it mean? That’s what we’ll talk about today.

I really don’t understand why some science results make headlines and others don’t. For example, we’ve seen loads of headlines about the anomaly in the measurement of the muon g-2 and the lepton anomaly at the Large Hadron Collider. In both of these cases the observations don’t agree with the prediction but neither is statistically significant enough to count as a new discovery, and in both cases there are reasons to doubt it’s actually new physics.

But in 2018, the MiniBooNE neutrino experiment at Fermilab confirmed an earlier anomaly from an experiment called LSND at the Los Alamos National Laboratory. The statistical significance of that anomaly is now at 6 σ. And in this case it’s really difficult to find an explanation that does not involve new physics. So why didn’t this make big headlines? I don’t know. Maybe people just don’t like neutrinos?

But there are lots of reasons to like neutrinos. Neutrinos are elementary particles in the standard model of particle physics. That they are elementary means they aren’t made of anything else, at least not for all we currently know. In the standard model, we have three neutrinos. Each of them is a partner-particle of a charged lepton. The charged leptons are the electron, muon, and tau. So we have an electron-neutrino, a muon-neutrino, and a tau-neutrino. Physicists call the types of neutrinos the neutrino “flavor”. The standard model neutrinos each have a flavor, have spin ½ and no electric charge.

So far, so boring. But neutrinos are decidedly weird for a number of reasons. First, they are the only particles that interact only with the weak nuclear force. All the other particles we know either interact with the electromagnetic force or the strong nuclear force or both. And the weak nuclear force is weak. Which is why neutrinos rarely interact with anything at all. They mostly just pass through matter without leaving a trace. This is why they are often called “ghostly”. While you’ve listened to this sentence about 10 to the fifteen neutrinos have passed through you.

This isn’t the only reason neutrinos are weird. What’s even weirder is that the three types of neutrino-flavors mix into each other. That means, if you start with, say, only electron-neutrinos, they’ll convert into muon-neutrinos as they travel. And then they’ll convert back into electron neutrinos. So, depending on what distance from a source you make a measurement, you’ll get more electron neutrinos or more muon neutrinos. Crazy! But it’s true. We have a lot of evidence that this actually happens and indeed a Nobel Prize was awarded for this in 2015.

Now, to be fair, neutrino-mixing in and by itself isn’t all that weird. Indeed, quarks also do this mixing, it’s just that they don’t mix as much. That neutrinos mix is weird because neutrinos can only mix if they have masses. But we don’t know how they get masses.

You see, the way that other elementary particles get masses is that they couple to the Higgs-boson. But the way this works is that we need a left-handed and a right-handed version of the particle, and the Higgs needs to couple to both of them together. That works for all particles except the neutrinos. No one has ever seen a right-handed neutrino; we only ever measure left-handed ones. So the neutrinos mix, which means they must have masses, but we don’t know how they get these masses.

There are two ways to fix this problem. Either the right-handed neutrinos exist but are very heavy, so we haven’t seen them yet because creating them would take a lot of energy. Or the neutrinos are different from all the other spin ½ particles in that their left- and right-handed versions are just the same. This is called a Majorana particle. But either way, something is missing from our understanding of neutrinos.

And the weirdest bit is the anomaly that I mentioned. As I said we have three flavors of neutrinos and these mix into each other as they travel. This has been confirmed by a large number of observations on neutrinos from different sources. There are natural sources like the sun, and neutrinos that are created in the upper atmosphere when cosmic rays hit. And then there are neutrinos from manmade sources, particle accelerators and nuclear power plants. In all of these cases, you know how many neutrinos are created of which type at what energy. And then after some distance you measure them and see what you get.

What physicists then do is try to find parameters for the neutrino mixing that fit all the data. This is called a global fit, and you can look up the current status online. The parameters you need to fit are the differences in the masses, which determine the wavelength of the mixing, and the mixing angles, which determine how much the neutrinos mix.
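To see how those two kinds of parameters enter, here’s a minimal sketch of the standard two-flavor oscillation probability, which is the simplified version of what goes into the full three-flavor global fit. The function name and the example numbers are my own illustration, not fit results:

```python
import math

def oscillation_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor appearance probability P(nu_a -> nu_b).

    sin2_2theta : mixing amplitude sin^2(2*theta), sets how much the flavors mix
    dm2_ev2     : mass-squared difference in eV^2, sets the oscillation wavelength
    L_km        : baseline (distance from source) in km
    E_GeV       : neutrino energy in GeV
    The factor 1.27 absorbs hbar, c and the unit conversions.
    """
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative numbers only: a 30 m baseline like LSND's, ~40 MeV neutrinos,
# and an eV-scale mass splitting (hypothetical values, not a fit).
p = oscillation_probability(sin2_2theta=0.003, dm2_ev2=1.0,
                            L_km=0.03, E_GeV=0.04)
```

The key point is that the mass-squared difference only ever appears inside the oscillating phase, which is why the fits constrain mass *differences* rather than the masses themselves.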

By 2005 or so, physicists had pretty much pinned down all the parameters. Except there was one experiment which didn’t make sense. That was the Liquid Scintillator Neutrino Detector, LSND for short, which ran from 1993 to 1998. The LSND data just wouldn’t fit together with all the other data. It’s normally just excluded from the global fit.

In this figure, you see the LSND results from back then. The red and green is what you expect. The dots with the crosses are the data. The blue is the fit to the data. This excess has a statistical significance of 3.8 σ. As a quick reminder, 1 σ is one standard deviation. The more σ away from the expectation the data is, the less likely the deviation is to have come about coincidentally. So, the more σ, the more impressive the anomaly. In particle physics, the discovery threshold is 5 σ. The 3.8 σ of the LSND anomaly wasn’t enough to get excited about, but too much to just ignore.
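For reference, the correspondence between σ and probability is just the tail of a Gaussian. A minimal sketch, using the one-sided convention common in particle physics (the function name is mine):

```python
import math

def one_sided_p_value(sigma):
    """Probability of a Gaussian fluctuation at least `sigma`
    standard deviations above the expectation (one-sided tail)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p_lsnd = one_sided_p_value(3.8)       # roughly 7 in 100,000
p_discovery = one_sided_p_value(5.0)  # roughly 3 in 10 million, the discovery threshold
```

So 3.8 σ means a chance fluctuation of this size is expected in fewer than one in ten thousand tries, which is why it couldn’t just be ignored, even though it fell short of the 5 σ threshold.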

15 years ago, I worked on neutrino mixing for a while, and my impression back then was that most physicists thought the LSND data was just wrong and wouldn’t be reproduced. That’s because this experiment differed from the others in several ways: it detected only anti-neutrinos created by a particle accelerator, and it had a very short baseline of only 30 meters, shorter than all the other experiments.

Still, a new experiment was commissioned to check this: the MiniBooNE experiment at Fermilab. That’s the Mini Booster Neutrino Experiment, and it’s been running since 2003. As you can tell, by then the trend of cooking up funky acronyms had taken hold in physics. MiniBooNE is basically a big tank full of mineral oil surrounded by photo-detectors, which you see in this photo. The tank waits for neutrinos from the nearby Booster accelerator, which you see in this photo.

For the first data analysis in 2007, MiniBooNE didn’t have a lot of data, and the result seemed to disagree with LSND. This was what everyone expected. Look at this headline from 2007, for example. But then in 2018, with more data, MiniBooNE confirmed the LSND result. Yes, you heard that right. They confirmed it with 4.7 σ, and the combined significance is 6 σ.

What does that mean? You can’t fit this observation by tweaking the other neutrino mixing parameters. There just aren’t enough parameters to tweak. The observation is simply incompatible with the standard model. So you have to introduce something new. Some ideas that physicists have put forward are symmetry violations, or new neutrino interactions that aren’t in the standard model. There is also, of course, still the possibility that physicists misunderstand something about the experiment itself, but given that this is an independent reproduction of an earlier experiment, I find this unlikely. The most popular idea, which is also the easiest, is what’s called “sterile neutrinos”.

A sterile neutrino is one that doesn’t have a lepton associated with it; it doesn’t have a flavor. So we wouldn’t have seen it produced in particle collisions. Sterile neutrinos can, however, still mix into the other neutrinos. Indeed, that would be the only way sterile neutrinos could interact with the standard model particles, and so the only way we can measure them. One sterile neutrino alone doesn’t explain the MiniBooNE/LSND data, though. You need at least two, or something else in addition. Interestingly enough, sterile neutrinos could also make up dark matter.

When will we find out? Indeed, seeing that the result is from 2018, why don’t we know already? Well, it’s because neutrinos interact very rarely. This means it takes a really long time to detect sufficiently many of them to come to any conclusions.

Just to give you an idea: the MiniBooNE experiment collected data from 2002 to 2017. During that time they saw an excess of about 500 events. 500 events in 15 years. So I think we’re onto something here. But glaciers now move faster than particle physics.

This isn’t a mystery that will resolve quickly but I’ll keep you up to date, so don’t forget to subscribe.