Football Physics and the Science of Deflate-gate

One of the cool things about working at Union is that the Communications office gets media requests looking for people to comment on current events, which sometimes get forwarded to me. Yesterday was one of those days, with a request for a scientist to comment on the bizarre sports scandal surrounding the deflated footballs used in the AFC Championship game this past weekend. Which led to me doing an experiment, and writing a short article for The Conversation:

News reports say that 11 of the 12 game balls used by the New England Patriots in their AFC championship game against the Indianapolis Colts were deflated, showing about 2 pounds per square inch (psi) less pressure than the 13 psi required by the rules, so it seems that the most bizarre sports scandal of recent memory is real. But there are still plenty of questions: why would a team deflate footballs? Could there be another explanation? And most importantly, what does physics tell us about all this?

I’ll probably extend this tomorrow with a post here containing more math and data and graphs, but I’m pretty happy with the way this came out, so go over there and read the whole thing.
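In the meantime, the essential physics is just the ideal gas law: at fixed volume, the absolute pressure of the air in the ball scales with its absolute temperature, so a ball inflated in a warm locker room will read low on a cold field. Here’s a minimal sketch of that calculation, with temperatures I’m choosing purely for illustration (the actual numbers and the experiment are in the longer write-up):

```python
# Ideal gas at (roughly) fixed volume: absolute pressure scales with absolute temperature.
ATMOSPHERE_PSI = 14.7   # gauges read pressure above atmospheric

def gauge_after_cooling(gauge_psi, warm_K, cold_K):
    """Gauge pressure of a ball inflated at warm_K after it equilibrates at cold_K."""
    absolute_psi = gauge_psi + ATMOSPHERE_PSI
    return absolute_psi * (cold_K / warm_K) - ATMOSPHERE_PSI

# Illustrative temperatures: ~72 F locker room, ~50 F field.
warm, cold = 295.4, 283.2
print(f"13.0 psi indoors -> {gauge_after_cooling(13.0, warm, cold):.1f} psi outdoors")
```

With those made-up temperatures, cooling alone accounts for a drop of about a psi, which is in the ballpark of, but not obviously all of, the roughly two psi in the news reports.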

Uncertain Dots 24

If you like arbitrary numerical signifiers, this is the point where we can start to talk about plural dozens of Uncertain Dots hangouts. As usual, Rhett and I chat about a wide range of stuff, including the way we always say we’re going to recruit a guest to join us, and then forget to do anything about that.

The video:

Other topics include how it’s important to rip up your class notes every so often, the pros and cons of lab handouts/lab manuals, and of course this week’s Nobel Prize in Physics for blue LEDs (shameless self-linkage).

I’m crushingly busy right now, largely because I had no idea as of 2pm yesterday what I was going to do in class at 2pm today, and need to get that sorted. And, of course, we’re running two faculty searches with a deadline of today, which brings a certain level of chaos to everything…

Nobel Prize for Blue LEDs

The 2014 Nobel Prize in Physics has been awarded to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for the development of blue LEDs. As always, this is kind of fascinating to watch evolve in the social media sphere, because as a genuinely unexpected big science story, journalists don’t have pre-written articles based on an early copy of an embargoed paper. Which means absolutely everybody starts out using almost the exact words of the official Nobel press release, because that fills space while they frantically research the subject. Later in the day, you’ll get some different framing, once writers get their heads around it, and organizations like AIP trot out experts for comment. A few organizations have the advantage of old stories, like this Physics World article by new laureate Nakamura or this old Scientific American piece about Nakamura’s work.

Anyway, since I’m an optics guy by training, this is on the border of stuff I know about, so I’ll offer a little off-the-cuff physics explanation, trying not to lean heavily on the Nobel Foundation’s materials.

OK, so, this is a completely revolutionary new kind of light, or something? Well, no. The devices actually produce perfectly ordinary blue light, in the 400-500 nanometer range– there’s nothing exotic about the light. Or even the process by which it’s produced, which is pretty much the same as for a red or green LED.

Let’s pretend I don’t know how those work. Because, you know, I don’t. OK, well, it all goes back to guys like Planck and Einstein and Bohr. Quantum physics tells us that light is a stream of particles, each having a discrete energy that depends on the frequency. Short-wavelength photons like blue light have a fairly high energy, by the standards of atomic and condensed matter physics.
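To put a number on “fairly high energy,” the photon energy is just Planck’s constant times the frequency, or hc divided by the wavelength. A quick sketch, using representative wavelengths for red, green, and blue:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def photon_energy_eV(wavelength_nm):
    """Energy of a single photon, E = h*c/lambda, in electron volts."""
    return h * c / (wavelength_nm * 1e-9) / eV

for nm in (650, 520, 450):   # red, green, blue
    print(f"{nm} nm photon: {photon_energy_eV(nm):.2f} eV")
```

A couple of electron volts is right in the range of typical energy spacings in atoms and semiconductors, which is why visible light and electronic materials go together so naturally.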

The emission of light generally involves an electron jumping between two allowed quantum states, with the energy difference producing a photon of the appropriate color.

Oh, so these guys just figured out how to collect a lot of atoms that emit blue light? Well, no, we already knew how to do that. It’s part of what’s going on in a fluorescent light bulb– a vapor of atoms emits light at a few different wavelengths that combine to look mostly white to our eyes. That’s not a very efficient process though, and requires a lot of vapor to make a useful amount of light. These guys used a solid-state system to produce lots of light from a tiny package. Very loosely speaking, if you’re working with a solid rather than a gas, you have thousands of times the number of electrons packed into a given volume that can put out light.

So, they just made a solid lump of atoms that emit blue light? If only it were that easy. See, when you bring lots of atoms together, the behavior of the electrons changes pretty dramatically. A single electron doesn’t have to be bound to a single atom any more, but can spread through the entire solid lump. Instead of the nice narrow states you get with atoms, you get broad “bands” of energy, with so many closely spaced energy levels that physicists stop trying to count them, and just treat it as a continuous blob. An electron could have any energy within that range.

But then, how do you get light out? Well, the bands are continuous, to a point. Within a band, an electron can easily move to a slightly higher or lower energy without emitting light, but there are “gaps” between the bands of allowed energies, a range of energies that are forbidden.

Why is that? Because of the wave nature of matter. Very loosely speaking, you can understand the concept by thinking about electrons passing through a regular crystal as a set of waves passing over a regular array of bumps. A little bit of each wave gets reflected from each bump, which usually isn’t a big deal, but for some particular wavelengths, the reflected waves and the incoming electron waves interfere with each other in a way that cancels out the waves. You can’t have an electron moving through the material with that wavelength, which corresponds to a particular energy. Thus, a gap develops between otherwise continuous bands.

(This is very much a cartoon picture of what’s going on, not a rigorous definition. I think it gets the right idea across, though, without going too much into the hairy math.)
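If you want to see roughly where those gaps land, the cartoon condition is just Bragg reflection: at normal incidence, the waves reflected from successive bumps pile up when twice the spacing equals a whole number of wavelengths. A minimal sketch, using a made-up lattice spacing of 0.3 nm (a typical atomic scale, not any particular material):

```python
import numpy as np

hbar = 1.055e-34   # J*s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # joules per electron volt

a = 0.3e-9         # lattice spacing in meters, chosen purely for illustration

# Reflections off successive bumps add up when 2*a = n*lambda, i.e. at
# wavevectors k = n*pi/a; gaps open near the corresponding free-electron energies.
for n in (1, 2, 3):
    k = n * np.pi / a
    energy_eV = (hbar * k) ** 2 / (2 * m_e) / eV
    print(f"n={n}: lambda = {2 * a / n * 1e9:.2f} nm, gap opens near {energy_eV:.1f} eV")
```

The few-electron-volt scale that falls out of this is no accident: it’s the same general scale as the band gaps in real semiconductors.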

So, an electron jumping across the gap between bands makes light? Exactly. If you have an electron in a high-energy band, and an open space in a low-energy band, the electron can drop down across the gap, and in the process emit some light. The wavelength of the emitted light is determined by the band gap.

And doesn’t the gap have to do with the wavelengths the atoms would emit? Not as cleanly as you would like, no. The band gap depends on how the atoms in the material are organized into a crystal, which is a very complicated subject that I didn’t do well on in graduate school. But it’s not as simple as finding some atoms with a transition at a convenient wavelength and making them into a lump.

In order to get something that produces light of a given wavelength efficiently, you basically want to find a semiconductor material with the appropriate band gap, and make it into a diode. Which you do by butting two slightly different types of material up against each other (generally the same semiconducting material, with fraction-of-a-percent admixtures of other atoms “doping” the semiconductor). At the interface region, electrons from one side drop across the band gap, filling holes on the other side, and this process can be controlled very precisely.

So, that’s how a red LED works? And these blue ones use some different method? No, the blue ones work the same way, just with different materials. Long-wavelength LEDs tend to use stuff like gallium arsenide doped with aluminum, while short-wavelength ones use gallium nitride doped with indium. The basic idea is the same, though: electrons come in one side, combine with holes at the boundary between materials, and emit light with a wavelength determined by the band gap.
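The band-gap-to-wavelength conversion is a one-liner: the emitted wavelength is roughly hc divided by the gap energy. A quick sketch using approximate room-temperature band gaps (standard textbook values, give or take):

```python
HC_EV_NM = 1240.0   # h*c in eV*nm, handy for quick conversions

band_gaps_eV = {
    "GaAs": 1.42,   # emits in the infrared
    "GaN": 3.4,     # emits in the near-UV
}

for material, gap in band_gaps_eV.items():
    print(f"{material}: gap of {gap} eV -> emission near {HC_EV_NM / gap:.0f} nm")
```

Pure gallium nitride actually lands in the near-UV; mixing in indium shrinks the gap and pulls the emission down to the blue wavelengths around 450 nm that ended up in all those gadgets.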

That seems like a pretty small change, dude. How is that worth a Nobel? Well, because it’s really frickin’ hard to do. Gallium nitride is a more difficult material to work with than gallium arsenide (despite not being quite as toxic). Making LEDs from gallium arsenide requires relatively small tweaks from the techniques used to make silicon computer chips, but making gallium nitride at the required scale and purity demanded a whole host of new techniques. These guys beat on the problem for a long time, developing entirely new methods of depositing thin layers of gallium nitride, controlling the doping to get the necessary properties to make a diode, and producing samples with few enough defects to have a reasonably long working life.

It’s not a paradigm shift in terms of the basic physics, but it’s a ton of hard work and new technological development, and richly deserves the Nobel.

OK, I guess. But why is this so much better than just getting a gas of blue-light-emitting atoms? Well, as I said above, the vastly higher density gets you an increase in the efficiency of the light emission. And they’re just more compact– to make a light bright enough to be useful out of an atomic vapor, you need a lot of it, which is why traditional fluorescent bulbs tend to be long tubes, and CFL bulbs are those funny spiraling coils. The light-emitting region of an LED is something like a hundred microns across, about the width of a hair. You can easily build those onto tiny chips that go into laser pointers, laptop screens, and cell phones.

Which is why blue LEDs went from impossible-to-make in the mid-1990s to being absolutely everywhere by the early 2000s– the first cell phone I got in 2002 was full of the things, and you could just feel the joy of the engineers who got to stick those in.

You know, dude, you’re falling into the same framing as the journalists you were disparaging at the start of this post. Don’t you have anything other than light bulbs and display screens to offer? Well, as an AMO physics guy, the biggest benefit of this is the creation of blue/violet diode lasers (which are just a small step up from blue LEDs). Blue light used to be a gigantic pain in the ass to generate in the lab, because you pretty much had to start with infrared light and then use a non-linear material to double the frequency up into the blue range. That’s hard to do efficiently enough to get a lot of laser power.

These days, if I want blue light, say, to make a parametric down-conversion light source for producing photon pairs to use in quantum optics experiments (as my current thesis student is doing), I call up ThorLabs, and order a blue laser diode. I pop it into a diode mount, and boom! tens of milliwatts of blue light, a perfect source for downconversion experiments. It’s a game-changer, bringing what used to be really difficult experiments well within the reach of undergraduate teaching labs.
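For the curious, the down-conversion arithmetic is just energy conservation: one pump photon splits into two photons whose energies add up to the pump’s. A little sketch, using 405 nm as the pump because that’s a common blue-violet diode wavelength (treat the specific numbers as illustrative, not a description of my student’s actual setup):

```python
def idler_wavelength(pump_nm, signal_nm):
    """Energy conservation: 1/lambda_pump = 1/lambda_signal + 1/lambda_idler."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

pump = 405.0   # nm, a common blue-violet diode wavelength
for signal in (810.0, 780.0):
    print(f"{pump:.0f} nm pump: {signal:.0f} nm signal pairs with "
          f"{idler_wavelength(pump, signal):.0f} nm idler")
```

The degenerate case, where both photons come out at twice the pump wavelength, is the one a lot of teaching-lab experiments go after.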

Not a planet-saving innovation, I’ll grant, but it’s a major development in the technology AMO/ quantum optics people have available to investigate the universe. So I’d be all for this prize even without the whole energy-saving, portable-display-enabling thing.

Again, this isn’t a revolutionary breakthrough in terms of the fundamental physics involved– it’s just a huge amount of hard work on the basic materials science and chemistry of semiconductor fabrication. But it’s a dramatic breakthrough in terms of practical technology, and entirely deserving of a Nobel.

Finding Extrasolar Planets with Lasers

On Twitter Sunday morning, the National Society of Black Physicists account retweeted this:

I recognized the title as a likely reference to the use of optical frequency combs as calibration sources for spectrometry, which is awesome stuff. Unfortunately, the story at that link is less awesome than awful. It goes on at some length about the astronomy, then dispenses with the physics in two short paragraphs of joking references to scare-quoted jargon from the AMO side. The end result is less a pointer to fascinating research than an instructive example of what not to do if you’re hired to write copy about science outside your area.

The worst part of this is that now I have to take the time to do a better job of it myself. Which I can ill afford, but reading a description of the comb calibration process as “magical gizmo fun” leaves such a bad taste in my mouth that I can’t let it go.

I dunno, at least this gives us a chance to do some physics. I mean, we hardly ever talk any more. I miss you, man… OK, that’s a little weird. Anyway, let’s get on with this.

Fine. All right, what’s the issue here? Well, the astronomy part of that story is, from what I know, reasonably good. It’s about the exoplanet-hunting group at Yale, led by Debra Fischer, and their search for ever more Earth-like planets. Their particular technique is the redshift method, which looks at small changes in the wavelengths of light emitted by distant stars due to the gravitational tug of an orbiting planet on the star. This was the first method used to find extrasolar planets, and it’s been used to locate and characterize dozens of planets.

So, when do they zap these planets with lasers? Well, since they’re many light-years away, never. The lasers would be used only on Earth, at the telescopes being used to hunt for the planets in the first place.

See, the shift in the spectral lines is tiny, even for a big planet– at a long-ago talk I saw about this (“long-ago” here meaning “1998”), they were talking about shifts of a quarter of a pixel on the CCD camera they were using to measure the spectrum. The sensitivity has gotten better, of course, but this is still a very demanding process. And what’s more, it demands long-term stability if you want to see planets that are Earth-like in both mass and orbit– you need to watch a star over a period of years, and know that any changes you see are due to its motion, and not drifts due to Earthbound effects.
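To get a feel for just how tiny, the non-relativistic Doppler shift is a fractional wavelength change of v/c, where v is the star’s reflex velocity. A rough sketch with ballpark reflex velocities for a Sun-like star (approximate values, nothing specific to Fischer’s targets):

```python
c = 2.998e8   # speed of light, m/s

reflex_velocity = {
    "Jupiter-like planet": 12.5,   # m/s, roughly
    "Earth-like planet": 0.09,     # m/s, roughly
}

wavelength_nm = 500.0   # a line in the middle of the visible spectrum

for label, v in reflex_velocity.items():
    fractional = v / c   # non-relativistic Doppler shift
    print(f"{label}: delta-lambda/lambda ~ {fractional:.1e}, "
          f"shift ~ {wavelength_nm * fractional:.1e} nm at {wavelength_nm:.0f} nm")
```

A shift of roughly a ten-millionth of a nanometer, tracked over years, is why the calibration source matters so much.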

Can’t you just compare the lines from the star to the same element here on Earth? Yes and no. In principle, that’s what they do, but there are a lot of complications. Among other things, the atoms emitting light in the stars are, well, in stars, which means they’re in a very different environment than we can easily produce here on Earth. There are a lot of other effects that can shift the lines by a little bit, and you need to worry about that stuff.

There’s also the fact that you want to know that your spectrometer is behaving nicely over the full range– that it’s not responding in a different way in different parts of the spectrum. So you need some kind of source with lines at lots of different wavelengths, as a check on that. This is traditionally done with lamps filled with a mix of gases– thorium and argon is a common one.

And what’s the problem? Well, there’s a bit of black art to the making and maintaining of these– the people who do it are amazingly good, but when you get to looking for the tiny shifts the exoplanet hunters want, and tracking them over several years, you worry that the calibration lamp will drift due to changes in the pressure, temperature, other gas leaking in, etc.

So, beaming a laser into the telescope works better? Because, like, you can watch the frequency be stable for years? Yes and no. Lasers can do better, not because you watch a single frequency for years, but because you can make a laser that produces a wide range of lines at frequencies you can measure absolutely.

Wait a minute. I thought lasers were a single color of light? Continuous lasers are pretty much monochromatic, it’s true, but a pulsed laser is actually a collection of a large number of regularly spaced frequencies, and the shorter the pulse, the wider the range. It uses the same adding-lots-of-waves physics as the Heisenberg Uncertainty Principle video I did for TED-Ed.
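The tradeoff between pulse length and frequency spread is quantitative: for clean, transform-limited Gaussian pulses, the product of the duration and the bandwidth is roughly constant, around 0.44. A quick sketch of what that implies:

```python
# Transform-limited Gaussian pulses obey delta_t * delta_nu ~ 0.44
# (durations and bandwidths measured as full widths at half maximum).
TIME_BANDWIDTH = 0.441

for dt in (1e-9, 1e-12, 10e-15, 2e-15):   # 1 ns, 1 ps, 10 fs, 2 fs
    print(f"{dt:.0e} s pulse -> bandwidth of roughly {TIME_BANDWIDTH / dt:.1e} Hz")

# For comparison, the visible spectrum (400-700 nm) spans about 3.2e14 Hz.
```

Get the pulses down to a couple of femtoseconds and the bandwidth is comparable to the entire visible spectrum, which is where this story is headed.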

But why are there lots of waves? Well, a laser works by sticking something that amplifies light between two mirrors facing each other. The light bounces back and forth, getting amplified a little bit on each pass, and some of the light leaks out on each bounce, because it’s impossible to make a perfect mirror.

This bouncing back and forth, with a bit leaking out on each pass, is what determines the frequencies you get from a real laser. If your cavity is about half a meter long, the time to go from one mirror to the other and back will be around three nanoseconds, so any new light produced at a given instant is added to light that was created three nanoseconds earlier, and has been reflected. And also light from six nanoseconds earlier that’s made two passes, and nine nanoseconds earlier, etc.

OK, but how does that create multiple frequencies? It doesn’t create them, it selects which frequencies are allowed. If the frequency of the light pulse is just right, when new light waves and the reflected light waves come together, they interfere constructively– the peaks from one align with the peaks from the other, and the two waves reinforce each other. If the frequency is a little bit off, though, the waves interfere destructively– the peaks of one fill in the valleys of the other, and cancel out.

So, a laser only works at a single special frequency, like I said. No, a laser can work at any of an infinite number of special frequencies, determined by the length of the cavity and the speed of light. These form a regularly spaced “comb” of allowed laser modes: “comb” because the usual representation of the spectrum (like the figure above) is an array of spikes indicating high intensity at some frequencies and no light in between, and “mode” is a very flexible jargon term that here just means “light of a particular frequency that will interfere constructively in a given laser cavity.”

If you use the right amplifier material inside your laser, it will amplify many of these modes, producing light at a wide range of different frequencies. And when you add all those frequencies together, it produces a regular train of short pulses. The time between pulses is equal to the round-trip time for light in the cavity– effectively, each pulse constructively interferes with the reflected light of previous pulses. This is called a “mode-locked” laser, because the rate at which the pulses occur and the spacing of the modes are both fixed by the length of the cavity.

The length of the pulses depends on the properties of the amplifying medium: the wider the range of frequencies you amplify, the shorter the pulse, and vice versa. It turns out that if you make a laser whose pulses are only a femtosecond or so in length (that is, 0.000000000000001 s), the range of frequencies spans the entire visible range of the spectrum. You can think of the pulse as the sum of millions of little lasers with slightly different frequencies that are all locked together.
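That last sentence is easy to check numerically: add up a bunch of equally spaced frequencies and you get a train of short pulses, one per cavity round trip. A minimal sketch with 51 modes spaced by 300 MHz, roughly the spacing for the half-meter cavity above (the optical carrier is left out, since only the relative frequencies matter for the pulse structure):

```python
import numpy as np

f_rep = 300e6      # mode spacing, about c / (2 * 0.5 m)
n_modes = 51       # a modest number of modes, just for illustration
t = np.linspace(0, 10e-9, 20001)   # a 10 ns window

# Sum of equally spaced modes, all starting in phase ("locked").
field = sum(np.cos(2 * np.pi * n * f_rep * t) for n in range(n_modes))
intensity = field ** 2

# The bright instants should repeat every 1 / f_rep ~ 3.3 ns.
bright = t[intensity > 0.9 * intensity.max()]
print("expected pulse period:", 1 / f_rep, "s")
print("bright instants (ns):", np.unique(np.round(bright * 1e9, 1)))
```

The pulses come out spaced by 1/(300 MHz), about 3.3 ns, exactly the round-trip time, and adding more modes just makes each pulse shorter.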

Oh, and that’s what you need for a spectroscopy calibration! Right. The cool thing about these comb sources is that you can lock their frequency in an absolute sense– you compare one of the modes of the comb to the light absorbed or emitted by a particular atom, and adjust your cavity length as needed to keep that one mode at the same frequency as the atoms. This gives you a comb of modes whose frequencies are known as well as the frequency of your reference atoms; if you’re really clever, you use something like an atomic clock as your reference and then you know the frequency of any given mode to ridiculous precision.

This is where you cite some old blog posts about clock stuff, right? Right. Such as this cool measurement of relativistic effects with a pair of aluminum ion clocks, or this demonstration of time transfer good to eighteen or nineteen decimal places.

They don’t demand quite this level of precision for exoplanet-hunting, but even a less technically demanding comb system can give you a broad range of regularly spaced lines that can be tied to an atomic reference. This is exactly what you want to calibrate Doppler shift measurements over a wide range of frequencies. And the precision ultimately traces to the stability of atomic clocks, which are what we use to define time, and thus guaranteed to be stable unless the fine structure constant starts doing wacky things.

So, that’s what they brushed off as “magical gizmo fun”? Not quite. The specific “magical gizmo” reference was to a second step of the process, that uses the same physics. You see, a mode-locked laser made at a reasonable size will produce too many modes for a typical telescope spectrometer to resolve cleanly, so they need to get rid of some of them. They do this by using a Fabry-Perot cavity, which is just a pair of mirrors facing each other with nothing in between them– a laser minus the amplification medium.

The same bouncing-pulse physics applies to the empty cavity– only very specific frequencies will interfere constructively, and make it through to the other side of the cavity. So they set up a Fabry-Perot that only transmits light at special frequencies spaced by, say, a hundred times the spacing between laser modes. This gets you every hundredth laser mode, which is a spacing that works better in astronomical instruments.
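You can see the mode-thinning with the standard formula for the transmission of an ideal two-mirror cavity (the Airy function). A sketch assuming the 300 MHz comb spacing from before, an etalon whose free spectral range is 100 times that, a made-up mirror reflectivity, and an etalon tuned so one of its resonances sits right on a comb mode:

```python
import numpy as np

f_rep = 300e6             # comb mode spacing, as above
fsr = 100 * f_rep         # etalon free spectral range: keep every 100th mode
R = 0.99                  # mirror reflectivity, chosen for illustration
F = 4 * R / (1 - R) ** 2  # "coefficient of finesse"

def transmission(f):
    """Airy transmission of an ideal lossless two-mirror cavity."""
    return 1.0 / (1.0 + F * np.sin(np.pi * f / fsr) ** 2)

modes = np.arange(501) * f_rep   # comb modes, measured from the aligned resonance
T = transmission(modes)
print("modes with transmission above 50%:", np.where(T > 0.5)[0])
```

Only the modes sitting on an etalon resonance get through cleanly; the other 99 out of every 100 are strongly suppressed.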

That’s a gizmo, all right, but it doesn’t sound all that magical. Or fun, for that matter. “Fun” is a matter of personal taste, but science is, after all, magic without lies. The word “magical” should never be used to gloss over actual science content. Not even ironically. That’s what annoyed me enough about this story to write all this up.

To be fair, it did take you about 1500 words to explain all this. You can hardly expect them to devote that much space to laser physics. No, but I don’t think it’s too much to expect some explanation, rather than just dumb jokes making fun of the jargon terms. Here’s a quick attempt at something better, in approximately the same amount of space:

Fischer’s group plans to calibrate their system using an optical frequency comb, a special laser that produces ultra-fast pulses of light only a few femtoseconds in duration, containing many different frequencies. The “modes” of this laser are evenly spaced across a wide range of frequencies in the red region of the visible spectrum where the EXPRES spectrometer operates. Setting the frequency of one of these modes to the natural absorption of a particular atomic transition fixes the frequency of all the modes of the laser.

The original laser actually provides too much of a good thing– it has so many modes that they run together on the spectrometer. They fix this problem by using a Fabry-Perot etalon, a device consisting of a pair of mirrors facing each other. The light waves bouncing around between the mirrors interfere with each other, which filters out all but every Nth mode of the original laser, giving a comb with a wider spacing between modes. The end result is a regularly spaced set of lines across the XX nanometer range of interest, each line with stability comparable to an atomic clock. This is an ideal calibration source for the long-term measurements needed to pick out the tiny wobble of an Earth-like planet in an Earth-like orbit over many years of observations.

That’s not perfect, I know, but it took me half an hour to write, and it’s not insultingly dumb. With some revision (and some data to fill in the experimental parameters that the original article doesn’t see fit to give us), it could be compact but also informative.

Yeah, I see what you mean. And keep in mind, I banged this out on the basis of background knowledge only, having no contact with the actual group doing the research. You would think that a writer with access to the research group in question– and he definitely had that, because there are quotes from them earlier in the article– would be able to do better. This ought to be better, and the fact that it isn’t reflects very poorly on the writer, and on the Planetary Society for not demanding better.

I find this particularly annoying because it has this “all these big words! Optical physics is Hard!” vibe to it. It would be easy enough to do the same thing with the astronomy side, cracking wise about stellar classifications and the like, but they would never consider doing that, because that’s their business. When it comes to physics, though, they have no qualms about dropping into Barbie mode, and I find that really annoying.

Well, I’m sorry you’re annoyed, but it was nice talking physics again. Let’s not wait so long next time, ok? I’ll try, but I don’t have as much control over my schedule as I would like. I sincerely hope, though, that our next conversation originates in something more positive than flippant and lazy science writing.

——

A couple of other links: I’ve been following this stuff since 2007, because I’m really old. I also wrote about frequency combs in the Laser Smackdown in 2010.

If you want more technical and historical detail, the 2005 Nobel went to Haensch and Hall and their Nobel lectures are free to read at those links.

Women of the Arxiv

Over at FiveThirtyEight, they have a number-crunching analysis of the number of papers (co)authored by women in the arxiv preprint server, including a breakdown of first-author and last-author papers by women, which are perhaps better indicators of prestige. The key time series graph is here:

Fraction of women authors on the arxiv preprint server over time, from FiveThirtyEight.

This shows a steady increase (save for a brief drop in the first couple of years, which probably ought to be discounted as the arxiv was just getting started) from a bit over 5% women in the early 90’s to a bit over 15% now. The more detailed discussion in the article is worth reading, and mostly stands on its own.

One thing, though, that I wish they had included was a reference to this graph from the American Institute of Physics showing basically the same trend:

Fraction of Ph.D.’s in physics awarded to women, as a function of time. From the AIP Statistical Research Center.

That’s showing the fraction of physics Ph.D.’s earned by women over the years, and rises from a bit over 10% in the early 90’s to around 20% now. The data on women in faculty positions is less complete, but shows a similar trend.

The FiveThirtyEight piece, by Emma Pierson, covers a lot of issues, but I wish they’d dealt a bit more with this change over time. Because in some ways, that tells you a lot about the underlying dynamics– if the number of papers featuring women as authors simply tracks the number of women in physics in general, that’s one thing. If it rises more slowly than you would expect from the number of women in physics, that would be saying something else, and much less positive. Absent that, it’s hard to know what to really think about the trend Pierson reports.

Of course, it’s a difficult matter to tease this out, and there’s also an issue of subfield distribution– the arxiv started out as exclusively high-energy theory, and has expanded over time to cover a lot more of physics and math, but it’s by no means complete– when I spot an interesting paper in AMO physics, there’s only about a 50% chance that I’ll be able to find an arxiv copy. That’s going to affect the pool from which they’re drawing, which affects what you would expect to see in terms of authorship.

But this kind of basic analysis is a good starting point, and it’s always nice to have more data in the discussion.

Impossible Thruster Probably Impossible

I’ve gotten a few queries about this “Impossible space drive” thing that has space enthusiasts all a-twitter. This supposedly generates thrust through the interaction of an RF cavity with a “quantum vacuum virtual plasma,” which is certainly a collection of four words that turn up in physics papers. An experiment at a NASA lab has apparently tested a couple of these gadgets, and claimed to see thrust being produced. Which has a lot of people booking tickets on the Mars mission that this supposedly enables.

Most physicists I know have reacted to this with some linear combination of “heavy sigh” and “eye roll.” The proposed mechanism doesn’t really make any sense, and more importantly, even in the free abstract for their conference talk they state that both the configuration of the device that was supposed to produce thrust and the “null” version that was not supposed to produce thrust gave basically the same result. As Tom notes, this is mind-boggling, and John Baez goes into more detail, including a link to the paper.

The paper itself is kind of a strange read, like it was put together by a committee containing a mix of responsible, hard-headed engineers and wild-eyed enthusiasts. The experimental procedure and results sections are very sober and pretty clear that this is not a meaningful test of anything, but then there’s a whole section planning missions to Mars with scaled-up versions of the technology. Which sort of suggests that this was a test run by some career engineers at the insistence of an enthusiast who’s highly-placed enough to make them do tests and write up stuff that they find kind of dubious. But that’s just speculation on my part.

The only thing I have to add to this discussion is a quick mention of why this is likely to have gone wrong. The core technique described in the report is a “torsion pendulum.” This is a technique for measuring tiny forces that dates back to the days of the singularly odd Henry Cavendish, and is still one of the principal techniques for measuring the force of gravity. The basic idea is to hang a barbell-like arm from a thin wire, with your test system balanced at one end, then do something that makes the barbell twist. The amount of twist in the wire will then tell you how much force was produced.
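The conversion from twist to force is simple in principle: the fiber’s restoring torque is its torsion constant times the twist angle, and dividing by the lever arm gives the force. A sketch with purely illustrative numbers (none of them taken from the NASA report):

```python
# All of these numbers are made up for illustration -- none are from the report.
kappa = 0.01   # torsion constant of the fiber, N*m per radian
arm = 0.2      # distance from the rotation axis to the test device, m
theta = 1e-3   # measured twist angle, radians

torque = kappa * theta   # the fiber's restoring torque balances the applied torque
force = torque / arm     # force applied at the end of the arm
print(f"a twist of {theta:.0e} rad implies a force of about {force:.1e} N")
```

The hard part isn’t the arithmetic, it’s knowing the torsion constant well and being sure the twist you measure comes from the thing you care about and not one of the confounding effects below.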

The basic technique has a long and distinguished history. It’s also notoriously finicky, which is why there’s still a lot of uncertainty and debate about gravity measurements. From stuff quoted by Baez, this seems to be the first use of the NASA lab’s torsion pendulum apparatus, which is not terribly promising. There are zillions of ways this could go wrong, and you’re not going to account for all of them the first time out of the gate.

To give you an idea of what’s involved, one of the very best groups in the world at doing this sort of measurement is the “Eöt-Wash Group” at the University of Washington, whose short-range tests of Newton’s inverse-square law provide the extremely shiny photograph in the “featured image” up at the top of this post. I’ve seen numerous talks by these guys, who are awesome, and in many of them they show a photograph of the lab, which contains a big shiny vacuum chamber and a set of magnetic shields at one side of the room, and a knee-high stack of lead bricks right in the middle of the floor. That’s not because some grad student got tired before getting all the lead back to the storage room– the pile is placed very deliberately to counter the gravitational attraction of a large hill behind the physics building there.

That’s the level of perturbation you need to account for when you’re doing these sorts of experiments right. Now, the Eöt-Wash crew are looking for much smaller forces than the rocket scientists in Houston, and Houston is pretty flat, anyway, so they may not need to worry about carefully placing lead bricks. But there are dozens of tiny perturbations that are really hard to sort out– the report specifically mentions vibrations caused by waves in the ocean a few miles away, and if they’re seeing that, they’re going to be bothered by a lot of other stuff. This isn’t something you’re going to sort out in the roughly one week of testing that they actually did.

So, yeah, don’t go booking yourself a ticket to Mars because of this story. It’s almost certainly an experimental error of some sort, most likely a thermal air current due to uneven heating. Which is a failure mode with a long and distinguished history– Cavendish himself noted in 1798 that an experimenter standing near the case could drive air currents that would deflect the pendulum, so he put the entire apparatus in a shed, and took his readings with a telescope. And in his final set of data, he found that he needed to account for the difference in heating and cooling rates between his metal test masses and the wood and glass of the case.

The good news is that there’s enough sober and practical content in the report to suggest that somebody there will eventually do this right. At which time the effect will probably disappear– it’s already a few orders of magnitude smaller than an earlier claim, according to the space.com story linked above. Removing air currents as an issue (which they can do, but didn’t because they were using cheap RF amplifiers that couldn’t handle vacuum) will probably wipe it out completely.

So, don’t go booking tickets to Mars. But do go look at the Eöt-Wash experiment, because they’re awesome, and check out the Physics Today story on measurements of “big G”, because it’s fascinating.

(Also, my forthcoming book has a big section on Cavendish. But that’s not out until December…)

Two Cultures of Incompressibility

Also coming to my attention during the weekend blog shutdown was this Princeton Alumni Weekly piece on the rhetoric of crisis in the humanities. Like several other authors before him, Gideon Rosen points out that there’s little numerical evidence of a real “crisis,” and that most of the cries of alarm you hear from academics these days have near-perfect matches in prior generations. The humanities have always been in crisis.

This wouldn’t be worth mentioning, but Rosen goes on to offer an attempt at an explanation of why the sense of crisis is so palpable within the humanities, an explanation based on a comparison to the sciences. Which basically serves to demonstrate that he doesn’t spend much time with scientists.

The argument is basically that scholars in the humanities have a sense that they’re in crisis because their work doesn’t get the wide notice that work in science does:

Any educated person can rattle off a list of the great achievements of science and technology in the past 50 years: the Big Bang, cloning, the Internet, etc. People who have no idea what the Higgs boson is or why it matters still can tell you that it was discovered in July 2013 by a heroic team of scientists and that the discovery reveals something deep about the universe. What does the average educated American know about the great scholarly achievements in the humanities in the past half-century? Nothing. And this is no accident.

That’s fine, as far as it goes. Science has produced some notable triumphs in the last half-century or so, and those are widely known of if not widely understood. The problem comes when he continues on with his argument:

Any humanist can list dozens of groundbreaking books, and if you have the time and patience, he or she can begin to tell you why they matter. But there are profound limits on what you can learn about the humanities secondhand. Most discoveries in the humanities are about cultural objects — books, paintings, etc. More specifically, they are discoveries about the meanings of these objects, their connections to one another, and the highly specific ways in which they are valuable. And the trouble is that this sort of discovery simply cannot be conveyed in a convincing way to someone who has never wrestled with the things themselves. To choose just one example: In 2010 Princeton professor Leonard Barkan, one of the most distinguished humanists of our time, published a beautiful book about Michelangelo’s drawings. The book calls attention to the striking fact that nearly a third of these drawings contain scrawled text: from finished poems and strange fragments to shopping lists and notes to self. Barkan’s book shows beyond doubt that our experience of the drawings is deeper when the drawings and texts are read together. But if you don’t have the drawings (or the extraordinary reproductions in Barkan’s book) in front of you, what can this mean to you? A capsule summary of Barkan’s “discovery” — admittedly an odd word in this context — is like a verbal description of a food one has never tasted. The description may persuade you that there is something there worth tasting, but in the nature of the case, it cannot begin to convey the taste itself.

He then offers a second example, from his own field of philosophy, and concludes that “Like discoveries elsewhere in the humanities, discoveries in philosophy are incompressible: Their interest can only be conveyed at length by taking one’s interlocutor through the argument.”

There are two big problems with this. The first is a sneaky rhetorical jump when he moves from comparing the “great achievements of science and technology in the past 50 years” to lamenting the lack of interest in a specific Princeton professor’s art history research. Those things really aren’t comparable, unless you want to say that a recent book about Michelangelo’s shopping lists is an intellectual triumph on the same level as the Big Bang. An actual apples-to-apples comparison would be between Barkan’s neglected book and, say, Princeton physics professor Mike Romalis’s experiments on fundamental symmetries. Romalis’s work is awesome, and I suspect I esteem his group’s publications as much as Rosen does Barkan’s art book. I doubt very much, though, that you would have any more luck finding people on the street who know about Lorentz violation tests at the South Pole and why they matter than you would finding people who know about Michelangelo’s drawings and why they’re interesting.

The bigger problem, though, is with the whole notion of research as “incompressible.” I almost choked on my tea when Rosen got to the part where he talks about how to address the problem:

Problems like this do not have quick solutions. Still, some of the main steps are clear enough. First, since the value of the humanities will be always lost on people who never have worked through a poem with someone who knows what he or she is talking about, humanists have a special obligation to see to it that teachers are well trained and that school curricula incorporate serious study of the humanities. (The new “Common Core” standards are a disappointment in this regard.) Second, we must face the fact that while scientists have armies of journalists eager to popularize their work, we humanists will get nowhere unless we write books that non-experts can read with pleasure.

Ah, yes, those armies of journalists. Who are so well paid, well publicized, and well thought of in the scientific community…

In fact, it’s not at all difficult to find scientists making almost exactly the same complaints about “incompressible” research. The most common complaint from scientists about science journalism is that it’s just a bunch of dumbing-down and over-hyping of results that can only truly be appreciated if you grind through all the details. The intersection of Rosen’s piece and the whole BICEP2 business I ranted about in the previous post— which is in part an incompressibility argument– made for one of those great “Information Supercollider” moments you get in blogdom.

When scientists complain that their research is impossible to summarize and make interesting without losing its precious bodily fluids, er, essential core, they’re wrong. It’s not easy to do, and it’s often particularly difficult for those who are closest to the research, but at the heart of every scientific experiment, there’s a simple and interesting idea.

I’m fairly certain that the same holds for scholarship in arts and literature, as well. It may not be easy, but I have a hard time believing that it’s impossible to distill humanities research down to a short, simple description. Mostly because it’s regularly done– Rosen makes a pretty good stab at it with his description of Barkan’s book (which sounds interesting; not interesting enough for me to actually seek it out and read it, but interesting in an “Oh, that’s cool” sort of way that would work in a cocktail party/ elevator pitch context). And great works of philosophy are regularly boiled down to a few pages or even a few lines, mostly in the works of later philosophers. Among the handful of non-science books I kept from my college days is a survey of ethical philosophy that was a supplemental text for a course on ethics in literature, which gives short summaries of a wide range of big-name philosophers and people working in the same general vein. It doesn’t cover all the details of, say, Kant’s various intricate arguments, but there’s enough there to get the right basic idea, in a manner that makes it seem interesting to a casual reader.

Is that kind of treatment going to convey to the average reader the full majesty of humanities scholarship? No, but remember the standard set out at the start of this: “People who have no idea what the Higgs boson is or why it matters still can tell you that it was discovered in July 2013 by a heroic team of scientists and that the discovery reveals something deep about the universe.” If that level of incomplete understanding is good enough to point at as something science has that the humanities lack, then it ought to be enough on the other side of the Two Cultures gap, as well.

Now, of course, there’s a core point to Rosen’s argument that I do agree with, namely that scholars of all sorts ought to do a better job of communicating their results to the general public. This is largely a self-inflicted wound– the cultures of incompressibility and incomprehensibility have come about because academics both inside and outside the sciences have chosen to reward narrow technical publications over broader general-interest ones. What matters for promotion and professional status is publication aimed exclusively at other scholars– a scholarly monograph that maybe a hundred other academics will read will do more to advance your career than a general-audience book that reaches thousands.

That’s a choice that we as academics– both in science and elsewhere– have made, and it’s a choice we can un-make if we really want to. It requires a fundamental shift in mindset, though, away from the notion of incompressible scholarship, to a recognition that anything one group of humans find interesting enough to be worth doing can and should be made interesting to a wide range of other humans. And that this is something worth celebrating and rewarding.

What Scientists Should Learn From Economists

Right around the time I shut things down for the long holiday weekend, the Washington Post ran this Joel Achenbach piece on mistakes in science. Achenbach’s article was prompted in part by the ongoing discussion of the significance (or lack thereof) of the BICEP2 results, which included probably the most re-shared pieces of last week in the physics blogosphere, a pair of interviews with a BICEP2 researcher and a prominent skeptic. This, in turn, led to a lot of very predictable criticism of the BICEP2 team for over-hyping their results, and a bunch of social-media handwringing about how the whole business is a disaster for Science as a whole, because if one high-profile result is wrong, that will be used to argue that everything is wrong by quacks and cranks and professional climate-change deniers.

This happens with depressing regularity, of course, and it’s pretty ridiculous. The idea that climate-change denial gains materially from something as obscure as the BICEP2 business is just silly. They don’t need real examples, let alone examples of arcane failures that even a lot of professional physicists can’t explain. We had a conversation at lunch a week or so after the initial announcement, and none of the faculty in the department could manage a good explanation of why the polarization pattern they saw would have anything to do with gravitational waves. There was some vague mumbling about how we should see if we can get somebody here to give a talk about this, and that was about it. If tenured faculty in physics and astronomy take a shrug-and-move-on approach to the whole business, it’s not likely to make much of an impression on the general public; certainly not enough to be politically useful.

People profess doubt about climate science not because of any rational evidence about the fallibility of science, but because it’s in their interests to do so. A handful of them are extremely well compensated for this belief, while for many others it’s an expression of a kind of tribal identity that brings other benefits. It’s conceivable, barely, that a “Scientists can’t even properly account for the effects of foreground dust on cosmic microwave background polarization” line might turn up in some grand litany of claims about why you can’t trust the scientific consensus, but it’s going to be wayyyyy down the list. They’re perfectly happy to run with much splashier and far stupider claims that have nothing to do with the physics of the Big Bang. (Which a non-trivial fraction of their supporters probably regard as heretical nonsense, anyway…)

I’m not even sold on the complaints about “hype,” and particularly the notion that somehow the BICEP2 results and possible implications should have been kept away from the public until after the whole peer review process had run its course. For one thing, that’s not remotely possible in the modern media environment. Even if the BICEP2 folks had refrained from talking up their result, posting a preprint to the arxiv (as is standard practice these days) would’ve triggered a huge amount of excitement anyway, because there are people out there who know what these results would mean, and they have blogs, and Twitter accounts. This isn’t something that you’re going to just slide under the radar, and if there’s going to be excitement anyway, you might as well ride it as far as it will take you.

(Really, the fact that there’s any market at all for hype about cosmology ought to be viewed as a Good Thing. It means people care enough about the field to be interested in hot-off-the-telescope preliminary results, which isn’t true of every field of science.)

And I don’t think the BICEP2 people have done anything underhanded, or behaved especially like hype-mongers. Confronted with issues concerning their data analysis, they quite properly revised their claims before the final publication. A real fraud or faker would double down at this point, but they’ve behaved in an appropriate manner throughout.

Most importantly, though, as Achenbach notes, science is a human enterprise, and is every bit as prone to error and misinterpretation as anything else hairless plains apes get up to. (In fact, as I argue at book length, this is largely because all of those enterprises use the same basic mental toolkit…)

All those other enterprises, though, seem to have come to terms with the fact that there are going to be mis-steps along the way, while scientists continue to bemoan every little thing that goes awry. And keep in mind, this is true of fields where mistakes are vastly more consequential than in cosmology. We’re only a week or so into July, so you can still hear echoes of chatter about the various economic reports that come out in late June– quarterly growth numbers, mid-year financial statements, the monthly unemployment report. These are released, and for a few days suck up all the oxygen in discussion of politics and policy, often driving dramatic calls for change in one direction or another.

But here’s the most important thing about those reports: They’re all wrong. Well, nearly all– every now and then, you hit a set of figures that actually hold up, but for the most part, the economic data that are released with a huge splash every month and every quarter are wrong. They’re hastily assembled preliminary numbers, and the next set of numbers will include revisions to the previous several sets. It’s highly flawed provisional data at best, subject to revisions that not infrequently turn out to completely reverse the narrative you might’ve seen imposed on the original numbers.

Somehow, though, the entire Policy-Pundit Complex keeps chugging along. People take this stuff in stride, for the most part, and during periods when we happen to have a functional government, they use these provisional numbers more or less as they’re supposed to be used. Which is what has to happen– you can’t wait until you have solid, reliable numbers from an economic perspective, because that takes around a year of revisions and updates, by which time the actual situation has probably changed. What would’ve been an appropriate policy a year ago might be completely wrong by the time the numbers are fully reliable. So if you’re in a position to make economic policy, you work with what you’ve got.

And everyone accepts this. You won’t find (many) economists bemoaning the fact that the constant revising of unemployment reports makes the profession as a whole look bad, or undercuts their reputation with the general public. They know how things work, policy-makers know how things work, and everyone gets on with what they need to do. And, yeah, every report gets some political hype, blasting the President/Congress for failing to do something or another, but every round of these stories will include at least a few comments of the form “Yeah, this looks bad, but these are preliminary numbers, and we’ll see how they look a few months from now.”

So, this is the rare case where scientists need to act more like economists. Mistakes and overhype are an inevitable part of any human-run process, and we need to stop complaining about them and get on with what we need to do. If people still trust economists after umpteen years of shifting forecasts, science will weather BICEP2 just fine.

On Black Magic in Physics

The latest in a long series of articles making me glad I don’t work in psychology was this piece about replication in the Guardian. This spins off some harsh criticism of replication studies and a call for an official policy requiring consultation with the original authors of a study that you’re attempting to replicate. The reason given is that psychology is so complicated that there’s no way to capture all the relevant details in a published methods section, so failed replications are likely to happen because some crucial detail was omitted in the follow-up study.

Predictably enough, this kind of thing leads to a lot of eye-rolling from physicists, which takes up most of the column. And, while I have some sympathy for the idea that studying human psychology is a subtle and complicated process, I also can’t help thinking that if the font in which a question is printed is sufficient to skew the result of a study one way or the other, then maybe these results aren’t really revealing deep and robust truths about the way our brains work. Rather than demanding that new studies duplicate the prior studies in every single detail, a better policy might be to require some variation of things that ought to be insignificant, to make sure that the results really do hold in a general way.

If you go to precision measurement talks in physics (and I went to a fair number at DAMOP this year), there will inevitably be a slide listing all the experimental parameters that they flipped between different values. Many of these are things that you look at and say “Well, how could that make any difference?” and that’s the point. If changing something trivial– the position of the elevator in the physics building, say– makes your signal change in a consistent way, odds are that your signal isn’t really a signal, but a weird noise effect. In which case, you have some more work to do, to track down the confounding source of noise.

Of course, that’s much easier to do in physics than psychology– physics apparatus is complicated and expensive, but once you have it, atoms are cheap and you can run your experiment over and over and over again. Human subjects, on the other hand, are a giant pain in the ass– not only do you need to do paperwork to get permission to work with them, but they’re hard to find, and many of them expect to be compensated for their time. And it’s hard to get them to come in to the lab at four in the morning so you can keep your experiment running around the clock.

This is why the standards for significance are so strikingly different between the fields– psychologists (and biomedical researchers) are thrilled to see results that are significant at the 1% level, while in many fields of physics, that’s viewed as a tantalizing hint, and a sign that much more work is required. But getting enough subjects to hit even the 3-sigma level at which physicists become guardedly optimistic would quickly push the budget for your psych experiment to LHC levels. And if you’d like those subjects to come from outside the WEIRD (Western, Educated, Industrialized, Rich, Democratic) demographic, well…
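For a sense of scale, here’s the translation between the sigma language physicists use and the p-values more common elsewhere, assuming a simple Gaussian distribution:

```python
from scipy.stats import norm

# Two-sided probability of a Gaussian fluctuation at least this many sigma:
for sigma in (1, 2, 3, 5):
    print(f"{sigma} sigma: p = {2 * norm.sf(sigma):.2g}")

# And going the other way: the sigma level matching a p = 0.01 cutoff.
print("p = 0.01 corresponds to about", round(norm.isf(0.01 / 2), 2), "sigma")
```

A 1% threshold sits between two and three sigma; five sigma corresponds to odds of a fluke of better than one in a million.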

At the same time, though, physicists shouldn’t get too carried away. From some of the quotes in that Guardian article, you’d think that experimental methods sections in physics papers are some Platonic ideal of clarity and completeness, which I find really amusing in light of a conversation I had at DAMOP. I was talking to someone I worked with many years ago, who mentioned that his lab recently started using a frequency comb to stabilize a wide range of different laser frequencies to a common reference. I asked how that was going, and he said “You know, there’s a whole lot of stuff they don’t tell you about those stupid things. They’re a lot harder to use than it sounds when you hear Jun Ye talk.”

That’s true of a lot of technologies, as anyone who’s tried to set up an experimental physics lab from scratch learns very quickly. Published procedure sections aren’t incomplete in the sense of leaving out critically important steps, but they certainly gloss over a lot of little details.

There are little quirks of particular atoms that complicate some simple processes– I struggled for a long time with getting a simple saturated absorption lock going in a krypton vapor cell, because the state I’m interested in turns out to have hellishly large problems with pressure broadening. That’s fixable, but not really published anywhere obvious– I worked it out on my own before I talked to a colleague who did the same thing, and he said “Oh, yeah, that was a pain in the ass…”

There are odd features of certain technologies that crop up– the frequency comb issue that my colleague mentioned at DAMOP was a dependence on one parameter that turns out to be sinusoidal. Which means it’s impossible to automatically stabilize, but requires regular human intervention. After asking around, he discovered that the big comb-using labs tend to have one post-doc or staff scientist whose entire job is keeping the comb tweaked up and running properly, something you wouldn’t really get from published papers or conference talks.

And there are sometimes issues with sourcing things– back in the early days of BEC experiments, the Ketterle lab pioneered a new imaging technique, which required a particular optical element. They spent a very long time tracking down a company that could make the necessary part, and once they got it, it worked brilliantly. Their published papers were scrupulously complete in terms of giving the specifications of the element in question and how it worked in their system, but they didn’t give out the name of the company that made it for them. Which meant that anybody who had the ability to make that piece had all the information they needed to do the same imaging technique, but anybody without the ability to build it in-house had to go through the same long process of tracking down the right company to get one.

So, I wouldn’t say that experimental physics is totally lacking in black magic elements, particularly in small-lab fields like AMO physics. (Experimental particle physics and astrophysics are probably a little better, as they’re sharing a single apparatus with hundreds or thousands of collaboration members.)

The difference is less in the purity of the approach to disseminating procedures than in the attitude toward the idea of replication. And, as noted above, the practicalities of working with the respective subjects. Physics experiments are susceptible to lots of external confounding factors, but working with inanimate objects makes it a lot easier to repeat the experiment enough times to run those down in a convincing way. Which, in turn, makes it a little less likely for a result that’s really just a spurious noise effect to get into the literature, and thus get to the stage where people feel that failed replications are challenging their professional standing and personal integrity.

It’s not impossible, though– there have even been retractions of particles that were claimed to be detected at the five-sigma level. And sometimes there are debates that drag on for years, and can involve some nasty personal sniping along the way.

The really interesting recent(-ish) physics case that ought to be a big part of a discussion of replication in physics and other sciences is the story of “supersolid” helium, where a new and dramatic quantum effect was claimed, then challenged in ways that led to some heated arguments. Eventually, the original discoverers re-did their experiments, and the effect vanished, strongly suggesting it was a noise effect all along. That’s kind of embarrassing for them, but on the other hand, it speaks very well to their integrity and professionalism, and is the kind of thing scientists in general ought to strive to emulate. My sense is that it’s also more the exception than the rule, even within physics.

“Earthing” Is a Bunch of Crap

A little while back, I was put in touch with a Wall Street Journal writer who was looking into a new-ish health fad called “earthing,” which involves people sleeping on special grounded mats and that sort of thing. The basis of this particular bit of quackery is the notion that spending time indoors, out of contact with the ground, allows us to pick up a net positive charge relative to the Earth, and this has negative health consequences. Walking barefoot on the ground, or sleeping on a pad that is electrically connected to ground via your house’s wiring, allows you to replace your lost electrons with electrons from the Earth, curing all manner of ills.

I’m quoted briefly in a column about this, but in preparing to talk about the physics, I drew up a more extensive list of reasons why this “earthing” business is a bunch of crap than could really fit in a single column. Luckily, I have this blog where I can post this sort of thing; thus, a collection of physics reasons why this fad is nonsense.

Of course, like most health fads, there’s a tiny grain of truth at the center of a giant ball of crap. It is, in fact, perfectly true that we can build up a potential difference between our bodies and the Earth, due to brushing against materials that tend to grab electrons. It’s also true that contact with the Earth, or with grounded conductors, will equalize the potential by allowing electrons to flow between your body and the Earth. There’s nothing particularly wrong about those two statements; it’s just, you know, everything else that follows after them. They’re true, but basically meaningless, for the following reasons, among others:

1) Electrons are electrons. The sites I looked at are full of talk about “beneficial electrons from the Earth” and that sort of thing, which is garbled nonsense. Electrons are electrons are electrons– there’s nothing that singles out or sets apart an electron from the Earth as opposed to from some synthetic material, or, for that matter, an electron that came blasting in from outer space.

How do we know this? Basically because chemistry works. The Periodic Table of the elements is set up the way it is because of the arrangement of electrons within atoms– as you increase the number of electrons in a given atom, you “fill up” energy states, with the last electron added going into a particular state that determines the binding properties of the atom in question. That “filling up” is a consequence of the Pauli Exclusion Principle, which says that no two electrons can be found in exactly the same state. The Exclusion Principle, in turn, is a consequence of the indistinguishability of electrons– two electrons aren’t allowed to occupy the same state because electrons are perfectly indistinguishable, and that puts some constraints on their properties.

If there were a difference between electrons from the Earth and from other sources, then chemistry would be a mess. Electrons that originated in the ground would “fill up” one set of states, while electrons that originated somewhere else would “fill up” a different set of states. You’d end up with carbon atoms that could only form two chemical bonds, instead of the usual four, or solids that ought to be conductors but act as insulators, and all sorts of other screwy results. We don’t see those things, which means that all electrons are truly identical to a very high degree.
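
Just to make the “filling up” bookkeeping concrete, here’s a minimal Python sketch of my own (not anything from the earthing sites, obviously) using the standard subshell capacities. Carbon’s six electrons end up as 1s² 2s² 2p², with four of them in the outer shell, which is the usual four-bond chemistry:

```python
# Minimal sketch: standard subshell filling order and capacities, enough
# to cover the first couple of rows of the periodic table. Illustrative
# only; real atoms have exceptions to this strict filling order.
subshells = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6)]

def fill(n_electrons):
    """Assign n_electrons to subshells in order, returning the occupancy."""
    config, remaining = [], n_electrons
    for name, capacity in subshells:
        if remaining <= 0:
            break
        put = min(capacity, remaining)
        config.append((name, put))
        remaining -= put
    return config

# Carbon has 6 electrons: four of them land in the n=2 shell (2s and 2p),
# which is why carbon forms four bonds. If "Earth electrons" filled one set
# of states and other electrons filled another, this bookkeeping would break.
print(fill(6))   # [('1s', 2), ('2s', 2), ('2p', 2)]
```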

2) Potential Differences Are Transient and Meaningless. Shuffle your feet across a carpet, and then touch a doorknob. Feel a spark? Congratulations, you’ve established a significant potential difference between yourself and the Earth, and then eliminated it.

It’s perfectly true that ordinary interactions with many materials will strip electrons off your body. But that never lasts all that long, as the doorknob-spark illustrates. In the process of shuffling across a carpeted floor, you lose (or gain, depending on the materials involved) several billion electrons, but as soon as you touch a metal object, you get them all back (or give them all up).
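
To put “several billion electrons” in perspective, here’s the quick arithmetic (the 5 billion figure is just my stand-in for “several billion,” not a measured value):

```python
# Rough charge carried by the electrons you lose shuffling across a carpet.
e = 1.6e-19          # electron charge, in coulombs
n_electrons = 5e9    # stand-in for "several billion"
charge = n_electrons * e
print(f"{charge:.1e} C")   # about 8e-10 C, i.e. roughly a nanocoulomb
```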

It’s simply not possible to build up and maintain a significant charge imbalance between your body and the rest of the world, because everything we interact with contains electrons, and they move back and forth between objects all the time. If nothing else, the charge on an object will eventually dissipate into the air– back when I was doing sticky tape experiments, I had to periodically recharge the tapes, because the charge goes away over time. A net positive charge will attract negative ions from the air, and eventually neutralize, and the same thing happens to your body.

3) The Potential Measurements Made by “Earthing” Advocates Are Worthless. One of the many sites out there promoting this stuff (I’m not going to dignify them with a link) suggests that you can demonstrate the severity of the problem by getting a voltmeter from Radio Shack and putting one of the leads into the ground socket of an electrical outlet in your house. Then touch the other to your body, and you’ll see a voltage reading that’s a measure of how much your potential differs from that of the Earth.

That seems very convincing, as long as you’ve never been an easily bored physics major (as if there’s any other kind). I was an easily bored physics major, though, so I’ve played with voltmeters lots of times in the past, and tried measuring the potential difference between myself and lots of things. As a result, I know that these results are gibberish.

But, just to be fair, I did what they suggested, and plugged the meter into the ground socket, then touched my thumb with the other lead. I measured a potential difference that fluctuated a bit, between about 0.03V and 0.15V. I then took the lead out of the electrical socket, and touched both leads to my left thumb, about an inch apart. Where I measured a potential difference that fluctuated between 0.03V and 0.15V. Those measurements are basically meaningless– it’s noise in the meter, fluctuating local fields, and other garbage effects.

Their literature talked about potential differences of multiple volts, which I didn’t see, but which I have occasionally managed to produce while screwing around with voltmeters in the past. But even that is completely insignificant– if you shuffle your feet across the rug and then throw a spark touching a metal object, the potential difference between you and the metal thing was probably around 1,000V. It takes an electric field around 1,000,000 V/m to make a spark in air (give or take a factor of ten or so; the textbook we used to use had a long discussion of sparks), and a typical spark from everyday static electricity will jump around a millimeter. So, a potential difference of 1,000V across that gap will get the job done.
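
For the record, here’s that back-of-the-envelope spark estimate as a quick calculation, using the rough numbers above:

```python
# Doorknob-spark estimate: breakdown field of roughly a million volts per
# meter (give or take a factor of ten), jumping a gap of about a millimeter.
breakdown_field = 1e6   # V/m, rough figure quoted above
gap = 1e-3              # m, typical static-electricity spark length
voltage = breakdown_field * gap
print(f"{voltage:.0f} V")   # 1000 V; hundreds to thousands of volts given the slop
```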

You build up and discharge potential differences of hundreds of volts all the time, without particularly noticing.

4) Their Own Safety Devices Undermine Their Claims. The literature I looked at reassured potential customers that there was no danger of electric shock from using their products, because the cord used to connect the “Earthing” mats to the ground of your house’s electrical system contains a 100,000 Ω resistor as a precaution. That made me bust up laughing.

Why? Because the definition of a resistor is that it resists the flow of current. Which means it will impede the flow of harmful current from a faulty appliance of some sort, true, but it will also act to impede the flow of beneficial electrons from the Earth by exactly the same amount.

How big a difference are we talking? Well, the connection between the ground plug of your electrical system and actual ground (generally either a metal spike driven into the ground, or something like a metal water pipe coming into the house) should have a resistance of a few ohms (see, for example, this discussion). So if you drop a 100,000 Ω resistor in there, you’re increasing the total resistance by a factor in the tens of thousands (call it 100,000 to keep the numbers round), which reduces the rate at which electrons flow in from the Earth by the same factor.
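
If you want to see the numbers, here’s the quick version, with 5 Ω standing in for “a few ohms” (my assumption, not a measured value):

```python
# How much the 100,000-ohm "safety" resistor cuts the current for a given
# potential difference, compared to a bare ground connection.
R_ground = 5.0          # ohms: "a few ohms" for a decent house ground (assumed)
R_safety = 100_000.0    # ohms: the resistor in the earthing-mat cord
suppression = (R_ground + R_safety) / R_ground
print(f"current reduced by a factor of roughly {suppression:,.0f}")
# -> roughly 20,000 with these numbers; anywhere from ~10^4 to ~10^5
#    depending on what "a few ohms" actually is
```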

What’s that mean? Well, in order to get the same health benefit as one second of electron flow between you and the Earth due to direct contact– standing barefoot on the ground, for example– you would need to spend 100,000 seconds in contact with their mat. 100,000 seconds is about 27 hours, a bit more than a day. Their literature talks about the health benefits of multiple hours spent “Earthing” yourself, which would require hundreds of thousands of hours on the protected mat, and 100,000 hours is over 11 years.
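
And the unit conversions behind that, using the round factor of 100,000 and taking three hours as a stand-in for “multiple hours” (an assumption on my part):

```python
factor = 100_000                 # round suppression factor from above
seconds_on_mat = factor * 1      # mat time equivalent to 1 s of barefoot contact
print(seconds_on_mat / 3600)     # ~27.8 hours, a bit more than a day

hours_claimed = 3                # stand-in for "multiple hours" of Earthing
mat_hours = hours_claimed * factor
print(mat_hours / (24 * 365))    # ~34 years; 100,000 hours alone is ~11.4 years
```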

And that, right there, ought to be enough to, well, bury this whole silly idea. Their “safety” precaution should obliterate the effectiveness of their devices. The fact that they advertise this as a positive feature indicates just how little real physics there is at work here.