Basic Concepts: Energy

Having talked about force and fields, it seems fairly natural to move on to energy next. Of course, it also would’ve made sense to talk about energy first, and then fields and forces. These are interlocking concepts.

A concise one-sentence definition of energy might go something like:

The energy content of an object is a measure of its ability to change its own motion, or the motion of another object.

That’s a little longer than the previous one-sentence descriptions, but I’m trying to avoid the recursion effect of the usual one-sentence definition, “Energy is ability to do work,” which then requires a definition of work, which then requires a definition of force, and pretty soon you’re playing six degrees of Wikipedia. Anyway, I think that the above captures the essence of energy without introducing new words requiring definition.

An object can have energy because it is moving, and it can have energy because it is stationary in a place where some interaction is likely to cause it to move. Massive objects have energy simply by virtue of having mass, and objects at any temperature above absolute zero have energy because of the random thermal motion of their particles. All of these forms of energy can be used to set a stationary object into motion, or to stop or deflect an object that is moving.

There are two basic types of energy associated with objects that are already in motion. The first is kinetic energy, which is the energy contained in an object with some mass either moving through space, or sitting in one place and spinning. A baseball flying through the air has kinetic energy, and the spinning wheel of a stationary bike has rotational kinetic energy.
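If it helps to see the formulas in action, here’s a quick sketch in Python. The numbers are illustrative guesses on my part: a roughly fastball-ish mass and speed for the baseball, and an invented moment of inertia for the wheel.

```python
# Kinetic energy of a moving object and of a spinning object.
# Numbers are illustrative: a rough fastball, and a guessed-at bike wheel.
from math import pi

def kinetic_energy(mass, speed):
    """Translational kinetic energy, (1/2) m v^2, in joules."""
    return 0.5 * mass * speed**2

def rotational_energy(moment_of_inertia, angular_speed):
    """Rotational kinetic energy, (1/2) I w^2, in joules."""
    return 0.5 * moment_of_inertia * angular_speed**2

baseball = kinetic_energy(mass=0.145, speed=40.0)    # ~0.145 kg at ~40 m/s
wheel = rotational_energy(moment_of_inertia=0.1,     # kg m^2, a guess
                          angular_speed=2 * pi * 5)  # 5 revolutions per second

print(f"baseball: {baseball:.0f} J, spinning wheel: {wheel:.0f} J")
```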

There’s also energy associated with the random motion of the particles making up a macroscopic object, generally referred to as thermal energy, or possibly heat energy (which is arguably redundant, but whatever). This is really the sum of all of the kinetic energy of the individual atoms and molecules making up the object, but it behaves a little differently than the kinetic energy associated with an object whose center of mass is moving through space, so it gets a different name. It may not seem obvious that thermal energy can be used to change the motion of objects, but that’s essentially how a steam generating plant works– the thermal energy in a boiler converts water to steam, which turns a turbine, and generates electricity. Thermal energy turns into kinetic energy, which turns into electrical energy.
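To put a rough number on that “sum of all of the kinetic energy” idea, here’s a sketch using the standard equipartition result for a monatomic ideal gas, in which each atom carries an average kinetic energy of (3/2)kT. The choice of one mole of gas at room temperature is just for illustration.

```python
# Thermal energy as the sum of many tiny kinetic energies.
# Uses the standard equipartition result for a monatomic ideal gas:
# average kinetic energy per atom = (3/2) k_B T.

K_B = 1.380649e-23       # Boltzmann's constant, J/K
AVOGADRO = 6.02214076e23 # atoms per mole

def thermal_energy(n_moles, temperature_kelvin):
    """Total kinetic energy of n_moles of a monatomic ideal gas, in joules."""
    n_atoms = n_moles * AVOGADRO
    return n_atoms * 1.5 * K_B * temperature_kelvin

# One mole of gas at room temperature: a few kilojoules of energy
# hiding in the random jiggling of atoms.
print(f"{thermal_energy(1, 300):.0f} J")
```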

Objects that are not actually moving can have the potential to start moving, which we describe in terms of potential energy. A heavy object on a high shelf has potential energy: it’s not actually moving, but it has the potential to acquire a substantial amount of kinetic energy when you bump the shelf and it falls on your foot. Two charged objects held close to one another have potential energy: when you release them, they’ll either rush together, or fly apart, depending on what their charges are.
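Here’s a sketch of both of those examples in Python. The masses, heights, and charges are invented for illustration; the constants are the standard ones.

```python
# Two kinds of potential energy from the examples above.
# Object masses, heights, and charges are illustrative, not measurements.

G_FIELD = 9.8       # gravitational acceleration near Earth's surface, m/s^2
K_COULOMB = 8.99e9  # Coulomb constant, N m^2 / C^2

def gravitational_pe(mass, height):
    """U = m g h: energy stored by lifting a mass onto a shelf."""
    return mass * G_FIELD * height

def coulomb_pe(q1, q2, separation):
    """U = k q1 q2 / r: positive for like charges (they'll fly apart),
    negative for opposite charges (they'll rush together)."""
    return K_COULOMB * q1 * q2 / separation

print(gravitational_pe(5.0, 2.0))     # 5 kg object on a 2 m shelf: 98 J
print(coulomb_pe(1e-6, -1e-6, 0.01))  # opposite microcoulomb charges, 1 cm apart
```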

Energy is an essential concept in physics because of energy conservation, which does not mean turning off the heat when you’re not at home. The law of conservation of energy says that the total energy (all forms) of a system of interacting objects is a constant, provided that nothing outside the system under consideration has a significant interaction with the objects in the system. The energy can shift from one form to another, but is neither created nor destroyed.

For example, the total energy of a system consisting of a mass hanging at the end of a string and the Earth exerting a gravitational force on the mass is a constant. As the mass swings back and forth as a pendulum, the energy changes from potential to kinetic and back, but the total remains the same. If I reach in and push the mass, though, that’s an interaction from outside the system, and the energy of the pendulum-Earth system changes as a result (the energy of the larger pendulum-Earth-me system stays the same, though– the energy I add to the motion of the pendulum comes out of the chemical energy stored in the katsudon I had for dinner).
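You can watch this trade-off happen numerically. The sketch below integrates the pendulum’s swing (with made-up values for the mass, string length, and starting angle) and prints the kinetic, potential, and total energy as it goes; the pieces slosh back and forth while the total barely budges.

```python
# A numerical check on the pendulum-plus-Earth system: kinetic and
# potential energy trade off, but the total stays (very nearly) constant.
# Mass, length, and starting angle are made up. Semi-implicit Euler
# integration, which keeps the numerical energy drift small.
from math import cos, sin

g, L, m = 9.8, 1.0, 0.5  # gravity (m/s^2), string length (m), mass (kg)
theta, omega = 0.5, 0.0  # initial angle (rad) and angular velocity (rad/s)
dt = 0.001               # time step, s

for step in range(5001):
    if step % 1000 == 0:
        ke = 0.5 * m * (L * omega)**2      # kinetic energy
        pe = m * g * L * (1 - cos(theta))  # potential energy (zero at bottom)
        print(f"t={step*dt:3.1f}s  KE={ke:.4f} J  PE={pe:.4f} J  "
              f"total={ke + pe:.4f} J")
    omega += -(g / L) * sin(theta) * dt    # update velocity first...
    theta += omega * dt                    # ...then position (symplectic)
```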

Conservation of energy is one of the most important tools in physics because it can be used to reduce complicated physical situations to exercises in bookkeeping. If you know the total energy at the beginning of some problem, and can determine the energy of some of the objects at the end of the problem, you can find the rest of the energy by just subtracting the bit you know from the total you started with. A colleague of mine likes to make an analogy between energy and money, and I’ve stolen this from him for some intro classes: kinetic energy is like money in your pocket, potential energy is like money in the bank, and thermal energy is like money that you’ve spent. If you can balance a checkbook, you can do conservation of energy problems.
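In the spirit of the checkbook analogy, here’s what that bookkeeping looks like in practice, with invented numbers for a block sliding down a ramp with friction.

```python
# Energy bookkeeping in the checkbook spirit: start with a known total,
# subtract what you can measure, and whatever's left went somewhere else.
# A block slides down a ramp (made-up numbers); the missing energy is
# the "money spent" -- energy converted to thermal energy by friction.

g = 9.8
mass, height = 2.0, 3.0  # kg, m (invented)
speed_at_bottom = 6.0    # m/s, a hypothetical measurement

total_at_start = mass * g * height        # starts as all potential energy
kinetic_at_end = 0.5 * mass * speed_at_bottom**2
thermal = total_at_start - kinetic_at_end # the rest must be thermal

print(f"started with {total_at_start:.1f} J, ended with {kinetic_at_end:.1f} J "
      f"of kinetic energy, so friction ate {thermal:.1f} J")
```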

Conservation of energy is a bedrock principle of the universe. In a very fundamental way, it’s the result of the fact that the laws of physics are invariant in time– that is, that they’re the same now as they were in the past, and will be in the future. The connection between that symmetry and the conservation law goes by the name of “Noether’s Theorem,” and is beyond the scope of this discussion. Suffice it to say that no matter what scale of physics you’re working on, from subatomic to cosmological, energy will be conserved (albeit in an average sense, for certain classes of quantum problems, about which more later).

As a result, physicists like to talk about everything in terms of energy. We can describe forces in terms of the potential energy due to an interaction between two objects; in this picture, objects tend to move toward points of lower potential energy, and you can recover the force from the potential energy with a fairly trivial application of vector calculus (it’s the negative gradient). Above a certain fairly basic level of physics, most problems are described by writing down potential energy functions, or sketching potential energy curves– in fact, that’s really the only way to talk about interactions in quantum mechanics at the level where you’re dealing with whole atoms and molecules. (When you get down to the real details of quantum systems at extremely small scales, you sort of abandon the potential energy picture again– if you’re talking in terms of the exchange of force-carrying bosons, there’s no longer a potential energy to write down. But this, again, is beyond the scope of this discussion.)
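For the one-dimensional case, here’s a minimal numerical sketch of that “move toward lower potential energy” picture, using a made-up spring-like potential. The force comes out as the negative slope of the potential energy, always pointing toward the minimum.

```python
# "Objects move toward lower potential energy": the force is the negative
# slope (gradient) of the potential energy. One-dimensional sketch with
# an invented spring-like potential, differentiated numerically.

def potential(x):
    """U(x) = (1/2) k x^2, a mass on a spring with k = 4 N/m (made up)."""
    return 0.5 * 4.0 * x**2

def force(x, dx=1e-6):
    """F = -dU/dx, estimated with a centered difference."""
    return -(potential(x + dx) - potential(x - dx)) / (2 * dx)

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"x={x:+.1f} m  F={force(x):+.2f} N")  # always points toward x=0,
                                                 # the minimum of U
```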

The other slippery thing about energy is its equivalence to mass, expressed through Einstein’s famous equation:

E = mc²

Basically, when you start trying to work out the energy of an object moving at relativistic speeds, you find that it depends on the mass of the object (which is not surprising), and that it doesn’t go to zero as the speed goes to zero (which is). Instead, you find that a stationary object has an energy equal to its mass times the speed of light squared, which is a really huge number. We don’t really notice this under ordinary conditions, because the rest energy just provides a sort of constant offset, and we mostly look at changes in energy. Going from zero to one joule of kinetic energy has the same dynamical effect as going from one trillion joules of rest energy to one trillion joules of rest energy plus one joule of kinetic energy, so we mostly just ignore the rest energy in everyday circumstances.
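To see just how lopsided that comparison is, here’s the arithmetic for a baseball, using the same illustrative throw as before.

```python
# Rest energy vs. everyday kinetic energy: the "constant offset" in action.

C = 2.998e8                     # speed of light, m/s
mass = 0.145                    # a baseball, kg

rest_energy = mass * C**2       # E = m c^2
kinetic = 0.5 * mass * 40.0**2  # the same baseball thrown at ~40 m/s

print(f"rest energy: {rest_energy:.2e} J")  # ~1.3e16 J -- a really huge number
print(f"kinetic energy: {kinetic:.0f} J")   # ~116 J -- negligible beside it
```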

The equivalence of mass and energy has some interesting consequences, though. Small amounts of mass can be turned into large amounts of energy, which provides the basis for nuclear fusion– two protons and two neutrons have a tiny bit more mass than the nucleus of one helium atom. When they fuse together to make a helium nucleus in the core of a star, that extra mass is converted into energy, which keeps the star hot, and provides heat and light for any planets in the immediate neighborhood. The amount of mass converted into energy in a single fusion reaction is pretty small, but there are an awful lot of protons in the Sun, and if you add together the energy of enough fusion reactions you get, well, solar flares a hundred times the size of the Earth.
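Here’s that bookkeeping with standard particle masses. This skips over the intermediate steps of the real solar fusion chain and just compares the raw ingredients to the final helium nucleus.

```python
# Mass-to-energy bookkeeping for fusion: two protons plus two neutrons
# weigh slightly more than a helium-4 nucleus, and the difference comes
# out as energy. Standard particle masses, rounded.

C = 2.998e8              # speed of light, m/s
M_PROTON = 1.67262e-27   # kg
M_NEUTRON = 1.67493e-27  # kg
M_HELIUM4 = 6.64466e-27  # kg (bare nucleus)

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HELIUM4
energy = mass_defect * C**2

print(f"mass defect: {mass_defect:.2e} kg")    # ~5e-29 kg
print(f"energy released: {energy:.2e} J")      # ~4.5e-12 J per reaction
print(f"that's {energy / 1.602e-13:.0f} MeV")  # ~28 MeV
```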

The equivalence of mass and energy is also what lets us do particle physics. Not only can you take mass and convert it to energy, but you can take energy and convert it to mass. If you take a spring, and compress it in your hands, the mass of that spring increases by an infinitesimal amount due to the potential energy you’ve added to the spring. That’s not terribly interesting, or even detectable, but if you start with smaller masses, you can put this to use.
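To put a number on just how undetectable that is: here’s the spring calculation with an invented (and generous) figure for the stored energy.

```python
# How much mass does a compressed spring gain? delta-m = E / c^2.
# The stored energy here is an invented, generous figure.

C = 2.998e8           # speed of light, m/s
stored_energy = 10.0  # joules of potential energy, made up

mass_gain = stored_energy / C**2
print(f"mass gain: {mass_gain:.1e} kg")  # ~1e-16 kg: hopelessly undetectable
```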

If you take a subatomic particle– a proton, say– and accelerate it to very high speeds, close to the speed of light, it acquires a large amount of kinetic energy. If that proton then collides with another particle– an antiproton headed in the other direction, for example– that energy can be converted into matter, in the form of particles and anti-particles of all different types. Conservation of energy just tells us that the total energy of the proton-antiproton system has to remain the same– the initial kinetic energy can be converted into any other form of energy you might want, and mass is just a lumpy sort of energy. There are some rules– the mass tends to come in the form of particles paired with anti-particles, which can annihilate with one another and return to energy (in the form of high-energy photons, usually)– but the whole slew of mesons and baryons and leptons that you hear particle types nattering on about can be created in accelerator experiments, given enough energy.
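As a rough sketch of the bookkeeping, assuming an idealized head-on collision where all of the energy is available for making new particles (the 1 TeV beam energy is an illustrative, Tevatron-ish number, not a reference to any particular machine):

```python
# How much mass can a head-on collision create? For a symmetric
# proton-antiproton collision, all of the energy is available in the
# center-of-mass frame, so total energy in = maximum mass-energy out.

C = 2.998e8
EV = 1.602e-19                 # joules per electron-volt

beam_energy = 1e12 * EV        # each particle carries 1 TeV (illustrative)
total_energy = 2 * beam_energy # proton + antiproton, head-on

max_mass = total_energy / C**2
print(f"energy available: {total_energy:.2e} J")
print(f"maximum creatable mass: {max_mass:.2e} kg "
      f"(~{max_mass / 1.67262e-27:.0f} proton masses)")
```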

This is why high-energy physicists are always looking for bigger colliders, too. The larger the collider, the more kinetic energy is given to the particles being accelerated, and the higher the mass of the particles that can be created during the collision. Increasing the energy of a particle accelerator increases the number of possible particles it can make and detect, and opens the way to new physics.

The other cool thing about energy and energy conservation is that energy isn’t always conserved. Conservation of energy can be violated, as long as the violation doesn’t last very long. This is expressed in the energy-time uncertainty relation, which is the equation on the right in the banner to this blog.

The easiest way to understand energy-time uncertainty is to think about the energy carried by the electromagnetic field. We know from Planck and Einstein that the energy of a photon is determined by the oscillation frequency of the field associated with that photon– higher frequencies have more energy. If you want to measure the energy of a light field, then, what you’re really trying to do is to measure the frequency of oscillation of that field, and the best way to do that is to measure the time required for some large number of oscillations. The more oscillations you measure, the smaller the uncertainty in the frequency, and thus the energy. But then, the more oscillations you measure, the more time you spend doing the measurement, and the greater the uncertainty in exactly when you can be said to have made that measurement. If you do a fast measurement, you get a large uncertainty in the energy, and if you do a slow measurement, you get a small uncertainty in the energy.
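Here’s a toy version of that argument in Python, using the back-of-the-envelope rule that a measurement lasting a time T pins down the frequency to about 1/T. This is a rough scaling argument, not a rigorous derivation of the uncertainty principle.

```python
# The frequency-measurement version of energy-time uncertainty:
# watch N oscillations, and your frequency uncertainty is roughly
# 1/(measurement time), so the energy uncertainty shrinks as the
# measurement stretches out. Rough 1/T scaling only.

H = 6.626e-34  # Planck's constant, J s

def energy_uncertainty(frequency, n_oscillations):
    measurement_time = n_oscillations / frequency  # time spent measuring
    delta_f = 1.0 / measurement_time               # rough frequency uncertainty
    return measurement_time, H * delta_f           # delta-E = h * delta-f

f = 5e14  # visible light, ~500 THz
for n in (10, 1000, 100000):
    t, dE = energy_uncertainty(f, n)
    print(f"{n:6d} oscillations: t = {t:.1e} s, delta-E = {dE:.1e} J")
```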

At the quantum scale, this leads to the idea of "virtual particles." Particle-antiparticle pairs can pop into existence out of nowhere, as long as they go away very quickly– in a time less than Planck’s constant divided by the rest energy (give or take). An electron-positron pair can pop into existence for 10⁻²⁰ seconds or so, a proton-antiproton pair for about 10⁻²³ seconds, and a bunny-antibunny pair a whole lot less than that.
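Plugging into that rule of thumb (Planck’s constant divided by the rest energy, order-of-magnitude only, hence the “give or take”) gives numbers in the right ballpark:

```python
# Ballpark lifetimes for virtual pairs, using the rule of thumb above:
# delta-t ~ Planck's constant / rest energy. Order-of-magnitude estimates
# only; the exact prefactor depends on convention.

H = 6.626e-34           # Planck's constant, J s
C = 2.998e8             # speed of light, m/s
M_ELECTRON = 9.109e-31  # kg
M_PROTON = 1.673e-27    # kg

def virtual_lifetime(mass):
    """Rough lifetime of a virtual pair of particles with this mass."""
    return H / (mass * C**2)

print(f"electron-positron: ~{virtual_lifetime(M_ELECTRON):.0e} s")
print(f"proton-antiproton: ~{virtual_lifetime(M_PROTON):.0e} s")
```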

This might seem like an interesting curiosity with no practical consequences, given how short these times are, but that’s not the case. It means that, in a certain sense, empty space is actually positively boiling with particles popping in and out of existence. No one pair sticks around for all that long, but the instants during which they exist are enough to show some effects– an electron moving along through space is constantly being tugged on by interactions with virtual particles, and these interactions change the way that the electron interacts with an electromagnetic field. Which, in turn, leads to a very small change in the energy levels of a hydrogen atom, which we can measure with enough precision to clearly see the effect, which is called the “Lamb Shift.”

So, not only is energy useful when it’s conserved, we can see the effects when it isn’t, even though it only lasts a very short time. Amazing stuff, energy. And that’s pretty much everything I can think of to say about it.